Fibrolipoma of the tongue; a case report with literature review
Introduction Fibrolipoma is a less frequent variant of lipoma that is rarely reported in the oral cavity, especially in the tongue. This study aims to report a very rare case of tongue fibrolipoma. Case report A 53-year-old female presented with a painless mass at the anterior part of the tongue. The mass was soft with a smooth, regular border. The patient underwent wide local excision to remove the lesion, and the sample was sent for histopathological examination, which confirmed the diagnosis of a single fibrolipoma. Discussion Fibrolipoma is rare in the oral cavity; when it does occur there, it has been seen in the buccal mucosa, lips, buccal vestibule, floor of the mouth, and retromolar area. It has been proposed that disturbances in glucose and lipid metabolism, hormone therapy, and trauma can lead to the formation and proliferation of the tumor. Conclusion Fibrolipoma of the tongue is a rare occurrence. Surgical excision is the ideal management strategy, and histopathological examination is the gold standard for definitive diagnosis.
Introduction
Lipoma is a benign soft-tissue tumor of mesenchymal origin. It consists primarily of mature adipocytes and can therefore occur in any part of the body where fat is present. It is a common tumor that comprises 4-5% of all benign neoplasms in the body [1,2]. It is solitary and slow growing in nature, occurring most frequently in the upper trunk, abdomen, and shoulders, followed by the head and neck [3,4]. Based on morphological features, multiple histological variants of the tumor exist, including conventional lipoma, fibrolipoma, angiolipoma, myelolipoma, and spindle cell lipoma [5]. Fibrolipoma is considered the least frequent variant of lipoma, in which adipose tissue is embedded within dense collagen fibers [6]. Fibrolipoma, like other variants of lipoma, has rarely been reported in the oral cavity [1]. An even rarer phenomenon is the occurrence of fibrolipoma in the tongue, with only a few cases reported in the English literature [7].
This study aims to report a very rare case of tongue fibrolipoma, with a brief review of the literature. The report has been written in line with SCARE 2020 guidelines [8].
Case presentation
Patient information: A 53-year-old female presented with a painless mass at the anterior part of the tongue. The mass had been present since birth and had grown gradually over the last 2 years. She was a known case of diabetes mellitus and had previously undergone thyroid surgery. She was on insulin (10 IU twice daily) and thyroxine (100 µg once daily).
Clinical findings: There was a mobile, round mass (2 × 2 cm) located at the anterior part of the tongue, with a smooth surface and a regular outline.
Diagnostic approach: Laboratory workup showed a very low level of thyroid-stimulating hormone (TSH) (<0.005 µIU/ml) and a high level of free T3 (7.16 pmol/L) with normal free T4.
Therapeutic intervention: The patient underwent wide local excision, and the sample was sent for histopathological examination, which confirmed the diagnosis of fibrolipoma (Fig. 1).
Follow-up and outcome: The patient was discharged in good health on the first postoperative day. A 2-month follow-up showed no sign of recurrence.
Discussion
Lipomas are relatively common and most frequently occur in the trunk and extremities; they make up 13-20% of all head and neck tumors [5,9]. Fibrolipoma, a rarer variant of lipoma, has been infrequently reported in the oral cavity. When it does occur in this area, it can be observed in the buccal mucosa, lips, buccal vestibule, floor of the mouth, and retromolar area [1]. Lipoma of the oral cavity was first described by Roux et al. in 1848, and later, in 1858, Barling and colleagues reported the first case of tongue lipoma [10,11]. Throughout the literature, only 185 cases of tongue lipoma have been reported, of which only 16 were tongue fibrolipomas. This might be explained by the fact that the tongue lacks fat tissue [7,12].
The pathogenesis of fibrolipoma is not yet completely understood; however, it has been proposed that disturbances in glucose and lipid metabolism, hormone therapy, and trauma can lead to the formation and proliferation of the tumor [12]. Hence, an association between these cases and diabetes has been reported [13]. The current case was a diabetic patient who also had a history of thyroid surgery.
In general, lipoma most commonly occurs in adults over the age of 40 years with no gender predominance [14], although some studies have reported a higher incidence in males [15]. However, a slight female predominance has been observed in cases of fibrolipoma [16]. Oral fibrolipoma usually presents as a slow-growing, painless, well-defined, yellowish, superficial or submucosal mass of soft consistency that tends to be asymptomatic; hence, it is commonly found incidentally by dentists [1,14]. It may interfere with speaking, chewing, and swallowing [9]. In the current case, the mass did not interfere with speaking.
While imaging techniques and fine needle aspiration (FNA) can be used to determine the nature of the mass, they are not always required, as clinical examination can raise suspicion of the condition [12,14,17]. Imaging modalities were not required in this case. To confirm the diagnosis of the lipoma variant, histopathological examination is required, as other methods cannot provide a definitive diagnosis [64]. Under the microscope, fibrolipoma is made of mature adipocytes within lobules of dense collagen fibers; it can easily be distinguished from conventional lipoma by its more prominent fibrous connective tissue [6,12]. Similar findings were observed in the current case.
The standard management of fibrolipoma is surgical excision, and recurrence of the tumor is extremely rare [15]. However, resection can sometimes pose a challenge to the surgeon, and the lesion can be mistaken for carcinoma, as the tumor can attach to the surrounding tissue due to the fibrous nature of the neoplasm [7].
In conclusion, fibrolipoma of the tongue is an extremely rare condition. Surgical excision is the ideal management approach, with a very low recurrence rate. Although lipomas are easily diagnosed clinically, histopathological examination remains the gold standard for definitive diagnosis.
Patient consent
Consent was obtained from the patient and the patient's family.
Provenance and peer review
Not commissioned, externally peer reviewed.
Conflicts of interest
There are no conflicts of interest to declare.
Sources of funding
There are no funding sources to declare.
Ethical approval
Ethical approval is not required for case reports in our locality.
Registration of research studies
In line with previous recommendations, registration is not required for case reports.
Guarantor
Fahmi Hussein Kakamad is the guarantor of submission.
"year": 2021,
"sha1": "c6570c7a4d748e47218e979192db84b9542e1fb6",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.amsu.2021.102985",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1a3348bd049d448cf30c27c008b4c9a49a183d12",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
L-arginine and lisinopril supplementation protects against sodium fluoride–induced nephrotoxicity and hypertension by suppressing mineralocorticoid receptor and angiotensin-converting enzyme 3 activity
Sodium fluoride (NaF) is one of the neglected environmental toxicants that has continued to silently cause toxicity to both humans and animals. NaF is universally present in water, soil, and the atmosphere. The persistent and alarming rate of increase in cardiovascular and renal diseases caused by chemicals such as NaF in mammalian tissues has led to the use of various drugs for the treatment of these diseases. The present study aimed at evaluating the renoprotective and antihypertensive effects of L-arginine against NaF-induced nephrotoxicity. Thirty male Wistar rats (150-180 g) were used in this study. The rats were randomly divided into five groups of six rats each as follows: control, NaF (300 ppm), NaF + L-arginine (100 mg/kg), NaF + L-arginine (200 mg/kg), and NaF + lisinopril (10 mg/kg). Histopathological examination and immunohistochemistry of renal angiotensin-converting enzyme (ACE) and mineralocorticoid receptor (MCR) were performed. Markers of renal damage, oxidative stress, the antioxidant defense system, and blood pressure parameters were determined. L-arginine and lisinopril significantly (P < 0.05) ameliorated the hypertensive effects of NaF. The systolic, diastolic, and mean arterial blood pressures of the treated groups were significantly (P < 0.05) reduced compared with the hypertensive group. This finding was concurrent with a significantly increased serum bioavailability of nitric oxide in the hypertensive rats treated with L-arginine and lisinopril. There was also a significant reduction in the blood urea nitrogen and creatinine levels of hypertensive rats treated with L-arginine and lisinopril. There was a significant (P < 0.05) reduction in markers of oxidative stress, such as malondialdehyde and protein carbonyl, and a concurrent increase in the levels of antioxidant enzymes in the kidneys of hypertensive rats treated with L-arginine and lisinopril. The results of this study suggest that L-arginine and lisinopril normalized blood pressure, reduced oxidative stress, suppressed the expression of renal ACE and the mineralocorticoid receptor, and improved nitric oxide production. Thus, L-arginine holds promise as a potential therapy against hypertension and renal damage.
Introduction
Sodium fluoride (NaF) is one of the neglected environmental toxicants that has continued to cause toxicity to both humans and animals (Oyagbemi et al. 2021). NaF is universally present in water, soil, and the atmosphere (Oyagbemi et al. 2020). Human activity, including massive global industrialization and industrial and pharmaceutical products, has also contributed significantly to the presence of NaF in the environment (Irigoyen-Camacho et al. 2016; Choubisa and Choubisa 2016; Said et al. 2020). However, water-borne fluoride has been documented to represent the largest single component of daily fluoride intake (Catani et al. 2007; Molina-Frechero et al. 2012). Dental fluorosis typically results from excess fluoride ingestion during tooth formation (Aoba and Fejerskov 2002), and other parts of the tooth, such as the enamel and dentine, can be affected by fluorosis resulting from fluoride exposure during childhood (Akpata 2001; DenBesten and Li 2011).
L-arginine is one of the most metabolically versatile amino acids (Gad 2010). It participates in the synthesis of nitric oxide and serves as a precursor for the synthesis of polyamines, proline, glutamate, creatine, agmatine, and urea (Viribay et al. 2020). Several human and experimental animal studies have indicated that exogenous L-arginine intake has multiple beneficial biological and pharmacological effects (Pahlavani et al. 2017; Dumont et al. 2001). Meta-analyses provide further evidence that oral L-arginine supplementation significantly lowers both systolic and diastolic blood pressure (Viribay et al. 2020; Dong et al. 2011). Nitric oxide (NO) is a well-known vasodilator produced by the vascular endothelium via endothelial nitric oxide synthase (eNOS), a house-keeping enzyme. Inadequate production of NO has been linked to elevated blood pressure (BP) in both human and animal studies and might be due to substrate inaccessibility (Khalaf et al. 2019; Tsuboi et al. 2018). L-arginine administration has been demonstrated to improve endothelial function in various disease states (McRae 2016) and to improve risk factors of cardiovascular disease (CVD), as reported by Pahlavani et al. (2014). Interestingly, L-arginine supplementation has been documented to significantly lower diastolic blood pressure and prolong gestational age in pregnancy (Zhu et al. 2013). Another amino acid, L-citrulline, has been reported to improve vascular function through increased L-arginine bioavailability and NO synthesis (Figueroa et al. 2017). ACE inhibitors are medications used to treat and manage hypertension, which is a significant risk factor for coronary disease, heart failure, stroke, and a host of other cardiovascular conditions (Mall et al. 2021). Lisinopril is a non-sulfhydryl ACE inhibitor that lowers peripheral vascular resistance with a concomitant decrease in blood pressure (Mall et al. 2021). Lisinopril has been shown to reduce mortality and cardiovascular morbidity in patients with myocardial infarction when administered as early treatment (Wihandono et al. 2021). It produces a smooth, gradual BP reduction in hypertensive patients without affecting heart rate or cardiovascular reflexes (Wihandono et al. 2021), and has been reported to have antioxidant (Scisciola et al. 2022), nephroprotective, and cardioprotective properties (Ruggenenti 2017; Brown et al. 2021; Wihandono et al. 2021; Østergaard et al. 2021).
The present study elucidated the molecular mechanism of anti-hypertensive action of L-arginine in a toxicant-induced hypertensive and nephrotoxic rat model.
Experimental animals and design
Thirty male Wistar rats (150-180 g) were used in this study. The rats were acclimatized for 2 weeks before the commencement of the experiment and then randomly divided into five groups of six rats per group: control, NaF (300 ppm), NaF + L-arginine (100 mg/kg), NaF + L-arginine (200 mg/kg), and NaF + lisinopril (10 mg/kg), administered orally for 8 days. Drugs were administered daily. The concentration of NaF (Oyagbemi et al. 2021) and the dosages of L-arginine (Adejare et al. 2020) and lisinopril (Oyagbemi et al. 2021) were chosen based on the previous literature. The rats were kept in wire mesh cages under a controlled light cycle (12 h light/12 h dark), fed commercial rat chow ad libitum, and liberally supplied with water. Body weight and kidney weight were measured at the end of the experiment. Blood was collected on the 8th day, and the rats were euthanized on the 9th day.
Ethical approval
The study was conducted following guidelines approved by the Animal Care and Use Research Ethics Committee (ACUREC) of the University of Ibadan, with the approval number UIACUREC/19/124.
Blood pressure measurement
The systolic (SBP), diastolic (DBP), and mean arterial (MAP) blood pressures were determined non-invasively in conscious animals by tail plethysmography using an automated blood pressure monitor (CODA S1, Kent Scientific Corporation, Connecticut, USA). The blood pressure parameters were obtained by an indirect method of blood pressure measurement as recently reported in our laboratory (Oyagbemi et al. 2019).
Serum preparation
Blood was collected from the retro-orbital venous plexus into anticoagulant-free sample bottles. Serum was obtained from the whole blood after a post-collection waiting period of 60 min and thereafter kept at 4 °C.
Determination of serum markers of renal damage
Serum creatinine and blood urea nitrogen (BUN) were determined using Randox® kits (Randox® Laboratories, Ardmore, UK) following the manufacturer's instructions.
Preparation of renal post mitochondrial fractions (PMFs)
The kidneys were quickly excised, rinsed, weighed, and homogenized with homogenizing buffer (0.1 M phosphate buffer, pH 7.4) using a Teflon homogenizer. The homogenate was centrifuged at 10,000 × g for 10 min at −4 °C.
Estimation of renal oxidative stress
The malondialdehyde (MDA) content, as an index of lipid peroxidation, was quantified in the PMFs of renal tissues according to the method of Varshney and Kale (1990). The absorbance was measured against a blank of distilled water at 532 nm, and lipid peroxidation was calculated with a molar extinction coefficient of 1.56 × 10⁵ M⁻¹ cm⁻¹. Protein carbonyl (PCO) contents in the renal tissues were measured using the method of Reznick and Packer (1994). The absorbance of the sample was measured at 370 nm, and the carbonyl content was calculated based on the molar extinction coefficient of DNPH (2.2 × 10⁴ M⁻¹ cm⁻¹) and expressed as nmol/mg protein. Vitamin C contents were measured as described earlier (Jacques-Silva et al. 2001).
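For orientation, the sketch below shows how such assay concentrations follow from the Beer-Lambert law; the absorbance and protein values are hypothetical, and the helper names are ours, not from the cited methods.

```python
# Beer-Lambert law: A = epsilon * c * l  =>  c = A / (epsilon * l)
# Illustrative sketch of the MDA and protein-carbonyl calculations
# described above; absorbance and protein values are hypothetical.

EPSILON_MDA = 1.56e5   # M^-1 cm^-1, extinction coefficient of the MDA-TBA adduct
EPSILON_DNPH = 2.2e4   # M^-1 cm^-1, extinction coefficient of DNPH
PATH_LENGTH = 1.0      # cm, standard cuvette

def conc_nmol_per_ml(absorbance: float, epsilon: float, path_cm: float = PATH_LENGTH) -> float:
    """Concentration in nmol/ml (= umol/L) from absorbance."""
    molar = absorbance / (epsilon * path_cm)   # mol/L
    return molar * 1e6                         # umol/L == nmol/ml

a532, a370 = 0.250, 0.180          # hypothetical readings at 532 nm (MDA) and 370 nm (PCO)
protein_mg_per_ml = 2.5            # hypothetical, from the Biuret assay

mda = conc_nmol_per_ml(a532, EPSILON_MDA) / protein_mg_per_ml
pco = conc_nmol_per_ml(a370, EPSILON_DNPH) / protein_mg_per_ml
print(f"MDA: {mda:.3f} nmol/mg protein; PCO: {pco:.3f} nmol/mg protein")
```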
Renal antioxidant status
The superoxide dismutase (SOD) assay was carried out by the method of Misra and Fridovich (1972), with slight modification (Oyagbemi et al. 2015). The increase in absorbance at 480 nm was monitored every 30 s for 150 s. One unit of SOD activity was defined as the amount of SOD necessary to cause 50% inhibition of the oxidation of adrenaline to adrenochrome. Reduced glutathione (GSH) was estimated by the method of Jollow et al. (1974). Glutathione peroxidase (GPx) activity was measured according to Beutler et al. (1963). Glutathione S-transferase (GST) was estimated by the method of Habig et al. (1974) using 1-chloro-2,4-dinitrobenzene as substrate. Protein and non-protein thiol contents were determined as described by Ellman (1959).
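Since one unit of SOD is defined by 50% inhibition, unit calculation from reaction rates can be sketched as follows; the slopes are hypothetical, and the linear percent-to-unit conversion is a simplifying assumption:

```python
# Hypothetical sketch: SOD units from inhibition of adrenaline autoxidation.
# One unit = amount of enzyme giving 50% inhibition of the uninhibited rate.

def percent_inhibition(rate_blank: float, rate_sample: float) -> float:
    """Inhibition (%) of the adrenaline -> adrenochrome autoxidation rate."""
    return 100.0 * (rate_blank - rate_sample) / rate_blank

def sod_units(rate_blank: float, rate_sample: float) -> float:
    """Units of SOD, where 50% inhibition == 1 unit (linear approximation)."""
    return percent_inhibition(rate_blank, rate_sample) / 50.0

blank_rate, sample_rate = 0.056, 0.021   # hypothetical dA480 per 30 s slopes
print(f"{percent_inhibition(blank_rate, sample_rate):.1f}% inhibition "
      f"= {sod_units(blank_rate, sample_rate):.2f} units")
```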
Estimation of serum nitric oxide concentration and total protein
The serum nitric oxide concentrations were measured spectrophotometrically at 548 nm as previously described (Olaleye et al. 2007). Protein concentration was determined by the Biuret method of Gornall et al. (1949), using bovine serum albumin (BSA) as the standard.
Immunohistochemistry
Immunohistochemistry was performed as described by Oyagbemi et al. (2019). Renal ACE and mineralocorticoid receptor (MCR) were probed in the kidney using a 2-step plus Poly-HRP Anti-Mouse/Rabbit IgG Detection System with DAB solution (catalog number E-IR-R217, Elabscience Biotechnology®, China). The slides were dewaxed in xylene for 2 min and then hydrated in graded ethanol (100%, 90%, and 80%) for 2 min each. Antigen retrieval was performed, followed by endogenous peroxidase blocking. Goat serum (E-IR-R217A) was added to prevent nonspecific binding, and the tissues were probed with primary antibodies, namely an angiotensin-converting enzyme polyclonal antibody (E-AB-16159; 1:500 dilution) and an anti-mineralocorticoid receptor polyclonal antibody (E-AB-70261; 1:500 dilution). Thereafter, a secondary antibody (E-IR-R217B) was added, and the slides were incubated in a humidifying chamber at room temperature for 20 min. Finally, a few drops of the substrate diaminobenzidine (DAB) were added in the dark. The reaction was terminated with deionized water, and the slides were immersed in hematoxylin (Sigma-Aldrich, USA) for 3 s before rinsing with PBS. The slides were then placed in 80%, 90%, and 100% ethanol, followed by xylene (100%), for 2 min each. Slides were removed, allowed to dry, and mounted with DPX. Sections were observed with a light microscope (Leica LAS-EZ®) using the Leica software application suite version 3.4 equipped with a digital camera.
Statistical analysis
All values are expressed as mean ± S.D. The significance of differences between two groups was estimated by Student's t test at P < 0.05. One-way analysis of variance (ANOVA) with Tukey's post-hoc test was also carried out in GraphPad Prism 5.0, with P values < 0.05 considered statistically significant (Fleiss et al. 2003).
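As an illustration only, the one-way ANOVA with Tukey's post-hoc described above could be reproduced in Python (the authors used SAS and GraphPad Prism); the group names mirror the study design, but all data below are simulated:

```python
# Minimal re-creation of the described analysis in Python; data are simulated.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
groups = {
    "Control":        rng.normal(110, 8, 6),   # e.g., SBP (mmHg), n = 6 rats/group
    "NaF":            rng.normal(150, 8, 6),
    "NaF+Arg100":     rng.normal(130, 8, 6),
    "NaF+Arg200":     rng.normal(122, 8, 6),
    "NaF+Lisinopril": rng.normal(115, 8, 6),
}

# Omnibus test across the five groups
f, p = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f:.2f}, P = {p:.4f}")

# Tukey's HSD for all pairwise comparisons at alpha = 0.05
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```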
Effects of sodium fluoride intoxication on body weight and relative kidney weight
The results in Fig. 1 show a significant (P < 0.05) reduction in the relative body weight of rats intoxicated with NaF and of those co-administered either L-arginine or lisinopril. Similarly, there was a significant (P < 0.05) reduction in the relative kidney weight of rats administered NaF alone. However, L-arginine supplementation and lisinopril co-administration had a significant restorative effect, returning the relative kidney weight to near-normal values (Fig. 1).
Hemodynamic parameters
The blood pressure parameters measured in the present study indicated significant (P < 0.05) increases in the SBP, DBP, and MAP of rats intoxicated with NaF (Fig. 2). In contrast, there was a dose-dependent reduction in the SBP, DBP, and MAP of rats intoxicated with NaF and treated with L-arginine or lisinopril (Fig. 2). Lisinopril co-administration gave the better reduction in blood pressure parameters (Fig. 2).
Renal antioxidant defense system
As shown in Table 1, supplementation with 200 mg/kg of L-arginine or with lisinopril significantly improved renal GPx activity and GSH, PSH, and NPSH contents. Our results showed that NaF intoxication, however, significantly (P < 0.05) increased the activities of renal GST and SOD in comparison to the control (Table 1). Interestingly, there was no appreciable improvement in the renal content of vitamin C except in the rats administered lisinopril (Table 1). It is worth noting that treatment with lisinopril gave the better improvement in renal antioxidant defense systems (Table 1).
Markers of renal damage and oxidative stress
We also observed that intoxication with NaF caused a significant (P < 0.05) increase in serum BUN and creatinine values when compared to the control and to rats co-administered L-arginine (100 mg/kg and 200 mg/kg), as indicated in Fig. 3. The nephroprotective effect of L-arginine and lisinopril was demonstrated by a significant (P < 0.05) reduction in the serum levels of BUN and creatinine in comparison to the NaF-intoxicated group (Fig. 3).
In Fig. 4, renal MDA, the product of lipid peroxidation, increased significantly in NaF-intoxicated rats compared to the control group. There was a significant (P < 0.05) reduction in the MDA content of rats co-administered L-arginine or lisinopril when compared to rats treated with NaF alone (Fig. 4). Our data also revealed an exaggerated increase in PCO content in rats administered NaF alone in comparison to the control (Fig. 4). The free radical scavenging action of L-arginine and lisinopril was demonstrated by a significant (P < 0.05) reduction in renal PCO content when compared to NaF alone (Fig. 4). Also in Fig. 4, the administration of NaF caused a significant (P < 0.05) reduction in NO bioavailability relative to the control. Again, L-arginine supplementation caused a significant (P < 0.05) improvement in NO bioavailability, similar to that of lisinopril (Fig. 4).
Histopathology and immunohistochemistry
The histopathology of the kidney revealed mild tubular necrosis in rats intoxicated with NaF, while rats co-administered L-arginine showed minute tubular necrosis, and no visible lesion was observed in the lisinopril-treated group (Fig. 5). The renal immunohistochemistry of MCR revealed a higher expression of MCR in NaF-intoxicated rats relative to the control (Fig. 5). However, lower expression of MCR was observed in L-arginine- and lisinopril-treated rats when compared to the NaF-alone rats (Fig. 5). It is important to note that lower expression of MCR was recorded in rats that received 100 mg/kg of L-arginine relative to rats that received 200 mg/kg of L-arginine or lisinopril (Fig. 5).
In another experiment, our study revealed higher expression of ACE in renal tissues of rats intoxicated with NaF when compared to the control (Fig. 6). Interestingly, co-treatment with either L-arginine or lisinopril reduced the expression of ACE relative to the NaF-intoxicated rats (Fig. 6).
Discussion
This study showed that L-arginine and lisinopril ameliorated NaF-induced hypertension in male Wistar rats, as corroborated by a statistically significant reduction in the high SBP, DBP, and MAP across the treated groups when compared with the hypertensive untreated rats. Our findings also confirmed earlier reports on the toxicity of NaF to the cardiovascular system (Oyagbemi et al. 2021, 2018a, 2018b, 2018c, 2017; Omóbòwálé et al. 2018). Administration of NaF alone led to a significant decrease in serum NO bioavailability in the hypertensive group. However, rats in the treated groups (L-arginine or lisinopril) had a noticeable increase in NO availability. Reduced NO bioavailability has been reported to be involved in the pathogenesis of hypertensive conditions (Elmarakby and Sullivan 2021; Stamm et al. 2021; Travis et al. 2021) and other cardiac complications through the generation of ROS (Oyagbemi et al. 2021, 2017). L-arginine is a precursor for the synthesis of NO (Almannai and El-Hattab 2021; Ma 2021; Yaremchuk et al. 2021), and the NO produced by the vascular endothelium helps to maintain a continuous tone that is essential for the regulation of blood flow, blood pressure, platelet aggregation, and vasodilation (Umnyagina et al. 2021; Pautz et al. 2021). It was evident from our study that L-arginine and lisinopril significantly increased NO bioavailability and reversed the high blood pressure precipitated by NaF intoxication. We also observed that NaF intoxication caused a significant increase in blood urea nitrogen (BUN) and creatinine levels. Increases in BUN and creatinine have been associated with various degrees of renal injury (Chen et al. 2021a, b; Ni et al. 2021; Nasiruddin et al. 2020). The observed nephrotoxicity of NaF might be due to free radical generation and increased protein catabolism with concomitant systemic oxidative damage. This finding might also suggest that the extensive glomerular and tubular epithelial cell damage observed on histopathology is positively correlated with the exaggerated levels of BUN and creatinine. Treatment with L-arginine or lisinopril significantly attenuated these deleterious effects, reducing BUN and creatinine levels across the treated groups in comparison to the NaF-intoxicated group, which indicates the nephroprotective effect of L-arginine and lisinopril against NaF-induced nephrotoxicity. Our study is therefore in support of the nephroprotective effect of L-arginine against nephrotoxicity and hepatorenal damage (Saka et al. 2021; Abdelhalim et al. 2018). The use of functional foods and Cr-methionine against oxidative stress in animals has also been documented (Hoan et al. 2021; Bin-Jumah et al. 2020; Abdelnour et al. 2019).
The ability of L-arginine to mitigate oxidative stress in hypertensive rats was also demonstrated in the present study. Renal markers of oxidative stress, including generated hydrogen peroxide (H₂O₂), MDA, and PCO contents, increased significantly in NaF-induced hypertensive rats compared with the control. The exaggerated production of H₂O₂, a classic example of ROS reported during oxidative stress, causes damage to proteins, nucleic acids, and cell membranes and has been implicated in the development of several diseases (Yang et al. 2021; Yu et al. 2021). The generated H₂O₂ can react with the superoxide anion radical (O₂•⁻) to initiate the Haber-Weiss reaction, thereby producing the hydroxyl radical (•OH). It was exciting to observe a significant reduction in H₂O₂ content in rats co-administered L-arginine or lisinopril, an indication of the free radical scavenging action of these treatments.
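For reference, the Haber-Weiss chemistry invoked above can be written out with its iron-catalysed steps:

```latex
% Iron-catalysed Haber-Weiss reaction referenced in the text
\begin{align*}
\mathrm{Fe^{3+} + O_2^{\bullet-}} &\longrightarrow \mathrm{Fe^{2+} + O_2} \\
\mathrm{Fe^{2+} + H_2O_2} &\longrightarrow \mathrm{Fe^{3+} + OH^- + {}^{\bullet}OH}
  \quad \text{(Fenton step)} \\
\text{net: } \mathrm{O_2^{\bullet-} + H_2O_2} &\longrightarrow \mathrm{O_2 + OH^- + {}^{\bullet}OH}
\end{align*}
```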
Malondialdehyde (MDA) is one of the final products of the peroxidation of polyunsaturated fatty acids (PUFA) in the cell (Wang et al. 2021; Torun et al. 2009; Gawel et al. 2004). MDA is a toxic aldehyde that can initiate oxidative cellular damage in both target and non-target tissues (Morelli et al. 2021). In this study, NaF intoxication significantly increased renal MDA content; however, the anti-oxidative action of L-arginine and lisinopril was demonstrated by the reduction in this exaggerated production of renal MDA. The level of protein oxidation in tissues and plasma has been reported as a relatively stable marker of oxidative damage (Dayanand et al. 2012), and the pathogenesis and pathophysiology of many disease conditions have been associated with increased protein carbonyl content (Akinrinde et al. 2021; Marques et al. 2021; Ommati et al. 2021; Rodríguez-Sánchez et al. 2021). In this study, the protection offered by L-arginine against NaF-induced renal protein carbonylation might be associated with its antioxidant activity, which prevents protein oxidation. Protein carbonylation, one of the most harmful irreversible oxidative protein modifications, has been considered a major hallmark of oxidative stress-related disorders, including aging and several age-related disorders (Fedorova et al. 2014). From this study, we can propose that L-arginine could find application in the management of aging and age-related disorders involving protein oxidation and crosslinking.
Glutathione in its reduced form is an important intracellular antioxidant that protects against a variety of oxidant species (Masella et al. 2005). Its protective mechanisms against oxidative stress include serving as a substrate for detoxifying enzymes such as glutathione peroxidase, as well as directly scavenging hydroxyl radicals and singlet oxygen (Masella et al. 2005). Glutathione peroxidase (GPx) is a selenium-containing enzyme that catalyzes the detoxification of lipid hydroperoxides and hydrogen peroxide to water and oxygen (O₂). A reduction in GPx activity can lead to a concurrent increase in hydrogen peroxide with subsequent tissue damage (Farhat et al. 2018; Espinoza et al. 2008). Superoxide dismutase (SOD), on the other hand, catalyzes the dismutation of the superoxide anion radical to hydrogen peroxide (Pizzino et al. 2017).
Our data also showed a significant decrease in the activities and levels of enzymatic and non-enzymatic antioxidants such as GPx, SOD, reduced GSH, and vitamin C in the NaF-intoxicated hypertensive group, confirming the involvement of oxidative stress in the pathogenesis of hypertension. Treatment of the hypertensive rats with L-arginine at 100 mg/kg and 200 mg/kg brought about a significant improvement in the antioxidant defense system. However, the increase in GSH level in the renal tissues of the hypertensive rats treated with L-arginine was not significant, except in the group treated with 10 mg/kg lisinopril. The reduction in markers of oxidative stress and the concurrent increase in antioxidant enzymes might suggest an ability of L-arginine to scavenge free radicals and mitigate the oxidative stress associated with NaF toxicity.
The significant decrease in the activities of SOD and GPx in the hypertensive group may subsequently lead to increases in superoxide anion radical and H₂O₂ levels, thereby potentiating oxidative stress as a major factor in the progression of hypertension. The accumulation of the superoxide anion radical was also a sequel to the observed decrease in SOD activity. Increasing levels of the superoxide anion radical might enhance the uncoupling of eNOS with a resultant reduction in NO bioavailability. Furthermore, the superoxide anion radical is also capable of reacting with NO to form peroxynitrite, a cytotoxic signaling molecule (Wu et al. 2020; Hu et al. 2019). Thus, the observed increase in GPx activity in the kidney tissues is suggestive of the antioxidant and ameliorative roles of L-arginine against NaF toxicity.
The over-activation of MCR in animal models of chronic kidney disease (CKD) has been reported to play significant roles in the pathophysiology and pathogenesis of cardiorenal dysfunction, including inflammation and fibrosis in the kidneys and heart, increased sodium retention, and hypertension (Georgianos and Agarwal 2021). MCR antagonists have become novel therapeutic interventions to retard the progression of CKD, with attendant improvement in cardiovascular morbidity and mortality (Droebner et al. 2021; Kovarik et al. 2021; Patrono and Volpe 2021). Our study revealed an over-activation of MCR by NaF intoxication, recorded as a higher expression of renal MCR. This higher expression of MCR could be positively correlated with the exaggeratedly high blood pressure obtained in rats administered NaF alone. From our data, co-administration of L-arginine or lisinopril with NaF caused a reduction in the expression of MCR, which might be indicative of the renoprotective and antihypertensive actions of L-arginine and lisinopril, respectively. The amino acid L-arginine could thus find application in the management of toxicant-induced nephrotoxicity.
Recently, science has taken advantage of selectively inhibiting ACE as a therapeutic target for preventing CKD and better managing hypertension (Puspita et al. 2021; Bas 2021; Alves-Lopes et al. 2021; Chen et al. 2021a, b). In this study, we also investigated the renal immunolocalization of ACE following NaF intoxication. The immunohistochemistry revealed a higher expression of renal ACE in rats administered NaF relative to the control and to rats co-administered either L-arginine or lisinopril. The increased expression of ACE was similar to that of MCR, as stated above, suggesting that NaF nephrotoxicity might occur through over-activation of MCR and ACE signaling pathways. The over-activation of these pathways could be responsible for the nephrotoxicity and hypertension. The ability of L-arginine to block the activities of MCR and ACE could be maximized as a novel therapeutic agent in the management and treatment of kidney damage and associated hypertension.
Conclusion
The results of this study showed that 200 mg/kg of L-arginine normalized high blood pressure, reduced oxidative stress, improved the renal antioxidant defense system, offered protection against renal damage and nephrotoxicity, and improved nitric oxide bioavailability by serving as a precursor for nitric oxide production. Thus, L-arginine could serve as a potential alternative therapy against toxicant-induced oxidative stress, nephrotoxicity, and hypertension via an increase in the supply of endogenous nitric oxide.
"year": 2022,
"sha1": "ac204cf0042275ab04ead181cd5c9e3a158951a3",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-1841462/latest.pdf",
"oa_status": "GREEN",
"pdf_src": "Springer",
"pdf_hash": "77c8464017228263e958e72298c85a0b27ebdcb5",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Potential of montmorillonite modified by an organosulfur surfactant for reducing aflatoxin B1 toxicity and ruminal methanogenesis in vitro
Background Montmorillonite clay modified by organosulfur surfactants possesses a higher cation exchange capacity (CEC) and adsorption capacity than the unmodified form (UM); therefore, it may alleviate the adverse impact of aflatoxin B1 (AFB1) on ruminal fermentation and methanogenesis. Chemical and mechanical modifications were used to produce the organically modified nano montmorillonite (MNM). The UM was modified using sodium dodecyl sulfate (SDS) and ground to obtain the nanoscale particle size form. The dose-response effects of MNM supplementation to a basal diet contaminated or not with AFB1 (20 ppb) were evaluated in vitro using the gas production (GP) system. The following treatments were tested: control (basal diet without supplementation), UM diet [UM supplemented at 5000 mg/kg dry matter (DM)], and MNM diets at low (500 mg/kg DM) and high (1000 mg/kg DM) doses. Results Fourier Transform Infra-Red Spectroscopy analysis showed that bands of the OH group shifted from lower to higher frequencies in MNM, and an extra band in the lower frequency range appeared only in MNM compared to UM. Increasing the dose of MNM resulted in linear and quadratic decreasing effects (P < 0.05) on GP and pH values. Diets supplemented with the low dose of MNM, either with or without AFB1, resulted in lower (P = 0.015) methane (CH₄) production, ruminal pH (P = 0.002), and ammonia concentration (P = 0.002) compared to the control with AFB1. Neither the treatments nor the AFB1 addition affected organic matter or neutral detergent fiber degradability. Contamination with AFB1 reduced (P = 0.032) CH₄ production, while it increased (P < 0.05) ruminal pH and ammonia concentrations. Quadratic increases (P = 0.012) in total short-chain fatty acids and propionate with MNM supplementation were observed. Conclusion These results highlight the positive effects of MNM in reducing the adverse effects of AFB1-contaminated diets, with a recommended dose of 500 mg/kg DM under the conditions of this study.
especially that of aflatoxin B1 (AFB1), would be of higher importance. Changes in the geographic distribution of mycotoxigenic fungi would also be a result of global warming [1-3]. Thus, it seems that mitigation strategies for both GHG and mycotoxins are likely to be more relevant in the near future. The livestock sector contributes up to 18% of global GHG emissions from anthropogenic origins, of which enteric methane (CH₄) represents almost 37% of total livestock GHG [4]. Methane is constantly increasing, and it has 28 times more warming potential as a GHG than carbon dioxide (CO₂). Therefore, CH₄ mitigation can result in a fast cooling impact on the atmosphere [5] and may save 3 to 12% of the dietary digestible feed energy otherwise lost [6].
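To make the 28× figure concrete, a one-line CO₂-equivalent conversion can be sketched; the per-animal emission value below is an assumed order of magnitude, not a figure from this study:

```python
# Hypothetical illustration of the GWP-100 factor cited above (CH4 ~ 28x CO2).
GWP100_CH4 = 28  # kg CO2-eq per kg CH4

def co2_equivalent(ch4_kg: float) -> float:
    """CO2-equivalent (kg) of a given enteric CH4 emission (kg)."""
    return ch4_kg * GWP100_CH4

annual_ch4_per_cow_kg = 100.0   # assumed order of magnitude for one dairy cow
print(f"{annual_ch4_per_cow_kg} kg CH4/yr ~ "
      f"{co2_equivalent(annual_ch4_per_cow_kg):,.0f} kg CO2-eq/yr")
```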
AFB1 is among the most potent hepatocarcinogenic and immunosuppressive metabolites and is also considered the mycotoxin most resistant to ruminal microbial degradation [7], as only 10 to 50% of AFB1 can be degraded by ruminal microorganisms [8]. AFB1-contaminated diets usually produce symptoms including decreases in ruminal degradability and fermentation characteristics, feed intake, milk production, and growth performance [7-9]. In addition, when dairy animals consume diets contaminated with AFB1, aflatoxin M1 can be formed as a result of the metabolic process and excreted in milk [10]. Thus, a significant risk can be posed to human beings (especially children, the high milk consumers) through the consumption of contaminated milk. It seems that protecting the environment (by mitigating ruminal CH₄ emissions) and producing safe animal products are important challenges of animal production.
Clays are generally recognized as safe for both human and animal consumption [11,12]. Recently, modified clays have exhibited higher adsorption capacity and antimicrobial activity than their raw counterparts [13]. Montmorillonite is one of the smectite clays; it has a 2:1 layered structure in which each layer consists of two tetrahedral sheets of silicon dioxide sandwiching one octahedral sheet of aluminium oxide, and these layers are separated by an interlayer spacing containing various exchangeable cations [13]. Montmorillonite has high antimicrobial activity, aflatoxin adsorptive capacity, and buffering characteristics [14]. Compared to kaolinite clays, however, natural montmorillonite possesses lower methanogenesis-inhibiting activity [11] but higher global availability and lower cost; therefore, it is widely used as a feed supplement in ruminant diets. Moreover, montmorillonite has a unique character: a high suitability for organic modification compared to kaolinite clays. This happens via exchange of the interlayer cations with organic cations or anions, which increases the interlayer spacing between its layers, thereby enhancing its hydrophobic and adsorptive characteristics [14].
Ruminal modifiers containing sulfur or sulfate are capable of using H⁺ even at low concentrations to produce hydrogen sulfide (H₂S); this action consumes eight electrons and can thus provide an alternative electron acceptor to methanogenesis [15,16]. Sulfite, in addition to inhibiting CH₄ production by consuming H⁺, is toxic to methanogens [16]. Therefore, sulfonated montmorillonite (montmorillonite modified by organosulfur surfactants) may enhance the clay's anti-methanogenic activity. Sodium dodecyl sulfate (SDS; CH₃(CH₂)₁₁OSO₃Na) belongs to the class of anionic organosulfur surfactants and consists of the sodium salt of a 12-carbon organosulfate. It is widely used to modify montmorillonite through intercalation, replacing its exchangeable cations (e.g., Ca²⁺) to change its surface properties from hydrophilic to hydrophobic, thereby increasing its adsorption capacity [17]. In our previous work, montmorillonite modified by SDS in nano form possessed high CEC and reduced CH₄ production by 38% when supplemented at 500 mg/kg dry matter compared to the unsupplemented diet [12]. However, to what extent these modified clays are also effective in diets contaminated with AFB1 has, to the best of our knowledge, not yet been tested empirically. Due to the high CEC of montmorillonite, Ca²⁺ in its interlayer can be exchanged with Na⁺ ions located in the structure of SDS, thereby increasing the distance between lamellae [17] and consequently possibly enhancing its affinity for aflatoxin contaminants. Furthermore, mechanical nano grinding has been proven to enhance the clay's physicochemical properties, stability, and anti-methanogenic activity [18]. Therefore, it was hypothesized that organo-modified nano montmorillonite prepared with SDS (MNM) may alleviate the adverse effects of AFB1 while inhibiting CH₄ formation. This is the first investigation of the impacts of modified clays supplemented to a diet contaminated with AFB1 on ruminal fermentation and nutrient degradability.
The UM clay was organically modified according to the method of Bujdáková et al. [13], with some modifications. Sodium dodecyl sulfate (SDS; Sigma-Aldrich Co., Irvine, Scotland) was used as an anionic organosulfur surfactant to modify the UM clay [12]. Five g of UM clay was dispersed in 300 ml of distilled water for 24 h at room temperature using a magnetic stirrer, and then the desired amount of SDS (depending on CEC and molecular weight) was slowly added. The reaction mixture was stirred for 5 h at 80 °C. Once the cation exchange reaction had occurred, the resulting organoclay suspension was stirred for 12 h at room temperature, filtered, washed three times with distilled water to remove any SDS that had not interacted with the montmorillonite, and then dried at 90 °C. The dried clay material was ground using a high-energy planetary ball mill (Retsch PM, VERDER SCIENTIFIC, North Rhine-Westphalia, Haan, Germany) for 5 hours, with a reverse rotation speed of 300 rpm, a vial rotation speed of 600 rpm, and a zirconia balls-to-powder ratio of 9:1 (mass/mass). The clay particle size was measured by a nano-size analyzer (Malvern, Nano series, Worcestershire, UK) and recorded mean values of 98.2 ± 26.3 and 765 ± 20.9 nm for UM and MNM, respectively.
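The "desired amount of SDS" can be estimated from the clay's CEC and the surfactant's molecular weight, roughly as sketched below; the 1:1 CEC loading is our assumption, since the exact ratio is not stated:

```python
# Sketch of the SDS mass needed to match the clay's exchange capacity.
# A 1.0 x CEC loading is an assumption, not a value stated by the authors.
MW_SDS = 288.38          # g/mol, sodium dodecyl sulfate
CEC_UM = 77.5            # meq/100 g, measured for the unmodified clay
CLAY_MASS_G = 5.0        # g, amount of clay dispersed per batch

meq = CEC_UM / 100.0 * CLAY_MASS_G    # total exchangeable charge (meq)
mmol_sds = meq                        # SDS is monovalent: 1 meq = 1 mmol
sds_mass_g = mmol_sds * MW_SDS / 1000.0
print(f"SDS required for 1.0 x CEC loading: {sds_mass_g:.2f} g")   # ~1.12 g
```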
The physicochemical properties of the experimental clays were determined. The CEC of the experimental clays was analyzed according to the method of Rhoades [19] using solutions of 1 M sodium acetate and 0.1 M sodium chloride. The measured CEC values for UM and MNM were 77.5 and 117 meq/100 g, respectively. The surface charge of the experimental clays was determined by Zeta potential analysis (Malvern ZETASIZER Nano series, Worcestershire, UK) with a particle size detection range of 0.3 nm to 10 μm at 25.0 ± 1 °C, a measurement position of 2.0 mm, a count rate of 347.4 kcps, and an attenuator of 7.0; KCl (0.150 g/100 ml solid-to-solution ratio) was used as an indifferent electrolyte. The measured Zeta potential values were −23.1 and −24.0 mV for UM and MNM, respectively.
The nanoparticle shape of MNM was recorded using a scanning electron microscope (SEM; Jeol JSM-6360 LA, 3-1-2 Musashino, Akishima, Tokyo, Japan) after coating with gold to improve the imaging of the clay sample [18]. The functional groups of the experimental clays were determined by Fourier Transform Infra-Red Spectroscopy (FTIR) using an infrared spectrometer (Shimadzu-8400S, Osaka, Japan) as described by Soltan et al. [12]. The FTIR analysis was performed with a deuterated triglycine sulfate detector and a KBr beam splitter. The scanning rate was 45 scans/60 seconds, and samples were prepared at a mass ratio of 1 mg of clay to 99 mg of KBr.
The d-spacing of the UM and MNM clays was characterized by X-ray diffraction (XRD) using a MeasSrv (D2-208219/D2) powder diffractometer with CuKα radiation filtered with a graphite monochromator, running at 40 kV and 40 mA, as described by Elshazly and Hamdy [20]. The XRD had a fixed source-sample-detector geometry, and samples were measured in reflection mode. X-ray diffraction data were collected from 1° to 60° 2θ. The tilt angle between the source and the sample was 5.8°, and the horizontal slit system was set at 0.14 mm to confine the X-ray beam to pure Kα1 radiation.
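The d-spacing values extracted from such patterns follow from Bragg's law; a minimal sketch, assuming CuKα radiation (λ = 1.5406 Å) and an illustrative basal reflection angle:

```python
# d-spacing from an XRD reflection via Bragg's law: n*lambda = 2*d*sin(theta).
import math

WAVELENGTH_CUKA = 1.5406  # angstrom, Cu K-alpha (assumed from the source above)

def d_spacing(two_theta_deg: float, n: int = 1,
              wavelength: float = WAVELENGTH_CUKA) -> float:
    """Interplanar spacing d (angstrom) for a reflection at 2-theta (degrees)."""
    theta = math.radians(two_theta_deg / 2.0)
    return n * wavelength / (2.0 * math.sin(theta))

# Illustrative: a basal (001) smectite reflection near 2-theta = 7 degrees
print(f"d(001) ~ {d_spacing(7.0):.2f} A")  # ~12.6 A; expands after intercalation
```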
In vitro gas production (GP) assay
The experimental basal diet
A 500:500 forage-to-concentrate ratio was used in the experimental basal diet; the forages were the traditional Egyptian berseem clover hay (Trifolium alexandrinum) and wheat straw. This diet was prepared to fulfill the National Research Council's nutrient requirements for dairy sheep [21]. The primary ingredients and chemical composition of this diet are presented in Table 1. The basal diet was chemically analyzed following the Association of Official Analytical Chemists [22] for DM, organic matter (OM), crude protein (CP), and ether extract (EE). Fiber contents of neutral detergent fiber (NDF), acid detergent fiber (ADF), and acid detergent lignin (ADL) were sequentially analyzed according to Van Soest et al. [23] using a semi-automatic fiber analyzer (ANKOM, model A2001, Macedon, NY, USA) with filter bags (F57, ANKOM Technology Corporation, Macedon, NY, USA). The total aflatoxins (AFs) of the basal diet were extracted and purified in duplicate by VICAM immunoaffinity columns (VICAM Aflatest, Milford, MA, USA) as described by Hafez et al. [24] and quantified using the VICAM fluorometry method (VICAM Series 4EX Fluorometer, Milford, MA, USA) according to the manufacturer's instructions [25]. We recorded values of 13.8 and 14.4 ppb (an average of 14 ppb) AFs in the basal diet without any supplementation.
Treatments and GP protocol
Eight treatments were evaluated in vitro to test the dose-response effects of MNM supplemented to the basal diet contaminated or not with a final concentration of 20 ppb AFB1 (produced by Aspergillus flavus, 98% purity, Sigma Chemical Co., St. Louis, Missouri, USA) using the semi-automatic gas production system. The treatments were: control (basal diet without clay supplementation), unmodified montmorillonite (UM) supplemented at 5000 mg/kg DM, and MNM diets at low (500 mg/kg DM) and high (1000 mg/kg DM) doses. The final AFs concentration of the basal diet contaminated with AFB1 was 34 ppb. These doses are higher than the Egyptian maximum permissible concentrations of AFs and AFB1 (20 ppb and 10 ppb, respectively) for dairy animal feeds [26].
The treatments were evaluated using the semi-automatic gas production system as described by Bueno et al. [27] and adapted by Soltan et al. [28]. To prepare the ruminal inoculum for the in vitro assay, ruminal contents were collected separately from three fasted, slaughtered crossbred buffalo calves (450 ± 7 SE kg body weight) from the slaughterhouse of the farm station of the Faculty of Agriculture, Alexandria University, Egypt [29].
The ruminal contents were transferred immediately into pre-warmed thermo-containers (39 °C) under carbon dioxide (CO₂) flushing. The ruminal inoculum was prepared by blending the ruminal contents of the slaughtered calves in equal proportions (1:1:1) for 10 s; the blend was then squeezed through three layers of cheesecloth and kept in a water bath (39 °C) under continuous CO₂ flushing. Twelve in vitro incubation glass bottles (120 ml; Arab Pharmaceutical Glass Company, Suez, Egypt) were prepared for each treatment.
An amount of 500 mg of each experimental diet was weighed into an incubation bottle and incubated with 15 ml of the prepared ruminal inoculum and 30 ml of Menke's buffer solution, leaving a headspace of 75 ml [12,28]. The incubation bottles were tightly closed with 20 mm butyl rubber stoppers and sealed with aluminum seals. Incubation was carried out for 24 h at 39 °C in a forced-air incubator (FLAC STF-N 52 Lt, Lombardy, Italy). The same process was followed for blank bottles (containing buffer solution and ruminal inoculum) and internal standard bottles (containing buffer solution, ruminal inoculum, and Egyptian berseem clover hay) to obtain net GP values and to correct for sensitivity variations induced by the inocula, respectively.
Experimental parameters
Gas and CH₄ production
The gas pressure of the incubation bottles was recorded at 3, 6, 9, 12, and 24 h from the start of incubation using a pressure transducer and data logger (Pressure Press Data GN200, São Paulo, Brazil). The GP volumes (ml) were calculated as 4.97 × measured gas pressure (psi) + 0.171 (n = 500 samples; r² = 0.99) [28].
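Combining this calibration with the blank correction described earlier gives net GP, roughly as sketched below; all pressure readings are hypothetical:

```python
# Pressure-to-volume conversion from the reported calibration, plus the
# blank correction described above; pressure readings are hypothetical.

def gp_volume_ml(pressure_psi: float) -> float:
    """Gas volume (ml) from headspace pressure (psi); r^2 = 0.99 calibration."""
    return 4.97 * pressure_psi + 0.171

sample_psi = [3.1, 2.4, 1.9, 1.6, 4.8]   # readings at 3, 6, 9, 12, 24 h (assumed)
blank_psi  = [0.4, 0.3, 0.2, 0.2, 0.5]   # inoculum-only bottles (assumed)

# Cumulative GP = sum of the increments released at each reading
total_sample = sum(gp_volume_ml(p) for p in sample_psi)
total_blank  = sum(gp_volume_ml(p) for p in blank_psi)
net_gp = total_sample - total_blank      # net GP attributable to the substrate
print(f"net 24-h GP: {net_gp:.1f} ml per 0.5 g DM incubated")
```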
To determine CH₄ production, one ml of the headspace gas of each bottle was sampled at each gas pressure measurement time with a 3 ml syringe and accumulated in a 5 ml vacutainer tube (Vacutainer® Tubes, Jersey, USA). Concentrations of CH₄ were determined by gas chromatography (GC; Agilent Greenhouse Gas Analyzer with a 2 kW power supply; Agilent Technologies, Inc., Santa Clara, California, USA).
Ruminal fermentation parameters and protozoal count
At the end of the incubation period, all incubation bottles were set on ice to stop ruminal microbial activity. pH values were measured with a pH meter (CRISON GLP2, Barcelona, Spain). Ammonia concentrations were determined using a commercial kit (Biodiagnostic Inc., Giza, Egypt). The concentrations of short-chain fatty acids (SCFAs) were measured following Palmquist and Conrad [30] using gas chromatography (GC; Scion 456-GC/FID, Netherlands). The GC was equipped with a capillary Rt-2560 column (100 m × 0.25 mm ID, 0.20 μm df, Restek) with a constant flow of 1.2 ml/min helium as carrier gas. A SCFAs standard mix (Sigma-Aldrich Co., Irvine, Scotland) was used to obtain absolute quantification of each SCFA.
Ruminal nutrient degradability
The contents of the incubation bottles were treated with neutral detergent solution for 3 hours at 90 °C to determine nutrient degradability according to Blümmel et al. [32]. The non-degraded residues of the bottle contents were collected in pre-weighed crucibles, washed with hot distilled water and acetone, dried at 70 °C for 48 h, and then ashed at 600 °C for 2 h. The truly degraded organic matter (TDOM) and truly degraded neutral detergent fiber (TDNDF) were calculated as the differences between the incubated and non-degraded organic matter amounts and between the incubated and non-degraded NDF amounts, respectively [28].
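The by-difference calculation reduces to a one-line formula; a small sketch with assumed masses:

```python
# Degradability by difference, as described above; all masses are hypothetical.

def true_degradability(incubated_mg: float, residue_mg: float) -> float:
    """Truly degraded fraction (%) = (incubated - undegraded residue) / incubated."""
    return 100.0 * (incubated_mg - residue_mg) / incubated_mg

om_in, om_residue = 452.0, 187.0     # mg OM incubated vs. NDS-insoluble residue (assumed)
ndf_in, ndf_residue = 231.0, 124.0   # mg NDF incubated vs. residue (assumed)

print(f"TDOM:  {true_degradability(om_in, om_residue):.1f} %")
print(f"TDNDF: {true_degradability(ndf_in, ndf_residue):.1f} %")
```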
Statistical analysis
The in vitro experiment was completed in one run (1 day) for all the experimental diet treatments. The experimental unit was the incubation bottle; thus, 12 statistical repetitions were obtained for each treatment. All results were analyzed by one-way ANOVA using the MIXED procedure of SAS [33] (SAS Institute Inc., Cary, USA, version 9.0). The experimental parameters were processed as a completely randomized design with repeated measures using the following model: Yijkl = μ + Di + Tj + Iik + (D × T)ij + eijkl, where Yijkl is the observation; μ the overall mean; Di the fixed effect of experimental diet; Tj the fixed effect of AFB1 supplementation; Iik the random effect of the diets; (D × T)ij the interaction effect between diet and AFB1 supplementation; and eijkl the residual error. In addition, orthogonal contrast statements were designed to test the linear and quadratic responses of each dependent in vitro parameter to the increasing levels (0, 500, 1000 mg/kg DM) of MNM. Effects were declared significant at P ≤ 0.05 and trends at P ≤ 0.10.
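For the three equally spaced MNM levels, the orthogonal polynomial coefficients are (−1, 0, 1) for the linear and (1, −2, 1) for the quadratic contrast; a hedged sketch with simulated means (not the study's data):

```python
# Orthogonal polynomial contrasts for the three equally spaced MNM levels
# (0, 500, 1000 mg/kg DM); group means and SEM below are simulated.
import numpy as np
from scipy import stats

means = np.array([48.0, 36.0, 41.0])   # e.g., CH4 output at 0/500/1000 (simulated)
sem, n = 2.0, 12                       # per-group SEM and replicates (assumed)

linear = np.array([-1.0, 0.0, 1.0])    # coefficients for equally spaced levels
quadratic = np.array([1.0, -2.0, 1.0])

for name, c in [("linear", linear), ("quadratic", quadratic)]:
    estimate = float(c @ means)
    se = sem * np.sqrt(np.sum(c**2))   # SE of the contrast (equal SEMs assumed)
    t = estimate / se
    df = 3 * (n - 1)                   # within-group error degrees of freedom
    p = 2 * stats.t.sf(abs(t), df)
    print(f"{name:9s} contrast = {estimate:6.2f}, t = {t:5.2f}, P = {p:.4f}")
```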
Physicochemical properties of the experimental clays
The SEM micrographs of the modified clay are illustrated in Fig. 1. The SEM analysis showed a cracked and rough appearance of the MNM. The FTIR bands of the experimental clays are presented in Table 2 and Figs. 2 and 3. Bands of the OH group shifted from 3417 cm⁻¹ in UM to higher frequencies at 3435 cm⁻¹ in MNM; likewise, the medium bands shifted from a lower frequency at 1633 cm⁻¹ in UM to a higher frequency of 1640 cm⁻¹ in MNM. Only in the modified clay did a band appear at 778 cm⁻¹, attributed to Si-O stretching vibrations, while it was absent in the UM clay. In the lower frequency range (450 and 550 cm⁻¹), an extra band at 475 cm⁻¹ appeared only in MNM.
The XRD patterns of the UM and MNM clays are presented in Fig. 4. The XRD spectrograms showed that UM consists mostly of picramide or 2,4,6-trinitroaniline (48.2%) and bis(1,8-bis(dimethylamino)naphthalene) squarate (19%), while the MNM clay consists mostly of methyl 2-(N-diphenylmethylene amino)-3-phenyl-3-phenylamino propanoate (39.2%) and erythrityl tetranitrate (40.3%). Table 3 shows that MNM modified by SDS affected GP and CH₄ production differently compared to the unmodified clay. Treatment, MNM dose, and the MNM dose × AFB1 interaction affected (P < 0.01) the GP values, while no effects were detected for AFB1 supplementation. Among the experimental treatments, diets supplemented with the low dose of MNM, either with or without AFB1, resulted in lower (P = 0.015) CH₄ production compared to the control without AFB1. Similarly, MNM dose, AFB1, and the dose × AFB1 interaction significantly affected (P < 0.05) CH₄ production. Methane produced by diets supplemented with the low MNM dose declined (P = 0.030) compared to the control or the high dose. AFB1 also had a CH₄-decreasing (P = 0.032) effect compared to the non-supplemented diets. The contrast analysis showed that the decrease in CH₄ (related to TDOM) by MNM was dose dependent; MNM reduced CH₄ in a quadratic (P = 0.016) trend. Neither treatment nor AFB1 or MNM affected the TDOM and TDNDF.
Ruminal fermentation parameters and protozoal count
The effects of the feed additives on ruminal pH, ammonia, protozoa, and SCFAs are shown in Table 4. Treatment with UM, either with or without AFB1, resulted in a decrease (P = 0.02) in ruminal pH values compared to the control. Contamination with AFB1 increased (P = 0.02) the ruminal pH compared with non-contaminated diets. A linear reduction effect (P = 0.01) on ruminal pH was observed with MNM supplementation.
Treatment, MNM dose, and AFB1 supplementation affected (P < 0.05) ruminal ammonia concentration, while no effects were detected for the MNM dose × AFB1 interaction. Among the treatments, the control diet with AFB1 had the highest (P = 0.002) ammonia values, while the diet treated with MNM without AFB1 had the lowest. The low MNM dose presented lower (P < 0.01) ammonia concentrations than the high MNM dose; similarly, AFB1 increased (P < 0.01) ammonia concentration compared to diets without AFB1. A quadratic increasing effect (P < 0.01) of MNM supplementation on ammonia concentration was observed. No differences were detected in the protozoal counts with either MNM or AFB1 supplementation. Treatment and MNM dose affected (P < 0.05) all the individual SCFA molar proportions, while no effects were detected for AFB1 supplementation. Significant (P < 0.05) interaction effects (MNM dose × AFB1) were observed for the butyrate and branched-chain volatile fatty acid (BCVFA; e.g., isobutyrate and isovalerate) molar proportions. The low MNM dose resulted in higher (P < 0.05) total SCFA concentration and acetate and propionate proportions, and lower (P < 0.05) BCVFA molar proportions, than the high MNM dose. These results were in line with the contrast analysis, since increases in acetate (linear; P = 0.003) and in propionate and total SCFAs (quadratic; P < 0.05) were observed with MNM supplementation. Similarly high (P = 0.007) propionate molar proportions were observed in the low MNM diets with or without AFB1 and in the UM diet with AFB1.
Discussion
The measured physicochemical properties of the resultant MNM differed from those of the UM. The literature confirms that the distinctive feature of UM clay is the soft and tight layer edges of its flakes [34,35]. In the current study, SEM analysis showed that the edges of the MNM flakes became cracked and rough in appearance after modification by SDS. This happened mainly due to the separation between the clay layers caused by the settled locations of SDS between the octahedral alumina layers sandwiched between two silica tetrahedral layers [35]. Findings similar to ours were observed by Bayram et al. [35], who modified UM with SDS but without the nano grinding; thus, it can be suggested that the nano grinding did not affect the localization of SDS between the octahedral layers. The measured CEC of the UM was high (77.5 meq/100 g) and became higher after modification (117 meq/100 g for the MNM clay), confirming the exchange of ions between the UM clay and our experimental ionic surfactant [17,35]. It has been shown that Na⁺ ions located in the structure of SDS can be exchanged with Ca²⁺ ions in the UM interlayer, so that SDS anions can enter the interlayer space of Na⁺ and Ca²⁺ in UM as counter ions [17]. Bayram et al. [35] reported that the cracked appearance of montmorillonite appeared only in the presence of sulfur (2.83%) in the interlayer structure of the SDS-modified clay, while it was absent in the UM clay. Therefore, it can be suggested that our MNM successfully contained sulfonate groups (RSO₃⁻); in other words, the experimental montmorillonite (UM) became sulfonated montmorillonite (MNM). The FTIR analysis may partly confirm this suggestion, since a band located at around 475 cm⁻¹ corresponding to S-S stretching bonds [36] appeared only in MNM, while it was absent in UM. However, no extra bands corresponding to S=O bonds were detected in MNM, although the regions of the medium and high bands that correspond to sulfonate and hydroxyl H-O-H bond groups shifted from lower frequencies in UM to higher frequencies in MNM. In addition, a band at 778 cm⁻¹ attributed to Si-O stretching vibrations appeared, while it was absent in the UM clay. Thus, it can be suggested that the anions (SO₃⁻) of SDS might be adsorbed and contribute to the high negative charge observed for MNM [35]. The highly negative Zeta potential of MNM may partly confirm this suggestion. A similar FTIR profile was observed by Soltan et al. [12], which may confirm the efficiency of the modification process in producing MNM.
The results of the FTIR analysis were compatible with the higher CEC found for MNM than for UM, which indicated a larger number of metal hydrolysates and ions that can be intercalated into the MNM interlayer space, improving the clay activity compared with UM [12].
XRD analysis allows the determination of the phases and crystallographic properties of experimental materials [17]; it is based on the fact that each crystal phase diffracts the X-rays transmitted onto it in a specific characteristic pattern, determined by its unique atomic arrangement, as a kind of fingerprint [35]. The XRD results in the current study showed changes in the crystalline structure of the montmorillonite after modification by SDS, since different crystal patterns were observed for the two experimental clays. These results also confirmed that MNM is a distinct product with physicochemical properties different from those of the parent clay. Compounds containing S−S bonds have recently been shown to exhibit high biological activities, such as antimicrobial, antitumor, antifungal, and cytotoxic activities [36]; therefore, it would be expected that UM and MNM affect ruminal microbial fermentation profiles differently.
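For reference, the diffraction condition that produces these crystal "fingerprints" is Bragg's law (a standard textbook relation, not taken from the cited works):

$$ n\lambda = 2d\sin\theta $$

where $n$ is the diffraction order, $\lambda$ the X-ray wavelength, $d$ the interplanar spacing, and $\theta$ the incidence angle. Changes in the interlayer spacing $d$ after SDS intercalation therefore appear as shifted reflections in the MNM pattern relative to UM.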
Based on the results of the in vitro GP experiment, both experimental clays appear able to inhibit GP compared with the control, but MNM was generally more efficient than UM at reducing GP values, in a dose-dependent manner. UM is known to be able to capture CO2 (the major component of GP) through a reaction between CO2 molecules and the interlayer −OH functional groups of UM to form −HCO3, which can in turn react with additional interlayer cations [37]. The greater reduction in GP caused by MNM may indicate a higher absorptive efficiency for CO2 than that of UM. The increases in interlayer spacing, hydrophobic surface area, and CEC, in addition to the shifts in the frequency of the hydroxyl bonds found by FTIR analysis of MNM compared with UM, may enhance the absorptive efficiency of MNM for CO2 and thus reduce GP [12].
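Schematically, the CO2-capture process described above can be summarized as follows; this is a simplified representation of the mechanism proposed in [37], with $\equiv\!\mathrm{S}$ denoting a clay surface site:

$$ \mathrm{CO_2} + \equiv\!\mathrm{S{-}OH} \longrightarrow \equiv\!\mathrm{S{-}HCO_3} $$

with the bicarbonate formed subsequently reacting with additional interlayer cations.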
It is worth noting that the control diet supplemented with AFB1 produced lower GP than the control diet without AFB1; it seems that AFB1 has an inhibitory effect on some rumen microorganisms [9]. The literature is inconsistent on the impact of AFB1 on GP values. Khodabandehloo et al. [9] reported that AFB1 at concentrations up to 1.5 μg/ml did not affect GP values, while at higher doses (5 and 10 μg/ml) a significant reduction in GP was noted. Similarly, Mojtahedi et al. [38] reported that increasing the AFB1 addition level from 0 to 900 ng/ml decreased GP from 196 to 166 ml/g DM, respectively, while Jiang et al. [39] observed no effects on GP when AFB1 was used at similar concentrations. Most of these studies did not even report the AF concentration in the diets before AFB1 addition, which can affect the results obtained. Thus, these contradictions may mainly be due to differences in aflatoxin source, ruminal inocula, or animal diet, in addition to the experimental AFB1 doses.
The low MNM dose supplemented to diets with or without AFB1 showed promising inhibition of CH4 production without adverse effects on OM or fiber degradability compared with the control without AFB1. Hydrogen (H2) is the main metabolite of the microbial degradation of OM and NDF that methanogens use to reduce CO2 to CH4 [5]. Thus, the results suggest that the reductions in GP and CH4 by the low MNM dose were not the result of a general inhibition of microbial activities but might be specifically related to the methanogenesis process. This possibility has yet to be proved, but the shifts of the hydroxyl and sulfate absorption bands toward the high-frequency range in the FTIR analysis, in addition to the higher CEC of MNM compared with UM, would indicate a greater ability to bind hydrogen. Moreover, the presence of the S−S band in MNM may also interfere with methanogenesis. Compounds containing sulfur and sulfate can use hydrogen at low concentrations to produce hydrogen sulfide (H2S), thereby outcompeting methanogens for the hydrogen needed to produce CH4 [15,16]. Reduction of sulfite to H2S consumes eight electrons, so sulfite can act as an alternative electron acceptor, as nitrate does [15]. Sulfite, in addition to inhibiting CH4 production, is toxic to ruminal bacteria and methanogens [16]. Methane was expressed per unit of TDOM in the current study; although no significant differences in TDOM were detected among the experimental treatments, the superior CH4 reduction by the low MNM dose compared with the high dose might be due to the numerical decreases in TDOM caused by the high MNM dose. Recently, Soltan et al. [12] found that supplementing an SDS-modified MNM at 500 mg/kg DM to a basal diet containing less NDF (395 g/kg DM) reduced CH4 production by 38%, a greater reduction than obtained in the current study (22.3%). In both studies, the MNM had similar physicochemical properties except for the larger particle size of the current MNM (98.2 nm) compared with that used by Soltan et al. [12] (59.8 nm). It therefore seems that the extent of CH4 reduction was primarily determined by the diet forage type (fiber content) and the MNM particle size. Methane reduction by the low MNM dose was accompanied by increases in propionate concentration, with or without AFB1 contamination, compared with the control diet contaminated with AFB1. These results indicate that MNM may alleviate the adverse effects of AFB1 on ruminal fermentation, as the affinity of MNM for cationic matter might improve its AFB1 adsorption capacity. Enhanced propionate production might partly explain the reduction in CH4, as propionate also serves as an alternative hydrogen sink in the rumen [40].
The addition of AFB1 reduced CH4 production; no clear explanation can be provided here. Studies of AFB1 effects on CH4 emission are rare; however, most AFB1 studies concur that it has selective inhibitory effects against ruminal microorganisms, including cellulolytic bacteria [9], although AFB1 did not affect TDNDF in the current study. The total protozoal count was not affected by the clay treatments or AFB1; perhaps the 24-h incubation period of the in vitro assay was not adequate to reveal the effect of these additives on the protozoal count. It can be suggested that protozoa were not involved in the CH4 reductions achieved in this study: protozoa and methanogens have a synergistic relationship, with the former providing the metabolites (including H2) required for methanogenesis [41], so the unchanged protozoal counts argue against a protozoa-mediated effect.
The effects of the experimental clays and AFB1 additions on ruminal pH can be attributed to ammonia production. Decreases in pH with UM were accompanied by decreases in ammonia concentration, while the pH increases observed in AFB1-contaminated diets were accompanied by ammonia increases. Ammonia is the end product of ruminal protein fermentation, so a high ammonia concentration is an indicator of high degradation of dietary protein [41]. Although no differences in TDOM were detected for AFB1 or for MNM and UM, the reductions in ammonia concentration with the low MNM dose were accompanied by reductions in BCVFA (the end products of amino acid deamination in the rumen). The presence of the acidic functional groups (SO3−) of MNM, rather than the clay pore structure, might enhance the ammonia-capture capacity of MNM [12,16]. This partly explains the reduction in ammonia by the low dose of MNM; unexpectedly, however, the high supplementation level of MNM resulted in increases in ammonia and BCVFA concentrations. It seems that the ammonia-capture capacity of MNM is not the sole determinant of rumen ammonia reduction. The balance between ammonia and BCVFA release and uptake by specific rumen microbes may also affect their final concentrations [42]. It seems that the high dose of MNM adversely affected the growth of the specific microbes that consume ammonia and BCVFA; the decreases in GP, total SCFA, and acetate concentrations observed with the high MNM dose support this suggestion. Moreover, ruminal bacterial cells are known to have negatively charged sites, so the modified montmorillonite might bind them through the extracellular polysaccharides of the bacterial cell wall owing to the positively charged interlayer ions (e.g., Na+) in the clay [11]. These findings suggest that careful attention must be paid to selecting the proper MNM dose to obtain the greatest nutritional benefit from its addition.
The results of this study are consistent with those of Khodabandehloo et al. [9], who observed high ammonia concentrations in AFB1-containing cultures owing to decreased growth of cellulolytic bacteria, altered rumen proteolytic activity, or increased microbial lysis caused by AFB1. In the current study we measured only the total protozoal count; quantification of cellulolytic, proteolytic, and sulfate-reducing bacteria would be needed to fully explain our results. This is the major limitation of this study and should be addressed in further MNM in vitro and in vivo studies.
Conclusion
The nano montmorillonite modified by SDS (MNM) exhibited exceptional physicochemical properties compared with the unmodified clay (UM), such as a high cation exchange capacity, which might improve its adsorption capacity. Diets contaminated with AFB1 adversely affected the ruminal fermentation process, while MNM stimulated it. Supplementation of MNM at 500 mg/kg reduced CH4 and increased propionate concentration, with or without AFB1 contamination, compared with the control diet contaminated with AFB1. The value of MNM as a ruminal fermentation modifier was dose-dependent, since the high MNM supplementation dose adversely affected the ruminal fermentation profile. Modification of clays appears to be a promising new approach to feed additives that alleviate the adverse effects of mycotoxins while reducing GHG emissions from the livestock sector, but such in vitro experiments did not account for the absorption of AFB1 and/or its metabolites into the bloodstream after ingestion. Therefore, the modified clays need to be assessed in vivo (with various diet types) before recommendations and practical applications can be made.
"year": 2022,
"sha1": "ad3f501376da492d49b8a970af06468f2c4f53db",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "83f86823b39f4fd788fd7074c07d61871c1a4659",
"s2fieldsofstudy": [
"Environmental Science",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Advances in multi-omics study of biomarkers of glycolipid metabolism disorder
Introduction
Metabolic diseases, such as type 2 diabetes mellitus (T2DM), hyperlipidemia, and obesity, are increasingly common owing to lifestyle changes and the rapid development of industrialization. As an important feature of metabolic diseases, glycolipid metabolism disorder silently threatens human health. It is estimated that by 2040, 642 million adults worldwide will have diabetes, the vast majority of them with T2DM [1]. Overweight is a risk factor for diabetes, dyslipidemia, and nonalcoholic fatty liver disease. According to an ecological study, about a third of the global population has been obese or overweight since 1980 [2]. The epidemiological findings for dyslipidemia are equally discouraging. Several researchers have found that total cholesterol decreased the most in high-income western regions and in central and eastern Europe, while it increased the most in east and southeast Asia. In particular, the population with high cholesterol has increased significantly in China, which now has one of the highest cholesterol levels in the world [3].
The harm of glycolipid metabolism disorder lies in the damage to multiple organs caused by long-term abnormal blood glucose and lipid levels, leading to a gradual decline in their function. Meanwhile, microvascular and macrovascular injuries are regarded as important causes of disability and mortality in patients. As reported, patients with concurrent T2DM, hypertension, and dyslipidemia are six times more likely to have cardiovascular disease than those with T2DM alone [4]. Currently, a single intervention against hyperglycemia or hyperlipidemia cannot effectively regulate multiple metabolic disorders, resulting in suboptimal lipid control and poor glycemic control. Glycolipid metabolism disorder is more complicated than single-factor metabolic abnormalities because of multi-factorial interactions [5]. Given the complexity of the metabolic regulatory network, simultaneous regulation of different metabolic pathways can be more potent than regulation of a single pathway in the treatment of glycolipid metabolism disorder [6]. Combination therapy has been demonstrated to be superior to monotherapy for metabolic abnormalities [7,8].
Owing to the complexity of the pathogenesis of glycolipid metabolism disorder, its underlying molecular mechanisms remain unknown. A better understanding of the pathophysiology of glycolipid metabolism disorder using multi-omics will improve preventive, diagnostic, therapeutic, and reparative strategies. In the pages that follow, we consider the progress made in genomics, transcriptomics, proteomics, metabolomics, and gut microbiomics, and discuss how data from these techniques can be brought together through integromics and systems biology. Multi-omics research sheds new light on the potential pathogenic targets, pathophysiological mechanisms, and biomarkers of therapeutic intervention in the occurrence and development of glycolipid metabolism disorder, and offers a fresh perspective on controlling its progression.
Pathophysiology of glycolipid metabolism disorder
Glycolipid metabolism disorder is a complex, systemic disease caused by dysregulation of multiple metabolic organs. Its pathogenesis involves interactions among core pathological mechanisms such as neuroendocrine axis dysfunction, insulin resistance, oxidative stress, chronic inflammatory response, and gut microbiota dysbiosis. These processes are implicated in the occurrence and progression of the disease.
The human body controls the release of neurotransmitters, hormones, and cytokines through the nervous and endocrine systems to maintain glycolipid metabolism homeostasis. The hypothalamus regulates energy metabolism by detecting signals from peripheral tissues [9]. Tanycytes and 5-HT neurons are key cells, and leptin is an important signaling molecule, regulating glycolipid metabolism along the neuroendocrine axis [10-13]. Insulin resistance (IR) is characterized by declines in glucose uptake and insulin utilization efficiency and is a common feature of glycolipid metabolism disorder; it occurs primarily in muscle, fat, and liver tissues [14]. The mechanism underlying the development of IR is associated with impaired insulin signal transduction, including the accumulation of specific lipid mediators, abnormal mitochondrial function, and increased activity of the stress-activated protein c-Jun N-terminal kinase (JNK) and inflammatory pathways [15,16]. Oxidative stress is a central factor in the initiation and progression of glycolipid metabolism disorder and is characterized by augmented generation or diminished elimination of reactive oxygen species (ROS) and reactive nitrogen species (RNS) resulting from an imbalance between pro-oxidant and antioxidant levels. Glucotoxicity and lipotoxicity promote IR through oxidative stress by damaging pancreatic islet cells, adipocytes, and their signaling pathways [17]. Chronic low-grade inflammation is a critical feature of glycolipid metabolism disorder and is characterized by massive infiltration of immunocytes, including macrophages, natural killer (NK) cells, mast cells, and others [18,19]. In chronic metabolic inflammation, inflammatory factors and cells regulate glycolipid metabolism in the liver, fat, muscle, pancreas, and other tissues and organs through an extensively interwoven immune network, thereby inducing IR and glycolipid metabolism disorder. Several recent studies have shown that the gut microbiota, or 'second genome', is implicated in the occurrence and development of multiple metabolic diseases. Gut microbiota can modulate nutrient metabolism upon dietary intake and produce many metabolites that interact with the host in a variety of ways, including regulating glucose and lipid metabolism pathways, influencing the differentiation and function of immune cells, and affecting insulin sensitivity [20]. In-depth research on the interactions between the gut microbiota and the host has revealed that the former execute vital functions in metabolic regulation via the gut-liver-brain axis [21].
Biomarkers of glycolipid metabolism disorder in multi-omics research
Biomarkers can be powerful tools in the management of diseases. For glycolipid metabolism disorder, plasma glucose (measured after fasting or a glucose tolerance test), glycated hemoglobin, and plasma lipids are regarded as clinical biomarkers for diagnosis and screening. The future of medicine lies in individualized therapies, prospective tracking of individual health indicators, and a critical focus on preventive measures [22]. On this basis, individualized and multidimensional biomarkers are urgently needed to support the prediction, diagnosis, and prognosis of glycolipid metabolism disorder and to provide more valuable reference information for drug development, clinical diagnosis, and personalized treatment. Recently, with the development of omics technologies and bioinformatics, multi-omics research on glycolipid metabolism disorder has gradually increased, which is conducive to understanding the molecular mechanisms of disease occurrence and evaluating biomarkers, thereby promoting precision medicine for glycolipid metabolism disorder (Supplementary Table 1).
Genomics
Genomics is a subdiscipline of genetics that characterizes and quantifies an organism's complete genome and studies the relationship between genes and their effects on the organism [23].
The etiology of glycolipid metabolism disorder is known to have a considerable genetic component. Over the past two decades, linkage analyses, candidate gene approaches, and large-scale genome-wide association studies (GWAS) have successfully identified more than 100 genes that confer susceptibility to glycolipid metabolism disorder. Genomics explains the key genetic variants associated with the risk of glycolipid metabolism disorder, provides guidance for future studies, and helps formulate efficacious preventive and therapeutic measures. Pharmacogenomics optimizes treatment schedules, improves the effectiveness of personalized therapy, and minimizes potential side effects during clinical treatment.
Genes associated with susceptibility to glycolipid metabolism disorder
Genes determine individual susceptibility to diseases. Most genes determining susceptibility to glycolipid metabolism disorder regulate insulin secretion and sensitivity and pancreatic β-cell function. These include TCF7L2 (transcription factor 7-like 2), PPARs (peroxisome proliferator-activated receptors), KCNJ11 (potassium inwardly rectifying channel subfamily J member 11), SLC30A8 (solute carrier family 30 member 8), and FTO (fat mass and obesity-associated) [24] (Fig. 1a). TCF7L2 is strongly associated with T2DM [25]; it is a major transcription factor (TF) in the canonical Wnt signaling pathway, regulates intrapancreatic glucose homeostasis, and is essential for maintaining glucose-stimulated insulin secretion (GSIS) and pancreatic β-cell survival [26]. In 2006, Grant et al. reported that variation in TCF7L2 expression was closely associated with T2DM risk in a case-control study of Caucasians in Iceland, Denmark, and the United States [27]. The association of T2DM with single-nucleotide polymorphisms (SNPs) in TCF7L2 has attracted global attention and was confirmed in ethnically diverse populations [28-31].
PPARγ (peroxisome proliferator-activated receptor gamma) is a member of the nuclear hormone receptor superfamily of TFs and was the first screened candidate gene associated with glycolipid metabolism disorder [32]. PPAR activation regulates gene networks controlling various homeostatic processes involving inflammation, adipogenesis, lipid and glucose metabolism, and insulin resistance [33]. SLC30A8 is correlated with pancreatic function and is predominantly expressed in that organ. It encodes the endocrine pancreas-restricted zinc transporter ZnT8. Abnormal SLC30A8 and ZnT8 function affects insulin biosynthesis, storage, and secretion and hinders normal glucose metabolism. In 2007, SLC30A8 was identified as a novel T2DM susceptibility gene [34], and subsequent studies verified the association between SLC30A8 SNPs and T2DM in different racial and ethnic groups [35-37]. FTO was the first candidate obesity gene to be recognized in the general population. It is highly expressed in hypothalamic nuclei and homeostatically controls the energy balance. Variations in FTO are associated with the risks of obesity and T2DM [38-41]. Kir6.2 and SUR1 (sulfonylurea receptor 1) form the pancreatic β-cell KATP channel. SUR1, encoded by ABCC8 (ATP binding cassette subfamily C member 8), is the site of sulfonylurea binding, while Kir6.2, encoded by KCNJ11, is the ion channel [42,43]. Variations in KCNJ11 and SUR1 impede KATP channel function, impair insulin secretion, and increase susceptibility to T2DM [44,45].
As deep sequencing technologies continued to evolve, the focus of glycolipid metabolism disorder research gradually shifted from common genetic variants (the GWAS era) to rare genetic variants (the post-GWAS era). To date, the exact number of genes determining glycolipid metabolism disorder susceptibility and the precise mechanisms of their interactions have not been established. However, the results of recent GWAS have been encouraging. Xue et al. conducted a meta-analysis of GWAS with 16 million genetic variants and identified 139 common and 4 rare variants associated with T2DM, 42 of which (39 common and 3 rare variants) are independent of known variants [46]. Fuchsberger et al. reported that 126 variants in four genes (TCF7L2, ADCY5, CCND2, EML4) were significantly associated with the risk of T2DM [47]. In a large-scale genome-wide association study, Spracklen et al. identified new genetic links to T2DM in 433,540 East Asians, finding 301 distinct association signals at 183 loci, 61 of which were newly implicated in predisposition to T2DM [48]. Locke et al. conducted a genome-wide association study and Metabochip meta-analysis of body mass index (BMI) in up to 339,224 individuals, identifying 97 BMI-associated loci, 56 of which were novel [49].
Fig. 1. Genetic variants associated with glycolipid metabolic disorders. (a) Overview of canonical signaling mechanisms involved in β-cell glucose sensing and responses to secretory potentiators or inhibitors: TCF7L2 acts in the canonical Wnt pathway; SLC30A8 encodes the zinc transporter ZnT8, critical for insulin storage and secretion; KCNJ11 and SUR1 together encode the KATP channel, which senses blood glucose and controls insulin release; PPARs act broadly in glycolipid metabolism. (b) Pharmacogenomics and its targets in glycolipid metabolic disorders (metformin transporters OCT/MATE1, sulfonylurea targets SUR1/KCNJ11/ABCC8, GLP-1R, DPP-4, SGLT2, and statin-related genes such as CYP450, MDR1, and HMG-CoA).
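As a schematic illustration of the per-variant test underlying such GWAS, the sketch below runs a logistic regression of case/control status on allele dosage. The data are simulated; the effect size, allele frequency, and sample size are assumptions, not values from [46-49].

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
dosage = rng.binomial(2, 0.3, size=n)        # copies of the risk allele (0/1/2)
logit_p = -1.0 + 0.25 * dosage               # assumed modest per-allele effect
status = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))  # 1 = T2DM case

# Logistic regression of disease status on allele dosage.
X = sm.add_constant(dosage.astype(float))
fit = sm.Logit(status, X).fit(disp=False)
print(f"per-allele OR = {np.exp(fit.params[1]):.2f}, P = {fit.pvalues[1]:.1e}")
# In a real GWAS this test is repeated for millions of variants (with covariates
# such as ancestry principal components) and judged against a genome-wide
# significance threshold, conventionally P < 5e-8.
```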
It is well known that glycolipid metabolism disorders are polygenic, but even when combined with exome and whole-genome sequencing data, genetic variants might explain only about 10 % of the phenotypic variability in patients with glycolipid metabolism disorder [50]. Environmental factors such as lifestyle modification, nutritional imbalance, and behaviour change might be more critical in the development of glycolipid metabolism disorder. Aging and genetic variation are both important contributors to the epigenetic variability seen in individuals affected by obesity or T2DM. Furthermore, the in utero environment and external factors such as physical activity and nutrient availability affect the epigenome [51]. Consequently, further exploration of epigenetic factors and their mechanisms will bring new ideas and opportunities for the prevention and treatment of glycolipid metabolism disorder.
Pharmacogenomics of glycolipid metabolism disorder
Metformin is prescribed as first-line therapy for T2DM, as it is low-cost, highly effective, and unlikely to induce hypoglycemia or other adverse reactions. When metformin monotherapy fails to provide satisfactory efficacy or provokes adverse effects, other hypoglycemic agents can be combined with metformin or substituted for it altogether. The latest American Diabetes Association/European Association for the Study of Diabetes (ADA/EASD) consensus reports indicate that sulfonylureas, thiazolidinediones, dipeptidyl peptidase 4 (DPP-4) and sodium-glucose cotransporter 2 (SGLT2) inhibitors, and glucagon-like peptide-1 (GLP-1) receptor agonists are reasonable second-line treatment options [52]. Metformin is a hydrophilic organic cationic drug and depends on organic cation transporters (OCTs) to enter hepatocytes and renal epithelial cells, from which it is excreted through bile and urine, respectively, via multidrug and toxin extrusion protein 1 (MATE1). SLC47A1 (solute carrier family 47 member 1) encodes MATE1 and plays a key role in metformin transport and excretion [53]. Most OCT polymorphisms, except those of MATE1, affect metformin pharmacokinetics and pharmacodynamics [54]. Sulfonylureas are insulin secretagogues and comprise an important class of oral hypoglycemic agents. They stimulate pancreatic β-cells to release insulin by binding high-affinity plasma membrane receptors conjugated with KATP channels. The latter are regulated by SUR1, KCNJ11, and ABCC8, and polymorphisms in these genes are associated with sulfonylurea efficacy [55-57]. GLP-1 receptor agonists exert their hypoglycemic effect by binding the glucagon-like peptide 1 receptor (GLP-1R); GLP-1R polymorphisms are correlated with the efficacy of GLP-1 receptor agonists [58,59]. DPP-4 inhibitors upregulate GLP-1 by retarding its DPP-4-mediated inactivation, activate intestinal GLP-1R, promote insulin release, and reduce glycemia [60]. Hence, DPP-4 inhibitor efficacy may be influenced by GLP-1R and DPP-4 polymorphisms [61,62]. SGLT2 inhibitors constitute a novel class of antidiabetic drugs that lower plasma glucose by inhibiting renal glucose reabsorption and promoting urinary glucose excretion. Genes related to the clinical efficacy of SGLT2 inhibitors include MGT (magnesium transporter), SLC5A2 (solute carrier family 5 member 2), PNPLA3 (patatin-like phospholipase domain-containing protein 3), and others [63-65].
Polymorphisms associated with statins have become the focus of pharmacogenomics studies on lipid-lowering drugs. Candidate genes associated with differential statin efficacy fall into two main categories. Members of the first class, such as CYP450 (cytochrome P450) and MDR1 (multidrug resistance gene 1), regulate pharmacokinetics and encode drug-metabolizing enzymes and drug transporters [66]. Members of the second class, such as the apolipoproteins and HMG-CoA (hydroxymethylglutaryl-coenzyme A) reductase, regulate pharmacodynamics and encode drug targets and lipid metabolism factors [67-69] (Fig. 1b).
Transcriptomics
Transcriptomics is a discipline that studies gene transcription and transcriptional regulation in cells at the global level, which contributes to understanding the gene expression profiles of diseases and reveals the metabolic networks and regulatory mechanisms of the life course at the transcriptional level. As is well known, organisms contain non-protein-coding genes whose transcription products are known as non-coding RNAs (ncRNAs), mainly including long non-coding RNAs (lncRNAs), micro RNAs (miRNAs), and circular RNAs (circRNAs). Non-coding RNAs play a substantial regulatory role in the occurrence and development of glycolipid metabolism disorder and could be useful as early molecular markers for its diagnosis (Fig. 2).
LncRNAs
LncRNAs account for more than 80 % of all non-coding RNAs, and their transcripts are widely involved in every aspect of cellular biological function. They regulate related protein-coding genes in numerous ways, for example by complementing DNA bases to form stable triple-helix complexes, thus impairing the expression of target genes [70]. Altered expression of lncRNAs has been associated with poor glycemic control, insulin resistance, accelerated cellular senescence, and inflammation in diabetes patients [71]. Morán et al comprehensively reported the lncRNA expression profiles of human pancreatic β-cells, uncovered a high-confidence set of 1128 human islet-cell genes, and showed that they are an integral component of the β-cell differentiation and maturation program [72]. Several researchers have found that downregulation of the lncRNA TUG1 (taurine upregulated gene 1) affected apoptosis and insulin secretion in pancreatic β-cells in vitro and in vivo, contributing to the occurrence of diabetes [73]. More recent research has revealed that downregulation of the lncRNA GAS5 (growth arrest-specific transcript 5) is significantly associated with the occurrence and development of diabetes; its downregulation affects the cell cycle and insulin secretion in pancreatic β-cells [74,75]. Alvarez-Dominguez et al established the transcriptome of mouse adipose tissues by RNA sequencing, identified 1500 lncRNAs, and found that lnc-BATE1 (brown adipose tissue enriched long non-coding RNA 1) is a key lncRNA regulating brown fat, providing a new target for the treatment of obesity [76]. A recent study determined that a new lncRNA, suppressor of hepatic gluconeogenesis and lipogenesis (lncRNA SHGL), is a novel insulin-independent suppressor of hepatic gluconeogenesis and lipogenesis [77].
MiRNAs
MiRNAs are small non-coding RNAs composed of 19-22 nucleotides that modulate gene expression by binding to the 3′ untranslated region of specific messenger RNAs (mRNAs) [78]. Impaired insulin secretion from pancreatic β-cells is central to the pathogenesis of T2DM, and miRNAs are fundamental regulatory factors in this process [79]. The most abundant miRNA in the islet, miR-375, was also the first miRNA detected in pancreatic islets and, as a regulator of insulin secretion, may constitute a novel pharmacological target for the treatment of diabetes [80]. MiR-7 and the miR-200 family are other examples of islet-abundant miRNAs: β-cell-specific overexpression of miR-7a in mice results in reduced insulin secretion [81], and overexpression of miR-200 is sufficient to induce β-cell apoptosis and lethal T2DM in mice [82]. High expression of miR-29 in liver, fat, and muscle tissue may trigger insulin resistance [83]. A growing body of evidence suggests that obesity-related and adipose tissue-derived circulating miRNAs are promising novel therapeutic targets for obesity and related diseases [84]. The research group of Prof. Hu revealed that miRNAs of the miR-17∼92 family inhibit the inflammatory response of macrophages by maintaining the expression of IL-10, thus maintaining the homeostasis of adipose tissue macrophages and inhibiting obesity [85]. Several studies have shown that miR-802 is increased in the pancreatic islets of obese mouse models and that inducible transgenic overexpression of miR-802 in mice impairs insulin transcription and secretion [86].
CircRNAs
Circular RNAs are covalently closed transcripts mostly generated from precursor mRNA by a non-canonical splicing event called back-splicing. They are highly stable, evolutionarily conserved, and widely distributed in eukaryotes [87]. Mounting evidence suggests that misregulation of circRNAs is among the earliest alterations in various metabolic disorders, including obesity and diabetes mellitus (DM). So far, the best-known endogenous circRNA related to diabetes is CDR1as (also termed hsa_circ_0001946 or ciRS-7), which, as a powerful miR-7 inhibitor, can promote islet β-cell proliferation and insulin secretion in diabetes [88,89]. Zhao et al found that hsa_circ_0054633 was differentially expressed in the peripheral blood of patients with T2DM compared with healthy controls [90], and another study revealed that hsa_circ_0054633 can regulate high glucose-induced human vascular endothelial cell dysfunction [91]. Hence, hsa_circ_0054633 may be involved in the pathogenesis of diabetes and could serve as a biomarker for the diagnosis of T2DM. The past two years have witnessed a significant increase in the number of studies determining the function of circRNAs in human adipogenesis and obesity [92]. Researchers who analyzed the transcriptomes of human and mouse visceral and subcutaneous fat by RNA sequencing found that silencing of circArhgap5-2 in vivo resulted in inhibition of lipid droplet accumulation and downregulation of adipogenic markers [93]; however, the mechanism by which circArhgap5-2 modulates adipogenesis remains to be determined. Zhu's experiments suggest that knockdown of hsa_circH19 promotes adipogenic differentiation of hADSCs (human adipose-derived stem cells) via targeting of PTBP1 (polypyrimidine tract-binding protein 1), and a high level of hsa_circH19 is an independent risk factor for the metabolic syndrome [94].
Proteomics
The essence of proteomics is to study proteins on a large scale, including protein expression, post-translational modifications, and protein-protein interactions [95]. The study of proteins, as the final products of genetic transcription and post-transcriptional modification, has also played a pivotal role in the understanding of disease. At present, proteomics research on glycolipid metabolism disorder mainly uses two-dimensional gel electrophoresis (2-DGE), high-performance liquid chromatography (HPLC), time-of-flight mass spectrometry (TOF-MS), and other methods to explore biomarkers for diagnosis and the pathways involved in disease pathogenesis (Fig. 3).
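The downstream step these workflows share—flagging differentially abundant proteins between patients and controls with multiple-testing control—can be sketched as follows. The log2 intensities are simulated, and all counts and thresholds are illustrative assumptions.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
n_proteins, n_per_group = 500, 20
ctrl = rng.normal(0.0, 1.0, (n_proteins, n_per_group))   # log2 intensities
case = rng.normal(0.0, 1.0, (n_proteins, n_per_group))
case[:25] += 1.0                                         # 25 truly shifted proteins

t_stat, p = stats.ttest_ind(case, ctrl, axis=1)          # per-protein t-test
reject, q, *_ = multipletests(p, alpha=0.05, method="fdr_bh")
log2_fc = case.mean(axis=1) - ctrl.mean(axis=1)
print(f"{reject.sum()} candidate biomarkers at FDR < 0.05 "
      f"(max |log2 FC| = {np.abs(log2_fc).max():.2f})")
# Candidates would then be validated by ELISA or western blotting (Fig. 3).
```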
Shono et al. used proteomics methods to reveal a possible pathogenesis of T2DM, specifically alterations of protein secondary-structure domains and post-translational modifications after the body receives multiple pathogenic signals [96]. C-reactive protein and α2-macroglobulin are clinically sensitive biomarkers of T2DM. Riaz et al. compared serum differential protein levels in diabetic patients and healthy populations: levels of C-reactive protein (CRP) were increased by 872 % in diabetic patients compared with controls, supporting the view that the occurrence of diabetes is related to inflammation [97]. Takada et al. found by mass spectrometry analysis that serum monomeric α2-macroglobulin is highly expressed in many diabetic subjects and might become an important biomarker for the diagnosis of T2DM [98]. A Swedish study identified cathepsin D and confirmed six proteins (leptin, renin, interleukin-1 receptor antagonist [IL-1ra], hepatocyte growth factor, fatty acid-binding protein 4, and tissue plasminogen activator [t-PA]) as IR biomarkers [99]. Huth et al. identified proteins related to the prediction and early diagnosis of T2DM [100]: mannose-binding lectin-associated serine protease 1 (MASP) levels were positively associated with both incident type 2 diabetes and prediabetes; adiponectin was inversely associated with incident type 2 diabetes; and MASP, adiponectin, apolipoprotein A-IV, apolipoprotein C-II, and C-reactive protein were associated with individual continuous outcomes. A recent mass spectrometry (MS) proteomics analysis, paired with two-dimensional gel electrophoresis, showed higher levels of alpha-1-antichymotrypsin, alpha-1-antitrypsin, apolipoprotein A-I, haptoglobin, retinol-binding protein 4, transthyretin, and zinc-alpha-2-glycoprotein in individuals with abdominal adiposity or insulin resistance compared with normal individuals [101].
Benabdelkamel et al. compared the protein expression of mature adipocytes within subcutaneous adipose tissue and identified a total of 23 proteins specifically expressed in obese subjects compared with healthy individuals; these are mainly involved in glucose and lipid metabolism, energy regulation, cytoskeletal structure, and redox reactions [102]. Bae et al. proposed that obese individuals harbor a large number of differential proteins in insulin-sensitive tissues such as liver, skeletal muscle, and adipose tissue; of these, leukocyte common antigen-related phosphatase, PTP-α (protein tyrosine phosphatase α), and PTP-1B (protein tyrosine phosphatase 1B) are highly expressed. Follow-up studies demonstrated that these enzymes participate in insulin signaling [103]. It has also been established that cofilin-1 (CFL1) inhibits brown adipocyte differentiation: overexpression of CFL1 inhibited brown fat deposition and repressed the brown-fat marker genes UCP1, PRDM16, PGC-1α, and PPARγ [104].
Metabolomics
Metabolomics often utilizes approaches based on nuclear magnetic resonance (NMR) and/or various MS techniques to analyze the metabolites in biological samples, including low-molecular-weight compounds such as amino acids, organic acids, lipids, nucleotides, and sugars [105]. A review of recent research shows that many studies have found correlations between glycolipid metabolism disorder and metabolomic characteristics [106]. Thus, metabolomics can be used to describe abnormal metabolism during the progression of glycolipid metabolism disorder, provide insight into disease mechanisms, and identify disease-associated biomarkers to assess disease severity and the metabolic pathways involved (Fig. 4).
Amino acid metabolomics of glycolipid metabolism disorder
In recent years, numerous studies have found that branched-chain amino acids (BCAAs), including valine, leucine, and isoleucine, are potential biomarkers of glycolipid metabolism disorder. Newgard et al confirmed in a cross-sectional metabolomics analysis of obese and lean individuals that BCAAs in particular were higher in the obese individuals [107]. Guasch-Ferré et al meta-analyzed results from eight prospective studies that reported risk estimates for metabolites and T2DM, including 8,000 individuals of whom 1,940 had T2DM [108]; the results showed that BCAAs were positively associated with the risk of T2DM. Growing experimental evidence has suggested potential mechanisms by which upregulated BCAAs cause glycolipid metabolism disorder: BCAAs and their corresponding branched-chain keto acids (BCKAs) can activate mTOR signaling, induce oxidative stress, and cause mitochondrial dysfunction, potentially contributing to the development of further insulin resistance [109,110]. Analyses in a smaller cohort utilizing Mendelian randomization suggested that higher BCAA levels do not have a causal effect on insulin resistance; rather, increased insulin resistance drives higher circulating fasting BCAA levels [111]. These findings point to elevated BCAAs as a downstream effect of adiposity and insulin resistance. Several studies have also found positive associations of aromatic amino acids, including tyrosine and phenylalanine, with future development of T2DM [112], while glycine and glutamine were negatively correlated with the development of T2DM [113,114]. Higher levels of 2-aminoadipic acid (2-AAA), a lysine degradation product, were also found to be associated with increased risk of incident diabetes mellitus [115]. 2-AAA is associated with adipogenesis and insulin resistance and can serve as a diabetes risk marker.
Fig. 3. General scheme of current proteomics for clinical metabolic research: human body fluids (e.g., serum, urine, and blood) are properly stored and prepared with optimized protocols; proteins are then purified and/or isolated to obtain digested peptides; an adequate 2-DE (two-dimensional gel electrophoresis), HPLC (high-performance liquid chromatography), or TOF-MS (time-of-flight mass spectrometry) based strategy is applied; and once potential biomarkers are obtained, validation assays (e.g., ELISA and/or western blotting) with specific antibodies identify real protein biomarkers.
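The Mendelian randomization result cited above [111] rests on instrument-based estimators; the simplest is the Wald ratio, sketched below with purely illustrative numbers (not estimates from that study).

```python
# Assumed summary statistics for one genetic instrument.
beta_gx, se_gx = 0.12, 0.02    # SNP effect on exposure (e.g., fasting BCAAs)
beta_gy, se_gy = 0.015, 0.010  # SNP effect on outcome (e.g., insulin resistance)

wald = beta_gy / beta_gx                 # causal effect estimate
se_wald = abs(se_gy / beta_gx)           # first-order delta-method approximation
ci = (wald - 1.96 * se_wald, wald + 1.96 * se_wald)
print(f"Wald ratio = {wald:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
# A confidence interval spanning zero is consistent with no causal effect of
# BCAAs on insulin resistance, the direction of inference reported in [111].
```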
Lipid metabolomics of glycolipid metabolism disorder
Blood levels of free fatty acids (FFAs) rise slowly with increasing body mass, which is considered an important feature of obesity-related metabolic diseases. Elevated clinical measures of lipids, specifically bulk triglycerides, are considered a traditional risk factor for T2DM. The intracellular accumulation of fatty acid (FA) oxidation products such as diacylglycerols, triacylglycerols, and ceramides is linked with insulin resistance [116]. A large population-based study showed that fasting serum levels of glycerol, FFAs, monounsaturated FAs, saturated FAs, and n-7 and n-9 FAs are biomarkers of an increased risk of developing hyperglycemia and T2DM [117]. Lu et al. conducted a metabolomics analysis of serum in a Chinese population and found that several free fatty acids (palmitic, stearic, oleic, and linoleic acids) and some ketone bodies (acetone and acetoacetic acid) were significantly higher in T2DM patients than in healthy controls [118]. Ketone bodies are products of fat catabolism that are used as alternative substrates to glucose for energy when carbohydrate intake is low and there is a surplus of circulating FFAs; they are considered key metabolites in metabolic disruption. Several reports have shown that total ketone bodies were mildly elevated in patients with T2DM, associated with fasting FFAs, and inversely associated with triglycerides and insulin resistance [119]. Phospholipids are critical components of the cell lipid bilayer.
Carbohydrate metabolomics of glycolipid metabolism disorder
An elevated glucose level is an important metabolic feature of glycolipid metabolism disorder. Hexose sugars are the most frequently analyzed carbohydrates in metabolomics studies of incident diabetes mellitus. A prospective study revealed that hexose sugars were positively correlated with T2DM, whereas a species of mannitol and several deoxyhexose sugars were inversely associated with diabetes mellitus risk [121]. Mack et al. conducted oral glucose tolerance tests (OGTT) on healthy, prediabetic, and diabetic participants and found that plasma maltose, trehalose, fructose, and mannose were higher in the prediabetic and diabetic participants than in the healthy participants [122]. At the onset of glycolysis, glucose is converted to pyruvate inside the cell. Lu et al. found that serum pyruvate concentration was significantly higher in T2DM patients than in normal controls, indicating increased glycolysis in T2DM patients [123]. A recent study based on 1H NMR found elevated levels of pyruvate, lactate, and citric acid in T2DM patients, as well as elevated serum levels of tricarboxylic acid (TCA) cycle intermediates such as succinic acid, creatine, and creatinine, compared with healthy controls [124].
The regulatory role of gut microbiota metabolites in glycolipid metabolism disorder
Gut microbiota are involved in the catabolism and anabolism of nutritional elements in daily foods. About 10 % of the circulating metabolites in the human body come from bacteria and participate in human metabolic regulation [125]. The metabolic products of gut microbiota, such as short-chain fatty acids (SCFAs), BCAAs, trimethylamine N-oxide (TMAO), and tryptophan and indole derivatives, are intimately correlated with the pathogenesis of glycolipid metabolism disorder. In the proximal colon, gut microbiota ferment carbohydrates to produce SCFAs (such as acetate, propionate, and butyrate). Numerous in vitro and in vivo studies have revealed that SCFAs, as beneficial microbial metabolites for the prevention and treatment of glycolipid metabolism disorder, participate in maintaining intestinal mucosal integrity, improve glycolipid metabolism, control energy expenditure, and regulate the immune system and inflammatory responses [126,127]. In contrast, in the distal colorectum, protein hydrolysis and fermentation can yield various harmful metabolites such as BCAAs (valine, isoleucine, and leucine), phenols, and ammonia. BCAAs are involved in various bioprocesses such as protein metabolism, gene expression, insulin resistance, and hepatocyte proliferation [128]. Gut microbiota can also directly modulate bile acid (BA) metabolism through the enterohepatic FXR-FGF15-FGFR4 axis; BAs regulate cholesterol and triglyceride metabolism and maintain glucose and energy homeostasis [129,130]. Additionally, the gut microbiota can metabolize choline and L-carnitine from dietary sources (e.g., red meat, eggs, and fish) to produce trimethylamine (TMA), which is then converted into TMAO [131]. In humans, TMAO levels are increased in patients with diabetes, in individuals at risk of diabetes, and in obesity [131-133]. Tryptophan is an essential aromatic amino acid acquired through common dietary sources, including oats, poultry, fish, milk, and cheese. In addition to kynurenine and serotonin, tryptophan can be directly metabolized into indole and its derivatives by gut microbiota, some of which act as aryl hydrocarbon receptor (AhR) ligands [134]. It has previously been observed that metabolic disorders are characterized by a reduced capacity of the microbiota to metabolize tryptophan into AhR agonists [135]. It was recently shown that imidazole propionate (IMP), a metabolite produced by histidine utilization by gut microbiota, is elevated in T2DM and associated with insulin resistance [136].
Fig. 4. Schematic representation of the metabolic pathways involved in glycolipid metabolism disorder; colors distinguish fatty acid, choline, phospholipid, ketone, carbohydrate, tricarboxylic acid (TCA) cycle, and amino acid metabolism.
Other metabolites
1,5-anhydroglucitol (1,5-AG) is the major polyol in vivo, with a structure similar to glucose. Most notably, the 1,5-AG level reflects short-term glucose status, postprandial hyperglycemia, and glycemic variability, which are not captured by the HbA1c assay [137]. Studies found evidence that a single-nucleotide polymorphism in the CYP7A1 coding region associated with deoxycholic acid levels was also associated with T2DM in published GWAS meta-analyses, and that the metabolism of bile acids and phospholipids shares some common genetic origin with T2DM [138]. Ferrannini et al identified α-hydroxybutyrate (α-HB) and linoleoyl-glycerophosphocholine (L-GPC) as joint markers of IR and glucose intolerance [139]. α-HB is an organic acid positioned at an interesting crossroads of intermediary metabolism (amino acid catabolism and glutathione synthesis) and upstream of the TCA cycle. Prior studies have noted that α-HB is derived from α-ketobutyrate and has the potential to identify IR and the risk of impaired glycemic control and conversion of prediabetes to overt diabetes [140,141]. Metabolomic and lipidomic delineation by liquid/gas chromatography-mass spectrometry was conducted on 115 middle-aged Dutch individuals (50 with MetS; 65 controls) in the Leiden Longevity Study [142]. Nine metabolites were negatively correlated and 26 metabolites (mostly acylcarnitines, amino acids, and keto acids) were positively correlated with the metabolic syndrome score. In addition, the metabolic syndrome score was associated with multiple individual metabolites (e.g., valerylcarnitine, pyruvic acid, lactic acid, alanine) and lipids in univariate analyses [143]. These molecules are mainly intertwined with glucose, amino acid, and lipid metabolism.
Gut microbiomics
Intestinal flora is the microbiota colonizing the gut. Its composition and function can be influenced by many factors, such as inheritance, living circumstances, lifestyle, dietary habits, and drugs, and may thus impact glucose and lipid metabolism in the host through inflammation, immune responses, and metabolic pathways [144]. Genomic technologies, such as shotgun metagenomic sequencing and high-throughput sequencing of the 16S rRNA gene, are by far the most commonly employed microbiome sequencing techniques for determining the diversity, composition, structure, distribution, and function of gut microbiota. It has been reported that the gut microbiota of patients with glycolipid metabolism disorder is dominated by opportunistic pathogens, accompanied by decreases in beneficial microbes (Fig. 5).
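A typical first summary derived from such 16S rRNA sequencing data is alpha diversity; the sketch below computes the Shannon index per sample from a toy abundance table (the taxa counts are invented for illustration).

```python
import numpy as np

# Rows: samples; columns: taxa (hypothetical read counts).
counts = np.array([
    [120, 30, 5, 0, 45],   # uneven community -> lower diversity
    [40, 40, 40, 40, 40],  # even community  -> higher diversity
])

def shannon(row: np.ndarray) -> float:
    p = row[row > 0] / row.sum()          # relative abundances of observed taxa
    return float(-(p * np.log(p)).sum())

for i, row in enumerate(counts):
    print(f"sample {i}: H' = {shannon(row):.2f}")
# Reduced H' in obese versus lean groups would reflect the lower richness and
# diversity reported in [151-153].
```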
The gut microbiota plays a predominant role in host nutrient metabolism, xenobiotic and drug metabolism, maintenance of the structural integrity of the gut mucosal barrier, immunomodulation, and protection against pathogens [145]. Perturbations in gut microbiota can have negative health consequences; in particular, the gut microbiota has emerged as an important contributor to the development of glycolipid metabolism disorder. In diabetic humans, gut microbiota profiles lack uniformity. A number of researchers have demonstrated that the relative abundance of the genus Lactobacillus is positively correlated with T2DM [146,147]. Notably, the association of several Lactobacillus species with T2DM is species-specific: in T2DM patients, Lactobacillus acidophilus and Lactobacillus gasseri are decreased, while Lactobacillus xylosus is increased [148,149]. These results suggest that this genus' influence on host metabolism is highly diverse. Studies in different populations have also shown that diabetic gut microbiota contain lower concentrations of Roseburia intestinalis and Faecalibacterium prausnitzii (both butyrate-producing bacteria) and higher levels of Streptococcus mutans and members of the Clostridiales [150].
In recent years, the gut microbiota has attracted attention as a potential driving factor of obesity and its related comorbidities. To date, most studies have shown that obesity is associated with low richness and diversity of the gut microbiota [151-153]. Turnbaugh et al. demonstrated for the first time that transferring the gut microbiota from a genetic obesity model (ob/ob mice) to germ-free mice by fecal microbiota transplantation (FMT) resulted in body fat accumulation and body weight gain in the recipients [154]. In addition, comparisons of the distal gut microbiota of genetically obese mice and their lean littermates, as well as of obese and lean human volunteers, revealed that obesity is associated with changes in the relative abundance of the two dominant bacterial divisions, the Bacteroidetes and the Firmicutes. Liu et al found that the abundance of Bacteroides thetaiotaomicron, a glutamate-fermenting commensal, was markedly decreased in obese individuals [155]. Obesity has also been associated with notable changes in microbiome composition, with genera such as Akkermansia, Faecalibacterium, Oscillibacter, and Alistipes showing significant decreases [156].
Recent advances in joint multi-omics analyses of glycolipid metabolism disorder
The initiation and progression of human diseases involve pathological processes at the genome, transcriptome, proteome, and metabolome levels. Single-omics data reflect changes at only one of these levels and have limited effectiveness in screening disease targets. Comprehensive analysis of multilevel omics data is a more integrative and accurate approach toward individualized treatment, elucidation of the molecular mechanisms of disease, early clinical diagnostics and prognostics, and drug dosing and administration.
Transcriptomics and proteomics combination
The combination of transcriptomics and proteomics simultaneously measures overall RNA and protein status, clarifies their roles in various physiological processes, and reveals their mutual regulation and association. Transcriptomics does not fully reflect all biological characteristics, while proteomics does not dynamically reflect gene expression; integrating the two methods overcomes these mutual deficiencies. Using a combination of transcriptomics and proteomics, Haythorne et al. found significant dysregulation of major metabolic pathways in the islets of diabetic βV59M mice. Multiple genes/proteins involved in glycolysis/gluconeogenesis were upregulated, whereas those involved in oxidative phosphorylation and branched-chain amino acid metabolism were markedly downregulated. Indeed, aldolase B was the most upregulated of all proteins (65-fold), and there were also dramatic increases in both the mRNA (246-fold) and protein (40-fold) levels of the fructose/glucose transporter SLC5A10 [157]. Losko et al. revealed the role of MCPIP1 in adipogenesis and adipocyte metabolism by proteomics and transcriptomics [158]. RNA-Seq analysis followed by confirmatory Q-RT-PCR revealed that elevated MCPIP1 levels in 3T3-L1 adipocytes upregulated transcripts encoding proteins involved in signal transmission and cellular remodeling and downregulated transcripts of factors involved in metabolism. These data are consistent with the proteomic analysis, which showed that MCPIP1-expressing adipocytes exhibit upregulation of proteins involved in cellular organization and movement and decreased levels of proteins involved in lipid and carbohydrate metabolism. Moreover, MCPIP1 adipocytes are characterized by a decreased level of insulin receptor, reduced insulin-induced Akt phosphorylation, depleted Glut4 levels, and impaired glucose uptake.
Metabolomics and proteomics combination
Metabolomics and proteomics both identify disease biomarkers. Combining them allows simultaneous mechanistic and phenotypic studies: it systematically describes the regulation of protein synthesis and metabolism, discloses the upstream and downstream regulatory pathways of key proteins and metabolites, and helps explain the signaling pathways and mechanisms associated with disease development. Researchers have used proteomics and metabolomics to elucidate the mechanism of food-induced cholesterol biosynthesis. They found that elevated postprandial blood glucose and insulin levels activate mTORC1 (mechanistic target of rapamycin complex 1), which phosphorylates USP20 (ubiquitin-specific peptidase 20); phosphorylated USP20 stabilizes HMGCR (3-hydroxy-3-methylglutaryl coenzyme A reductase) and thereby upregulates cholesterol biosynthesis. Long-term high-sucrose, high-fat diets induce USP20 phosphorylation, stabilize HMGCR, increase serum cholesterol, and cause metabolic diseases. By contrast, USP20 inhibition promotes HMGCR degradation, reduces lipid biosynthesis, enhances succinate production, and promotes heat generation. Therefore, USP20 inhibition is potentially an effective therapeutic approach for metabolic disorders including hyperlipidemia, non-alcoholic fatty liver disease, obesity, and T2DM [159]. Wang et al. harvested small intestine tissue and collected serum samples from T2DM-model Chinese hamsters, analyzed them by LC-MS/MS (liquid chromatography-tandem mass spectrometry) proteomics and GC-MS/MS (gas chromatography-tandem mass spectrometry) metabolomics, respectively, and performed joint analyses of the differentially expressed proteins and metabolites. Bioinformatic annotation revealed that the differentially abundant proteins in the small intestine were commonly associated with abnormal glucose and lipid metabolism, IR, impaired insulin secretion, amino acid metabolism disorders, and inflammatory dysregulation, while the differentially abundant metabolites in serum were amino acids related to diabetic IR. Combined analysis of metabolomics and proteomics revealed significant changes in glutathione metabolism; phenylalanine, tyrosine, and tryptophan biosynthesis; and arginine and proline metabolism in T2DM-model Chinese hamsters [160].
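The pathway annotation step in such joint analyses is usually an over-representation test; a minimal sketch with hypothetical counts (none taken from [160]) follows.

```python
from scipy.stats import hypergeom

M = 4000  # assumed background: all annotated proteins/metabolites detected
n = 60    # background members of one pathway (e.g., glutathione metabolism)
N = 150   # total differentially abundant features
k = 12    # differential features that fall in the pathway

p = hypergeom.sf(k - 1, M, n, N)  # P(X >= k) under random draws
print(f"enrichment P = {p:.2e}")
# A small P suggests the pathway is over-represented among the joint
# protein/metabolite hits, as reported for glutathione metabolism in [160].
```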
Gut microbiomics and metabolomics combination
Gut microbiota are vital to human metabolism. They provide enzymes for various biochemical and metabolic pathways in the host, participate in amino acid, bile acid, and carbohydrate metabolism, and form co-metabolic relationships with the host. Metabolomics, based on high-throughput analysis and bioinformatics, investigates variations and trends in overall endogenous metabolism. It can detect metabolites produced by the gut microbiota, reflect changes in gut microbiota function under specific conditions, directly examine the relationships between the gut microbiota and disease development and progression, and provide a research basis for disease prevention and treatment. Analysis of gut microbiota diversity based on shotgun metagenomic and 16S rRNA gene sequencing, combined with metabolomics, comprehensively explores the relationships among the gut microbiota, disease occurrence, drug metabolism and pharmacodynamics, and gut microbiota structure and function. Pedersen et al. found that the serum metabolome of insulin-resistant individuals is characterized by increased levels of BCAAs, which correlate with a gut microbiome that has an enriched biosynthetic potential for BCAAs and is deprived of genes encoding bacterial inward transporters for these amino acids. Prevotella copri and Bacteroides vulgatus were identified as the main species driving the association between BCAA biosynthesis and IR [161]. A metagenomic and targeted metabolomic analysis was conducted in 182 lean and abdominally obese individuals with and without newly diagnosed T2DM. The abundance of Akkermansia muciniphila (A. muciniphila) was significantly lower in lean individuals with T2DM than in those without T2DM. Its abundance correlated inversely with serum 3β-chenodeoxycholic acid (3β-CDCA) levels and positively with insulin secretion and fibroblast growth factor 15/19 (FGF15/19) concentrations [162].
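The computational core of such microbiome-metabolome studies is an all-pairs association scan with multiple-testing correction. The sketch below assumes hypothetical sample-by-taxon and sample-by-metabolite tables; it illustrates the general approach, not the exact pipelines of the cited studies.

```python
import pandas as pd
from scipy.stats import spearmanr
from statsmodels.stats.multitest import multipletests

taxa = pd.read_csv("taxa_relative_abundance.csv", index_col=0)  # samples x taxa
mets = pd.read_csv("serum_metabolites.csv", index_col=0)        # samples x metabolites
taxa, mets = taxa.align(mets, join="inner", axis=0)             # keep shared samples

records = []
for t in taxa.columns:
    for m in mets.columns:
        rho, p = spearmanr(taxa[t], mets[m])
        records.append((t, m, rho, p))

res = pd.DataFrame(records, columns=["taxon", "metabolite", "rho", "p"])
# Control the false discovery rate across all taxon-metabolite pairs before
# interpreting hits such as a serum BCAA tracking Prevotella copri abundance.
res["q"] = multipletests(res["p"], method="fdr_bh")[1]
print(res.sort_values("q").head())
```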
Joint multi-omics analyses
To identify the early stages of T2DM, researchers obtained samples from 106 healthy and prediabetic individuals over approximately four years and profiled their transcriptomes, metabolomes, cytokines, and proteomes, as well as changes in their microbiomes [163]. Regression analyses of steady-state plasma glucose (SSPG) in insulin-resistant and insulin-sensitive subjects disclosed that IR was associated with elevated inflammation. It was also associated with altered lipid metabolism, and several long-chain polyunsaturated fatty acids were positively correlated with SSPG. The researchers also analyzed the relationships between the gut microbiota and host metabolites. In insulin-sensitive but not insulin-resistant subjects, Barnesiella spp. were positively correlated with IL-1β and Faecalibacterium spp. were negatively correlated with TNF-α (tumor necrosis factor alpha). Butyricimonas spp. were negatively correlated with four lipids in insulin-resistant subjects. Multi-omics profile analyses revealed molecules that were unique to each individual and differed from the cohort mean. One subject had abnormal levels of various metabolites and cytokines relative to the cohort average. Ten months after the final medical visit, the subject was diagnosed with T2DM, and the multi-omics data indicated dysregulation of T2DM-related pathways. IL-1ra and high-sensitivity C-reactive protein (hsCRP) were highly elevated during the last three medical visits prior to the T2DM diagnosis. The researchers detected exogenous substances such as methyluric acid and methylxanthine among the molecules strongly associated with IL-1ra. These substances are metabolites associated with glucose tolerance dysfunction and gut dysbiosis, and they were closely associated with the expression of host factors in the complement system, acute immune response signaling, and the lipopolysaccharide (LPS)-stimulated mitogen-activated protein kinase (MAPK) pathway, all of which are associated with the development of T2DM. Loss of gut microbial diversity and gain of body weight were also observed in subjects who were diagnosed with T2DM. Other researchers used linear mixed models to examine the underlying relationships among glucose (FPG, HbA1c), inflammation (hsCRP), and multi-omics measurements, using healthy-baseline models and dynamic models of relative changes across all time points [164]. The study indicated that both HbA1c and hsCRP were positively associated with total white blood cell (WBC), monocyte, and neutrophil counts. Hepatocyte growth factor (HGF) was also associated with HbA1c and hsCRP, consistent with its role in glucose metabolism and modulation of the inflammatory response. The authors also reported that FPG and HbA1c were associated with "leukotriene biosynthesis", which contributes to inflammation and leads to insulin resistance. HbA1c was also associated with other lipid metabolism-related pathways, including "plasma lipoprotein assembly" and "chylomicron assembly". These findings underscore the connections among inflammation, lipid metabolism, and glucose metabolism, as well as the regulation of these processes.
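The linear mixed models used in the second study can be sketched as follows: a clinical marker is regressed on omics features with a per-subject random intercept to account for repeated visits. The table and column names are assumptions for illustration, not the study's actual schema.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format table: one row per subject visit, with columns
# "subject", "hba1c", "hscrp", and "neutrophils".
df = pd.read_csv("longitudinal_visits.csv")

# A random intercept per subject handles correlation among repeated visits.
model = smf.mixedlm("hba1c ~ neutrophils + hscrp", data=df,
                    groups=df["subject"])
fit = model.fit()
print(fit.summary())
```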
Discussion
The incidences of disorders of glycolipid metabolism such as T2DM, obesity, and hyperlipidemia have risen to epidemic proportions and pose serious threats to human health. Important objectives in medical research are to elucidate the pathogenesis of glycolipid metabolism disorder and to develop and implement efficacious prevention and treatment strategies. Neuroendocrine axis dysfunction, IR, oxidative stress, chronic inflammatory responses, and gut microbiota dysbiosis are now considered the main pathological mechanisms of glycolipid metabolism disorder. They interact in an interwoven network that initiates disease and drives its progression. For most patients, existing preventive measures such as altering dietary habits or increasing exercise are largely ineffective. Available therapeutic measures can improve patient health status to a certain extent, but it remains difficult to restore metabolic levels to normal. Thus, further exploration of the pathogenesis of glycolipid metabolism disorder is necessary. Technological advances have ushered in the 'omics era', which enables the collection and integration of data and information at different molecular levels. The information obtained through omics techniques will contribute to a better understanding of glycolipid metabolism disorder pathophysiology, offer new opportunities for diagnosis and prognosis, and lead to improved management of patients with glycolipid metabolism disorder. However, owing to limitations in the development of omics technologies and the complexity of research on glycolipid metabolism disorder, multi-omics research in this field still faces numerous challenges.
Multi-omics studies reveal the pathophysiology of glycolipid metabolism disorder
Multi-omics studies usually examine genes (genomics), RNA (transcriptomics), proteins (proteomics), and downstream metabolites (metabolomics) produced during DNA replication, transcription, translation, and post-translational modification, respectively. Multi-omics data provide evidence for pathogenesis, identify biomarkers, and reveal therapeutic targets for glycolipid metabolism disorder.
Research into the genes that regulate susceptibility to glycolipid metabolism disorders, including TCF7L2, PPARG, KCNJ11, SLC30A8, FTO, and others, is crucial. Genetic polymorphisms affect glycolipid metabolism disorder mainly by decreasing pancreatic β-cell function or increasing IR. GWAS have disclosed disease-related targets by contrasting genomic data for cases and controls at the population level. GWAS have returned encouraging results and helped direct and focus future research. The prediction of glycolipid metabolism disorder by screening susceptibility genes is in its infancy, and few studies have been conducted in this area; nonetheless, such studies can provide clues for exploring pathogenesis and searching for drug targets. Pharmacogenomics, which is based on genomics, is the study of the interrelationships between genetic polymorphisms and drug effects. Pharmacogenomics helps improve drug efficacy and safety, guides the research and development of new drugs, and provides a reference for individualized clinical medication. Several studies have demonstrated that individual differences in pharmacological therapy for glycolipid metabolism disorder are closely associated with genetic polymorphisms in drug transporters and targets, drug-catabolizing enzymes, and genes related to the risk of developing glycolipid metabolism disorders.
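At its core, the case-control contrast behind GWAS is an association test repeated at every variant. A minimal sketch with illustrative allele counts (not real genotype data):

```python
from scipy.stats import chi2_contingency

# Rows: risk allele vs. alternative allele; columns: cases vs. controls.
table = [[620, 480],
         [380, 520]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.2g}")
# A real GWAS repeats this across millions of variants and applies a
# genome-wide significance threshold (commonly p < 5e-8).
```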
Non-coding RNAs (ncRNAs) regulate gene expression at the transcriptional and post-transcriptional levels and affect the progression of glycolipid metabolism disorder. LncRNAs affect several molecular signaling pathways and participate in glycolipid metabolism disorder. Lnc-BATE1 establishes and maintains brown fat and its thermogenic capacity. Downregulation of lncRNA TUG1 and lncRNA GAS5 is connected to the occurrence of glycolipid metabolism disorder. MiRNAs regulate target gene expression at the post-transcriptional level and may serve as clinical diagnostic biomarkers. MiR-375 is specific to pancreatic β cells, and its overexpression inhibits glucose-induced insulin secretion in these cells. Overexpressed miR-7, miR-200, miR-29, and miR-802 play vital roles in the pathogenesis of glycolipid metabolism disorder. CircRNAs regulate genes, compete with miRNAs for binding sites, and influence glycolipid metabolism disorder. In islet cells, CDR1-as overexpression interferes with miR-7 function and improves insulin levels. Silencing circArhgap5-2 may inhibit lipid droplet accumulation and downregulate adipogenic markers.
Proteomics effectively identifies dysregulated proteins and pathways in cells under pathological conditions and helps discover disease-specific mutations and epigenetic alterations. C-reactive protein and α2-macroglobulin are sensitive markers of T2DM.
MASP is positively correlated with T2DM and prediabetes, whereas adiponectin is negatively correlated with T2DM onset. Cathepsin D, leptin, renin, IL-1ra, and t-PA are IR biomarkers. Leukocyte common antigen-related phosphatase, PTP-α, and PTP-1B are upregulated in the liver, skeletal muscle, and adipose tissue of obese persons. The proteomics of glycolipid metabolism disorder is in its infancy. Nevertheless, progress has been made in proteomic studies of β cells, skeletal muscle, and adipose tissue. Other novel biomarkers will eventually be discovered, and they might play pivotal roles in the analysis of clinical serum, plasma, and urine samples.
Metabolites are downstream outputs of the genome and upstream inputs from the environment; studies of metabolites and metabolomics can therefore disclose gene-environment interactions [165]. Several studies have demonstrated that BCAAs, tyrosine, phenylalanine, 2-AAA, FFAs, ceramides, TAGs, DAGs, PEs, hexose, maltose, trehalose, fructose, mannose, deoxycholic acid, 1,5-AG, and LPS are associated with numerous disease pathways in glycolipid metabolism disorder. Gut microbiota produce various metabolites that function as signaling molecules and substrates for host metabolic responses; these affect both physiological and pathological processes in the host. Previous research has shown that the gut-derived metabolites SCFAs, BCAAs, bile acids, and TMAO are closely associated with the development of glycolipid metabolism disorder.
The gut microbiota are novel potential drivers of the pathophysiology of glycolipid metabolism disorder; they interact with obesity, low-grade inflammation, IR, and T2DM and might function as hubs connecting them. Human gut microbiota also affect brain function and alter host behavior via the microbe-gut-brain axis [166]. They can promote host metabolic health by facilitating weight loss and improving blood glucose control and IR, among other effects. Hence, the gut microbiota are promising drug targets for the treatment of glycolipid metabolism disorder. Many patients with glycolipid metabolism disorder have moderate gut dysbiosis. The abundances of Lactobacillus spp., Lactobacillus gasseri, and Streptococcus mutans are elevated, while those of Roseburia intestinalis, Faecalibacterium prausnitzii, Bacteroides thetaiotaomicron, Akkermansia spp., Faecalibacterium spp., Oscillibacter spp., and Alistipes spp. are reduced in patients with glycolipid metabolism disorder. As the gut microbiota play vital roles in human health and disease, antibiotic, probiotic, and prebiotic administration might regulate the gut microbiota and, by extension, glycolipid metabolism. Several studies have shown that reasonable probiotic or prebiotic supplementation can regulate the host gut microbiota, thereby ameliorating energy metabolism and controlling chronic low-level inflammation [167]. Probiotic therapy improved glucose intolerance, hyperlipidemia, and hyperinsulinemia in a glucose-induced diabetic mouse model [168].
Single-omics research lacks multilevel integration and has limited utility in determining the etiology of complex diseases. For these reasons, multi-omics is now widely applied in glycolipid metabolism disorder research. Multi-omics confirms pathogenesis through both macro- and micro-etiology, comprehensively and systematically investigates the roles of the environment and genetics in glycolipid metabolism disorder, and elucidates the pathogenic factors and molecular mechanisms of these diseases. Multi-omics also explores the factors mediating the association between environment and lifestyle in glycolipid metabolism disorder and could, therefore, clarify pathogenic mechanisms. Multi-omics could also help develop risk prediction models for use in precision medicine, predict disease in high-risk individuals, and screen drug treatment subjects to monitor drug efficacy and adverse reactions.
Multi-omics studies have limitations
Multi-omics studies of glycolipid metabolism disorder are gradually deepening with the development of omics technology and bioinformatics. Precise diagnosis and treatment of glycolipid metabolism disorder based on joint multi-omics analyses could eventually dominate the field. However, several limitations remain. Early and timely diagnosis of glycolipid metabolism disorders could improve the control and prognosis of these diseases. GWAS have expanded the identification of gene loci associated with glycolipid metabolism disorder. In clinical practice, however, the applicability of known susceptibility polymorphisms is limited. Furthermore, existing gene loci explain only part of the genetic variation in glycolipid metabolism disorder and require validation in clinical samples and trials. Stable, detectable ncRNAs could serve as molecular markers for the clinical diagnosis and prognosis of glycolipid metabolism disorder. Nevertheless, there are few comprehensive studies on ncRNAs, their mechanisms are unclear, and large-sample data are lacking. Proteomics has been widely applied in the study of diseases related to glycolipid metabolism disorder. As proteomics is relatively new, it is not yet optimally reproducible. Certain rare proteins are difficult to identify, and studies on them are time-consuming and expensive. Current research still largely compares tissue and serum samples between healthy individuals and those diagnosed with glycolipid metabolic disorders. Metabolomics is an important technological approach in the study of glycolipid metabolism disorder. Its integration with multi-omics data has begun to elucidate the complex relationships among the gut microbiota, host metabolism, and the pathogenesis of glycolipid metabolism disorder. This investigative strategy has also led to novel diagnostic and therapeutic approaches and has laid a solid foundation for precision medicine. However, metabolomics lacks a universal analytical platform and mature, consistent operational methods. Moreover, its results are often conflicting, and most studies have not been analytically or clinically validated. Hence, there are few examples of highly efficient application of metabolomics in clinical research. Gut dysbiosis has been implicated in obesity, diabetes, other diseases, and their progression, and ameliorating gut dysbiosis might help treat glycolipid metabolism disorder. However, the precise components and metabolic activities of the gut microbiome associated with glycolipid metabolism disorder are unknown. Evidence from animal experimentation suggests that the gut microbiota are key to the development of obesity, inflammation, insulin resistance, and intestinal barrier dysfunction. Nevertheless, there are few human clinical mechanistic studies, and randomized, large-sample, multicenter clinical trials are required. Although joint multi-omics analyses expand investigations into glycolipid metabolism disorder, they may lead to false discoveries because of the combined effects of multiple factors and high variability among individual datasets. Thus, it is difficult to interpret multi-omics data or use them to identify biologically relevant molecules. As international data become publicly available and analytical platforms and collaborative groups increase, the resources available for multi-omics studies will become more abundant and the cost of research will decrease dramatically.
On the other hand, ongoing research requires long-term follow-up and laboratory tests, and the associated ethical and data-sharing issues are of great concern. GWAS has high throughput and low genome detection costs and has, therefore, been widely used in large-scale cohorts. By contrast, metabolomics and proteomics have relatively low throughput and high cost. For these reasons, it is still comparatively uncommon to apply these omics in large-scale cohort assays. Integrative multi-omics data analysis is still in its infancy, and universal data integration and analytical methods must be developed to make full and effective use of available multi-omics data.
Future perspectives
Innovations in high-throughput technologies and omics data (genomes, transcriptomes, proteomes, metabolomes, and so on) have disclosed risk factors and helped develop novel biomarkers associated with glycolipid metabolism disorder. Disease biomarkers reveal specific pathological features and detect changes in the status of various medical conditions. Though they may have high predictive efficacy in clinical studies, they nonetheless have certain limitations. The reliability of a biomarker depends upon the genetic background of the study subjects, the treatment regimen, the composition of the gut microbiome, and the timing of diagnosis and intervention. Moreover, biomarkers do not possess universal clinical value. Therefore, multicenter, large-scale, standardized clinical studies are required to improve the diagnostic efficacy and practical applicability of glycolipid metabolism disorder biomarkers.
The development of personalized treatment options for glycolipid metabolism disorder requires clinical feature information, the integration of complex multi-omics data, and a metabolite network map. In this manner, the expression patterns of key functional genes and signaling pathways regulating glycolipid metabolism disorders may be precisely mapped. For these reasons, multidimensional data sources must be integrated into big-data studies to develop precision medicine for glycolipid metabolism disorder. A systematic and comprehensive understanding of the risk factors associated with glycolipid metabolism disorder is required to predict and prevent these medical conditions.
Prospective multi-omics cohort studies, with high-quality baseline and follow-up information, biological samples, and multi-omics detection, should be applied. In this way, novel risk factors associated with glycolipid metabolism disorders may be identified and their pathogenesis elucidated. Future multi-omics studies on glycolipid metabolism disorder will require more clinical samples for verification and must develop and validate stratified risk models. Such models apply bioinformatic analysis and integrate high-throughput genomics, transcriptomics, proteomics, and metabolomics data to obtain comprehensive information regarding susceptibility genes, mechanistic pathways, and disease-stage markers of glycolipid metabolism disorder. This information will facilitate early, precise intervention and enable accurate diagnosis and treatment in populations with a high risk and incidence of glycolipid metabolism disorders.
Authors' contributions
Jiaxing TIAN developed the review question. The initial literature review was performed by Xinyi FANG. The first draft of the manuscript was written by Xinyi FANG with all authors commenting on subsequent versions of the manuscript. All authors read and approved the final manuscript.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Human and animal rights and informed consent
This article does not contain any studies with human or animal subjects performed by any of the authors.
"year": 2022,
"sha1": "51f6af0b0bc58ad62b0efa535f695125db5db025",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.csbj.2022.10.030",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b1caf7d7c5d1a5b4b5d886ce89e717e916cfb20d",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
Cattle and Pigs That Are Easy to Move and Handle Will Have Less Preslaughter Stress
Previous research has clearly shown that short-term stresses during the last few minutes before stunning can result in Pale Soft Exudative (PSE) pork in pigs or increased toughness in beef. Electric prods and other aversive handling methods during the last five minutes are associated with poorer meat quality. Handlers are more likely to use aversive methods if livestock constantly stop and are difficult to move into the stun box. Factors both inside and outside the slaughter plant contribute to handling problems. Some in-plant factors are lighting, shadows, seeing motion up ahead, or air movement. Non-slip flooring is also very important for low-stress handling. During the last ten years, there have been increasing problems with on-farm factors that may make animals more difficult to move at the abattoir. Cattle or pigs that are lame or stiff will be more difficult to move and handle. Factors associated with lame cattle include poor design or inadequate bedding in dairy cubicles (free stalls) and housing beef cattle for long periods on concrete floors. Poor leg conformation in both cattle and pigs may also be associated with animals that are reluctant to move. Indiscriminate breeding selection for meat production traits may be related to some of the leg conformation problems. Other on-farm factors that may contribute to handling problems at the abattoir are high doses of beta-agonists or cattle and pigs that have had little contact with people.
Introduction
Many research studies have previously shown that short-term stresses during the last few minutes before stunning may result in both poorer meat quality and severely compromised animal welfare. In pigs, multiple shocks from electric prods within five minutes before stunning resulted in higher lactate levels and more Pale Soft Exudative (PSE) meat [1-3]. Shocking pigs multiple times with an electric prod greatly increased lactate and glucose levels compared to low-stress handling with no use of electric prods. In cattle, short-term stresses shortly before stunning, such as the use of electric prods or agitated behavior, were associated with increased toughness in the meat [4,5]. The purpose of this commentary is to discuss the author's observations of the increase in on-farm factors that may make animals more difficult to move at the abattoir.
When cattle or pigs are difficult to move and constantly keep stopping, handlers are more likely to use aversive methods, such as electric prods or tail twisting, to drive them [6]. Pigs that move easily also require fewer touches, slaps, or pushes. In this article, the author discusses some of the factors that are associated with animals that are difficult to move at the slaughter plant. Correcting these problems will help improve both welfare and meat quality.
There are factors both inside and outside the abattoir that can affect the ease of animal movement. Major factors inside the plant are distractions, such as sharp shadows on the floor, reflections on shiny metal, or a noisy vehicle near the lairage (stockyards), that may cause animals to stop [6-9]. Illuminating a dark restrainer entrance facilitated the movement of cattle, and it reduced the vocalization associated with electric prod use [7].
Other factors that can slow down animal movement are air blowing out through the stun box entrance towards approaching cattle or layout mistakes in the design of facilities [9]. Abattoir yards and lairages should also have non-slip flooring in pens, alleys, and stun boxes. If animals slip and fall on a slick floor, they are more likely to become stressed. It is also essential to have well-trained stockpeople who understand and use the behavioral principles of moving livestock [9,10]. Training stockpeople improves livestock handling and reduces the use of aversive driving methods [11].
The main emphasis of this article is to discuss on-farm factors, such as housing problems, growth promotants, or over-selection for production traits, that may be associated with lame cattle and pigs that are reluctant to move. It contains both scientific studies and observations from the author's experiences with handling livestock. Since the early 1970s, the author has consulted on improving livestock handling in abattoirs in the U.S., Europe, South America, Australia, and many other countries. High preslaughter standards for animal welfare are difficult to maintain if animals are lame, stiff, or have reduced mobility. These problems must be corrected at the farm of origin. The author has observed that handling problems at the abattoir are increasingly associated with breeding, feeding, or housing practices on the farm.
On-Farm Factors Associated with Handling Problems at the Abattoir
Within the last ten years, the author has observed that problems with cattle and pigs that are less willing to move have increased. Recently, a lairage manager at a large beef abattoir told the author that cattle from certain feedlots would immediately lie down after arrival. They were so reluctant to move that he had to get them up a few minutes before they went to the stunner. A variety of conditions may have contributed to these handling problems. Over the years, the author has observed that these problems have slowly become worse; issues with poor mobility in pigs and cattle have increased so gradually that people did not notice. The author calls this "bad becoming normal" [12].
In the U.S., the percentage of lame grain-fed cattle has increased. In 2020, only 74.5% of grain-fed beef cattle were free of lameness [13]. These data were collected from July to October on 16,262 fed feedlot cattle that arrived at a large abattoir located in the Central Plains of the U.S., an area in the heart of the U.S. feedlot industry. In the preceding years, 2016 through 2019, 96.19% to 89.32% of the fed feedlot cattle that arrived at an abattoir were free of lameness [13]. There are also some dairies with high percentages of lame cows. Lame livestock that are reluctant to move may be more likely to have stressful, aversive handling methods used on them at the slaughter plant. A recent Brazilian survey of 50 dairies showed that 41% of the cows were lame [14]. The percentage of lame cows varied from 13.8% in the best dairy to 64.5% in the worst dairy [14]. Recent studies conducted in the UK and Canada indicated that 31.8% of dairy cows in the UK were lame [15] and that 21% of dairy cows kept in cubicles were lame [16]. Research also clearly shows that dairy producers often greatly underestimate the percentage of lame dairy cows [17]. When lameness is actually measured, they may discover that the percentage of lame dairy cows is double their estimate.
Poor Structural Leg Conformation
The author has observed that grain-fed market cattle and pigs that are indiscriminately bred for growth are more likely to be lame and reluctant to move. At one large abattoir, 50% of the incoming market-weight pigs were lame. Approximately half of these pigs had poor leg conformation, exhibiting traits such as overly straight legs (post-leggedness), collapsed ankles, or rotated feet. The problem probably starts with the sow herd: breeding stock with poor leg conformation had a higher rate of culling due to lameness [18]. Pork and beef producer organizations have now recognized the problem, and they have distributed leg conformation charts for producers to use when they select breeding stock [19,20]. The American Angus Association has an expected progeny difference (EPD) for leg conformation [21]. It was created in response to producer reports that leg conformation was worsening in Angus cattle selected for rapid weight gain and large muscles. In Thailand, the author observed severe lameness issues in pigs that had been selected to have small feet.
Deficiencies in Housing Associated with Lameness or Swollen Joints
Poorly designed or poorly managed housing is associated with leg injuries that may lead to lameness in both dairy cows and beef cattle. Housing fattening beef cattle on concrete for long periods of time can lead to swollen joints [22]. In dairy cows, free stalls (cubicles) that are either too small or poorly bedded are associated with more leg problems and swollen joints [23,24]. Farms with better bedding management had fewer cows with swollen joints [23]. Other factors associated with increased lameness were slippery floors and poor body condition [24]. In one survey, 40% of thin dairy cows with a body condition score under 2.5 were lame [24]. Improvements in flooring and maintenance of cow body condition may also help reduce lameness. This shows the importance of good on-farm management for reducing lameness. Charolais bulls housed on a concrete slatted floor had significantly more lameness than bulls housed on deep litter [25]. The author has observed that lameness in cattle housed on concrete can be reduced by shortening the period of time they are kept on concrete. Covering the slats with rubber mats may also help reduce lameness. More research is needed to determine guidelines for the maximum length of time that fattening cattle should be housed on concrete floors.
Excessive Use of Growth Promotants and Handling Problems
Research has clearly shown that pigs fed high doses of beta-agonists are more likely to become fatigued and non-ambulatory [26]. Non-ambulatory pigs increase labor requirements at the abattoir, and their welfare is severely compromised. Ractopamine and zilpaterol are feed additives used to increase the amount of muscle [27]. High doses of ractopamine were associated with greater difficulty in handling pigs [28,29]. Hot weather is also more likely to increase death losses in cattle fed ractopamine [30,31]. Handling problems observed by the author and reports in the scientific literature both indicate that high doses combined with hot weather over 32 °C caused the most problems [8,29]. Pigs fed beta-agonists must be handled in a low-stress manner to prevent downed, non-ambulatory animals [32]. Researchers at a Colorado feedlot also reported that feeding zilpaterol predisposed grain-fed cattle to heart problems [33]. Behavioral observations of cattle indicate that they may also have muscle stiffness. Feeding zilpaterol at the recommended label dose to grain-fed cattle resulted in 31% of the animals lying in an abnormal posture [34]. It is likely that the abnormal lying posture reflects attempts to reduce muscular discomfort. In one case, a group of cattle fed a high-carbohydrate potato by-product diet combined with high doses of zilpaterol included some animals in which the outer hoof sloughed off [35,36]. Some specialists who are concerned about both meat quality and animal welfare may ask whether transportation practices contribute to this problem. Transportation is likely not the main cause: the locations of the abattoirs and of the feedlots that supply the cattle had not changed, and both before and after the appearance of the handling problems, the cattle were transported the same distances from most of the same feedlots. The use of beta-agonists is banned in Europe and China [37]. They are legal in the U.S., Canada, Brazil, and many other countries [37]. It is important for readers in countries where beta-agonists are legal to be aware of possible handling and welfare problems, which are more likely to occur when higher doses are used.
The Concept of Biological System Overload
The author has been in the livestock industry for many years. From the 1970s through the 1990s, most welfare and handling problems in an abattoir were due to poorly designed facilities, lack of equipment maintenance, or rough, abusive handling by people [9]. Today, in a well-managed U.S. slaughter plant, handling problems with grain-fed cattle or pigs are more likely to be associated with on-farm factors. The problem may also be due to pushing livestock to gain weight fast, which is accomplished by both genetic selection for production traits and feeding practices. The animal's biology is pushed to the point where it starts to break down. Cardiac problems in cattle used to occur only at high altitudes; researchers have found that they are now occurring at lower altitudes [38]. Heart problems in cattle associated with high altitude are heritable [39,40]. It is possible that heart problems are related to a greater emphasis on breeding cattle for large amounts of muscle mass. In 2015, veterinarians described a condition in cattle called fatigued cattle syndrome [36], which is similar to the problems seen in weak, fatigued pigs. Four factors may have led to the relatively recent observations of more problems with both cattle and pigs that are reluctant to move: (1) cattle fed to heavier weights at a younger age and more cattle fattened for highly marbled USDA Prime beef [41,42]; (2) indiscriminate breeding and selection for growth and muscle mass in both species [42]; (3) feeding high-grain diets to cattle with a lack of roughages [25]; and (4) high doses of beta-agonists fed to both cattle and pigs [28,29].
It is the author's opinion that pushing the animal's biology until it starts to break down may be one of the most serious animal welfare problems [43]. These problems also contribute to meat quality problems. The author has tracked non-ambulatory pigs through to the meat-cutting floor, and they had high levels of PSE. Increasing muscle growth with beta-agonists also resulted in increased beef toughness [44]. Producers should strive for optimum performance, not maximum growth and muscle. Producing animals that convert feed more efficiently into muscle is good from a sustainability standpoint because they eat less feed, but there is a point where it becomes both unsustainable and bad for animal welfare. An animal that dies shortly before slaughter wastes all the feed it has eaten.
On-Farm Behavioral and Management Factors
The discussions in the previous sections of this paper covered physical problems that make animals more difficult to handle. This section covers factors that are purely behavioral. The author has observed that pigs and cattle will move more easily during handling at the abattoir if they have become accustomed to people walking through them. An animal's experiences on the farm will affect its behavior during handling in the future. When people regularly walk through the finishing pens several times each week, pigs will move more easily at the slaughter plant. Finishing pigs at market weight that have been moved several times on the farm will be easier to move and drive through alleys in the future [45-48]. Pigs differentiate between a person walking in the aisle and a person walking through their pens. From the author's experience on farms, pigs will be easier to handle if they become accustomed to quietly moving away when a person walks through their pens. The author has observed that cattle that have been extensively raised are sometimes difficult and dangerous to handle at the slaughter plant. Cattle can tell the difference between a person walking on the ground and a person riding a horse. Extensively raised cattle that have been handled with horses may have a greatly increased flight zone when they first encounter a person walking on the ground: the horse and rider are perceived as familiar and safe, while the person on foot is new and novel [49]. Handling at the abattoir will be safer, and cattle will be less stressed, if they become accustomed to being moved in and out of pens by people on foot before they arrive at a slaughter plant.
Two Observational Case Histories Where On-Farm Practices Had Significant Effects on Ease of Handling
The author consulted with a large pork plant that had severe problems with downed, non-ambulatory finisher pigs and pigs that were difficult to move. Handling all the fatigued, non-ambulatory pigs required five or six full-time people to stun the pigs in the lairage and transport the stunned animals to the bleeding area. After three changes were made on the farms, the number of downed, non-ambulatory pigs dropped to the point where only one half-time person was required to handle downers. The three changes were: (1) eliminating breeding to a boar line that had poor leg conformation, (2) reducing or eliminating ractopamine use, and (3) starting a program that required producers to walk through the finishing pens every day. This trained the pigs to quietly get up and walk away from a person walking among them. These observations clearly showed how on-farm factors can have detrimental effects on both handling practices and animal welfare.
There has been much discussion about designing and building better vehicles to reduce stress on pigs during loading and transport. In Europe, many new vehicles have power lifts and movable decks that eliminate ramps for loading and unloading pigs. In the eastern U.S., the height of almost all trucks is restricted to 13 feet 6 in. (4.11 m) due to low bridges [50]. A vehicle will be too tall if two decks of cattle are positioned above the wheels on a level floor, so two decks of cattle are accommodated in a compartment between the axles [51]. This design has internal ramps to load and unload the animals on and off the two decks. These trailers work well for cattle, but some pigs have difficulty negotiating the ramps. This has resulted in a straight trailer design for pigs that has two decks located over the top of the wheels; use of these trailers is limited to pigs, sheep, or other small animals. Many independent truckers in the U.S. who own their own vehicles need the flexibility of transporting both cattle and pigs in the same trailer. These owner-operators will usually use a cattle trailer with internal ramps to transport pigs. Some people who are concerned about animal welfare believe that this trailer design should not be used for pigs.
In the spring of 2021, the author visited an abattoir that processed pigs that lived outdoors. All of the pigs arrived in cattle trailers with internal ramps. The author watched four trailers unload at this abattoir. The pigs moved easily up the ramp from the belly compartment and down the ramp from the top compartment. There was zero use of electric prods, and none of the animals fell during unloading. For these pigs, the cattle trailers with internal ramps were satisfactory. People who are concerned about animal welfare or meat quality need to think about how to improve handling. The question is: do you improve the pig, or should you improve the design of the vehicle? This recent experience reinforced the author's opinion that many intensively raised pigs have become very difficult to handle. It is the author's opinion that the pig needs to be improved by changing breeding, feeding, and production practices. The author is not suggesting raising all finishing pigs outside. What is being suggested is that the emphasis needs to be on improving the pigs so that they are stronger and more willing to move. One study showed that exercising pigs improved ease of handling [52]. Possible ways of doing this are genetic selection and regular moving of pigs on the farm. During the springtime observation of many pigs in the stockyard (lairage), there was only one group with poor leg conformation. The author warned the company that it needs to keep working with producers to breed pigs that have good leg conformation.
Conclusions
Previous research clearly shows that, to preserve meat quality and maintain good animal welfare, cattle and pigs should move easily with minimal use of aversive driving methods such as electric prods, tail twisting, or hitting. The meat industry needs to address increasing problems with on-farm factors that may make cattle and pigs more difficult to move at the abattoir. Lame animals that have difficulty walking are more difficult to handle. Some of the on-farm factors that may contribute to these problems are poor leg conformation in both pigs and cattle, housing finishing cattle for long periods on concrete, poor management of dairy cow cubicles, and feeding high doses of beta-agonists. To improve both animal welfare and meat quality, producers need to correct these problems.
Institutional Review Board Statement: Not applicable, because the author was not conducting a research study; the observations were made during normal commercial operations in the course of previous work with livestock clients.
Informed Consent Statement: Not applicable because no human subjects were involved.
Data Availability Statement:
There are no data associated with this paper.
Conflicts of Interest:
The author declares no conflict of interest. The author does work as a consultant to the livestock industry.
"year": 2021,
"sha1": "fafabff078aba49c2ce6fc09528e8bf9cd51406e",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2304-8158/10/11/2583/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ad8e25b0768d2cecd9a9a22ccf10efb6ead344b5",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
The Transcription Factor Ets1 Influences Axonal Growth via Regulation of Lcn2
Transcription factors are essential for the development and regeneration of the nervous system. The current study investigated key regulatory transcription factors in rat spinal cord development via RNA sequencing. The hub gene Ets1 was highly expressed in the spinal cord during the embryonic period, and then its expression decreased during spinal cord development. Knockdown of Ets1 significantly increased the axonal growth of cultured spinal cord neurons. Luciferase reporter assays and chromatin immunoprecipitation assays indicated that Ets1 could directly bind to the Lcn2 promoter and positively regulate Lcn2 transcription. In conclusion, these findings provide the first direct evidence that Ets1 regulates axon growth by controlling Lcn2 expression, and Ets1 may be a novel therapeutic target for axon regeneration in the central nervous system.
Introduction
The intrinsic regenerative capacity of neurons, which is lost in a development-dependent manner, is crucial for spinal cord regeneration [1]. It is strongest during the embryonic stage, decreases in infancy, and almost disappears in adulthood [2]. Therefore, one potential therapeutic strategy for spinal cord injury is to enhance the intrinsic regenerative capacity of spinal cord neurons [3-5].
Understanding the mechanisms that regulate spinal cord development may provide insight into spinal cord regeneration [6]. We have previously performed bulk spinal cord mRNA sequencing from the embryonic stage to adulthood to determine the temporal expression patterns of key genes in rat development [7]. Interestingly, we found that 100 transcription factors were highly expressed at embryonic day 11 during spinal cord development. Recent studies suggest that transcription factors are involved in axon regeneration, indicating that they may modulate neuronal functions.
The E-26 transformation-specific (Ets) family of transcription factors is defined by an approximately 85-amino-acid DNA-binding domain that is evolutionarily conserved throughout the Metazoa [8]. These factors are primarily involved in crucial biological processes, including cell proliferation, cell migration, development, cell differentiation, angiogenesis, and the cell cycle [9-12]. Ets1, a member of this family, plays a critical role in many normal physiological processes, such as promoting embryonic vasculogenesis and angiogenesis in zebrafish [13]. In the immune system, Ets1 suppresses the differentiation of type 2 T follicular helper cells, thereby halting the onset of systemic lupus erythematosus [14]. Ets1 is also a crucial regulator of human natural killer cell development and terminal differentiation [15]. However, whether Ets1 is expressed in the nervous system, and what functions it has in the development and regeneration of the nervous system, remain elusive and require further investigation. Our previous RNA sequencing data indicated that Ets1 mRNA is highly expressed in the rat spinal cord [7].
The primary aim of the current study was to investigate the temporal expression and biological functions of Ets1 in detail. Changes in Ets1 expression in the rat spinal cord during development were evident, and the results indicate that Ets1 plays a suppressive role in axon regeneration via its regulation of Lcn2. This study highlights the crucial role of Ets1 in axon regeneration. Knockdown of Ets1 may be a novel molecular therapy for axon regeneration in the central nervous system.
Bioinformatic Analysis
Previously archived RNA sequencing data of the spinal cord at different developmental stages (E11d, E14d, E18d, P1d, P1w, P8w) were downloaded from the NCBI database (PRJNA505253) and screened for essential transcription factors involved in central nervous system development. The Ingenuity Pathway Analysis (IPA) online software (https://digitalinsights.qiagen.com/, Ingenuity Systems, USA) was used to construct a regulatory network of transcription factors.
Quantitative Real-Time PCR
Total RNA was isolated from the spinal cords and neuronal cells using an RNA-Quick Purification Kit (Yishan Biotechnology Co., China), then reverse transcribed using the PrimeScript RT Reagent Kit with gDNA Eraser (TaKaRa, China). Quantitative real-time PCR (qRT-PCR) was conducted using FastStart Essential DNA Green Master (Roche, USA) on a StepOne real-time PCR system (Applied Biosystems, USA). Specific primers were synthesized by GenScript Biotech (Nanjing, China) and are listed in Table 1. Gene quantification was performed using the comparative 2^(-ΔΔCt) method, with glyceraldehyde-3-phosphate dehydrogenase (Gapdh) as the internal control.
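For reference, the comparative 2^(-ΔΔCt) (Livak) calculation reduces to a few lines; the Ct values below are illustrative, not measured data.

```python
def ddct_fold_change(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Relative expression by the 2^(-ΔΔCt) method, with a reference
    gene such as Gapdh as the internal control."""
    d_ct_treated = ct_target_treated - ct_ref_treated   # ΔCt, treated sample
    d_ct_control = ct_target_control - ct_ref_control   # ΔCt, control sample
    dd_ct = d_ct_treated - d_ct_control                 # ΔΔCt
    return 2 ** (-dd_ct)

# Example: the target Ct drops from 26.0 to 24.0 while Gapdh stays at 18.0,
# giving a 4-fold increase in relative expression.
print(ddct_fold_change(24.0, 18.0, 26.0, 18.0))  # -> 4.0
```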
Microfluidic Chamber
Neurons were seeded onto the somatic side of a poly-L-lysine-coated microfluidic chamber (SND 150, Xona Microfluidics, Temecula, USA) at a density of 5 × 10^4 cells per cm^2 and incubated for 4 h. After cell attachment, 200 nM Ets1 siRNA or control siRNA was added to the somatic side of the microfluidic chamber. After 4 days of culture, axons entering the axonal side were injured using 0.08 MPa vacuum suction (GL-802A, Kylin-Bell, China) applied three times for 30 s each. Injured axons were allowed to grow for 24 h, then Tuj1 immunofluorescence staining was performed and images were acquired with a fluorescence microscope.
RNA Sequencing
Transcriptome sequencing of neuronal cells transfected with control siRNA or Ets1 siRNA was performed on an Illumina HiSeq 2500 by Novogene Biotechnology Co. (Beijing, China). Gene expression levels were determined using the fragments per kilobase of transcript per million mapped reads (FPKM) method. Genes with fold changes > 2 or < −2 and a false discovery rate < 0.05 relative to the corresponding controls were deemed differentially expressed. Differentially expressed genes were categorized via Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) analysis using OmicShare bioinformatic tools (https://www.omicshare.com/tools/, China). Sequencing data were deposited in the database under accession number PRJNA938264.
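The thresholding described above can be expressed directly. The sketch assumes a hypothetical results table of per-gene log2 fold changes and BH-adjusted p values, the typical output of common RNA-seq differential-expression pipelines.

```python
import pandas as pd

deg = pd.read_csv("ets1_sirna_vs_control.csv")  # columns: gene, log2fc, fdr

# A fold change > 2 or < -2 corresponds to |log2FC| > 1 on the log2 scale.
up = deg[(deg["log2fc"] > 1) & (deg["fdr"] < 0.05)]
down = deg[(deg["log2fc"] < -1) & (deg["fdr"] < 0.05)]
print(len(up), "upregulated;", len(down), "downregulated")
```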
Prediction of Binding Site
To identify potential binding sites of the Ets1 transcription factor, the promoter region approximately 2000 bp upstream of the transcription start site of the Lcn2 gene was analyzed using the JASPAR online database (https://jaspar.genereg.net/).
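A scan of this kind slides a position weight matrix over the promoter and scores each window. The sketch below is a toy version: the weights and score threshold are invented rather than taken from the actual Ets1 JASPAR profile, although the toy consensus GGAA matches the Ets-family core motif, and the example sequence reuses the Lcn2 promoter forward primer given in the ChIP section below.

```python
import numpy as np

# Invented log-odds scores per base (rows A, C, G, T) at each of 4 positions;
# the highest-scoring word is "GGAA".
pwm = np.array([[-2.0, -2.0,  1.2,  1.2],   # A
                [-2.0, -2.0, -2.0, -2.0],   # C
                [ 1.1,  1.1, -2.0, -2.0],   # G
                [-2.0, -2.0, -2.0, -2.0]])  # T

base_index = {"A": 0, "C": 1, "G": 2, "T": 3}

def scan(sequence, pwm, threshold=3.0):
    """Return (position, word, score) for every window scoring above threshold."""
    width = pwm.shape[1]
    hits = []
    for start in range(len(sequence) - width + 1):
        window = sequence[start:start + width]
        score = sum(pwm[base_index[b], j] for j, b in enumerate(window))
        if score >= threshold:
            hits.append((start, window, score))
    return hits

promoter = "GAGCTACAAGGGGCTGGAA"   # forward ChIP primer, standing in for the promoter
print(scan(promoter, pwm))        # one GGAA hit near the 3' end
```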
Luciferase Reporter Assay
The coding sequence of Ets1 was inserted into the GV141 vector, and the promoter sequence 2000 bp upstream of the Lcn2 transcription start site was inserted into the GV238 vector (GeneChem, Shanghai, China). Forty-eight hours after transfection of HEK-293T cells, luciferase activity was detected using the Dual-Luciferase Reporter Assay System (Promega) on a BioTek Synergy 2 reader. Renilla luciferase activity was used as the internal control.
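The normalization implied by this assay divides the firefly signal by the Renilla signal in each well and then expresses the ratio relative to the control. The readings below are illustrative numbers only.

```python
import numpy as np

firefly_ctrl = np.array([12000., 11500., 12800.])   # empty-vector wells
renilla_ctrl = np.array([ 9000.,  8800.,  9400.])
firefly_ets1 = np.array([30500., 28800., 33100.])   # Ets1-overexpression wells
renilla_ets1 = np.array([ 9100.,  8700.,  9600.])

ratio_ctrl = firefly_ctrl / renilla_ctrl   # corrects for transfection efficiency
ratio_ets1 = firefly_ets1 / renilla_ets1

rel = ratio_ets1 / ratio_ctrl.mean()       # fold activation of the Lcn2 reporter
print(rel.mean(), rel.std(ddof=1))
```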
Chromatin Immunoprecipitation Assay
Chromatin immunoprecipitation (ChIP) assays were performed using a SimpleChIP® Plus Enzymatic Chromatin IP Kit (Cell Signaling Technology) according to the manufacturer's instructions. Briefly, B35 cells were lysed, and chromatin immunoprecipitation was performed using an anti-Ets1 polyclonal antibody (14069, Cell Signaling Technology). Eluted DNA fragments were analyzed via PCR and quantitative PCR. The primers for the Lcn2 promoter (5′-3′) were as follows: GAG CTA CAA GGG GCT GGA A (forward) and TCC CTG GAT GAT GAA AGA ACA (reverse).
Statistical Analysis
Numerical data are presented as means ± standard error of the mean. Student's t-test was used for comparisons between two groups, and comparisons among multiple groups were performed using one-way analysis of variance (ANOVA) followed by Dunnett's multiple comparisons test. p < 0.05 was considered statistically significant.
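A minimal sketch of this analysis plan in SciPy, with illustrative data (scipy.stats.dunnett requires SciPy 1.11 or later):

```python
import numpy as np
from scipy import stats

# Two-group comparison: Student's t-test.
control = np.array([100., 104., 97., 101.])
treated = np.array([152., 160., 148., 155.])
print(stats.ttest_ind(control, treated))

# Multi-group comparison: one-way ANOVA, then Dunnett's test vs. control.
groups = [np.array([100., 104., 97.]),    # control
          np.array([131., 128., 136.]),   # siRNA-1
          np.array([155., 149., 160.])]   # siRNA-2
print(stats.f_oneway(*groups))
print(stats.dunnett(groups[1], groups[2], control=groups[0]))
```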
Ets1 Is Highly Expressed During Spinal Cord Development in Rats
A total of 100 transcription factors showed high expression in the spinal cord at E11d (Fig. 1A). GO analysis revealed enrichment of transcription factors involved in neuron fate commitment, spinal cord motor neuronal cell fate specification, and spinal cord-associated neuron differentiation (Fig. 1B). These findings suggest that these transcription factors may play a crucial role in spinal cord development. For further investigation, we selected the hub gene Ets1, which was predicted by Ingenuity Pathway Analysis (Fig. 1C). RNA sequencing analysis confirmed that Ets1 had high expression levels at E11d, which decreased during spinal cord development (Fig. 1D). qRT-PCR also indicated a similar expression pattern (Fig. 1E). Immunofluorescence results demonstrated that Ets1 was primarily observed in spinal cord neural stem cells identified by Nestin during the embryonic period and in neurons identified by NeuN after birth (Fig. 2).
Ets1 Inhibits Axonal Growth of Spinal Cord Neurons
Immunofluorescence indicated that Ets1 was predominantly localized in the nucleus of neurons (Fig. 3A). In qRT-PCR analysis of cultured spinal cord neurons transfected with three Ets1 siRNAs or control siRNA, Ets1 expression was significantly decreased in the Ets1 siRNA groups compared to the control group (Fig. 3B). Neurite outgrowth assays showed that transfection with Ets1 siRNA markedly promoted axonal growth of neurons (Fig. 3C). The mean length of axons in the Ets1 siRNA group was 56% greater than that in the control group (n = 300, p < 0.01). The axonal growth phenotype after Ets1 siRNA treatment was then analyzed. Neurons were divided into three categories based on axonal length [16,17]: 20-50 µm, 50-100 µm, and > 100 µm. Compared to the control group, the Ets1 siRNA group exhibited higher proportions of neurons in the > 100 µm and 50-100 µm categories (Fig. 3D). In experiments with injured axons, the regenerated lengths of injured axons increased approximately threefold after 24 h of culture with Ets1 siRNA treatment (Fig. 3E). Taken together, these results indicate that Ets1 suppresses the axonal growth of spinal cord neurons.
Ets1 Modulates Neuronal Cell Metabolism
To decode the molecular changes induced by Ets1 knockdown, neuronal cells transfected with control siRNA or Ets1 siRNA were subjected to RNA sequencing. A total of 103 differentially expressed genes were identified between the control group and the Ets1 siRNA group, with 46 genes upregulated and 57 genes downregulated (Fig. 4A). The expression levels of the differentially expressed genes are shown in the heatmap (Fig. 4B). GO analysis indicated that the differentially expressed genes were enriched in various processes, including regulation of steroid metabolic processes (GO:0019218), positive regulation of coenzyme metabolic processes (GO:0051197), arginine metabolic processes (GO:0006525), negative regulation of coenzyme metabolic processes (GO:0051198), glutamate metabolic processes (GO:0006536), and regulation of cholesterol metabolic processes (GO:0090181) (Fig. 4C). KEGG pathway analysis indicated that the differentially expressed genes were enriched in insulin resistance, the JAK-STAT signaling pathway, 2-oxocarboxylic acid metabolism, and arginine biosynthesis (Fig. 4D). We selected high-fold-change genes (Lcn2, Apob, Psd4, and Nags) from the 103 differentially expressed genes. In qRT-PCR experiments, the mRNA levels of Lcn2, Apob, Psd4, and Nags were significantly decreased after Ets1 knockdown compared to the control group, consistent with the expression trends determined by RNA sequencing (Fig. 4E).
Ets1 Targets Lcn2 and Inhibits Axonal Growth of Spinal Cord Neurons
JASPAR analysis performed alongside the RNA sequencing data predicted the presence of Ets1 binding sites in the promoter of the Lcn2 gene (Fig. 5A). qRT-PCR analysis indicated that Lcn2 mRNA levels decreased dramatically after Ets1 knockdown in neuronal cells (Fig. 4E).
Luciferase assays revealed that the luciferase activity of the Lcn2 reporter was significantly upregulated by Ets1 overexpression, indicating that the Ets1 binding sequences serve as positive regulatory elements for Lcn2 transcription (Fig. 5B). ChIP assays confirmed the direct association of Ets1 with the Lcn2 promoter (Fig. 5C). To investigate whether reduced Lcn2 expression due to Ets1 knockdown influenced axon growth in neuronal cells, Lcn2 siRNA was transfected into neuronal cells. qRT-PCR results showed reduced Lcn2 mRNA expression in neuronal cells transfected with Lcn2 siRNA (Fig. 5D-a). Knockdown of Lcn2 promoted the growth of neuronal axons (Fig. 5D-b, 5D-c), with a significant increase observed in the proportion of axons longer than 100 µm (Fig. 5D-d). Overall, these findings indicate that downregulation of Ets1 in neuronal cells negatively affects Lcn2 expression, thereby promoting axonal growth in neurons.
Discussion
Transcription factors play a crucial role in spinal cord injury. Recent studies have suggested that transcription factors such as ATF3 [18,19], Sox11 [20], STAT3 [21], and Smad1 [22-24] are involved in axon regeneration, indicating their potential importance in neuronal functions. The current study aimed to investigate the effects of the transcription factor Ets1, which shows differential expression during spinal cord development, on the axonal growth of neuronal cells. Ets1 has been associated with various conditions, including hepatocellular carcinoma [25,26], healthy aging [27], congenital heart defects [28], and arthritis [29]. Our bioinformatic analyses of transcription factors during spinal cord development identified Ets1 as a critical upstream regulatory gene, indicating its potential influence on neuronal cells. In the current study, we analyzed the expression profiles of Ets1 in the spinal cord during rat development and found that Ets1 expression was significantly downregulated during spinal cord development. To investigate the role of Ets1, we comprehensively examined its influence on spinal cord neurons by transfecting primary cultured neuronal cells with Ets1 siRNA. We identified an inhibitory effect of Ets1 on axonal growth in spinal cord neurons. Future studies may explore the application of an Ets1 antagomir as a potential treatment for central nervous system injury.
Mechanistic studies revealed that Ets1 knockdown reduced the expression levels of Lcn2, suggesting that Lcn2 is a downstream target of Ets1. Lcn2 is involved in multiple cellular processes, including cellular uptake of iron, apoptosis, suppression of invasiveness and metastasis, and glial activation. It also plays a crucial role in brain injury and recovery after ischemic and hemorrhagic stroke [30,31]. Another study has shown that Lcn2 may exert damaging effects after cerebral ischemia by inducing classical activation of astrocytes [32]. Reactive astrocytes are reported to use NOX signaling to stimulate Lcn2 expression and secretion, and blocking astrocytic NHE1 activity promotes the reduction of Lcn2-mediated neurotoxicity after stroke [33]. Recent studies have investigated the effects of Lcn2 on Alzheimer's disease [34,35], spinal cord injury [36], perioperative neurocognitive disorders [37], and diabetic retinopathy [38]. In the current study, luciferase assays and ChIP assays revealed that Ets1 bound to the Lcn2 promoter and regulated its activity. Knockdown of Lcn2 promoted axonal growth in spinal cord neurons and had the same effects as Ets1 knockdown on axonal growth in neuronal cells. These findings of the present study suggest that Ets1 regulates axonal growth in spinal cord neurons through Lcn2.

Interestingly, although Ets1 expression is low in the adult spinal cord, the axonal growth capacity is also low; the expression of a gene in adulthood does not necessarily correspond to axonal growth capacity. For example, deletion of Krüppel-like factor-4 (Klf4) has been reported to promote axonal regeneration in retinal ganglion cells, despite low Klf4 expression in adulthood [39]. This is similar to the situation with Ets1 in the current study. On the other hand, inhibition of Gas5 expression promoted axonal growth in dorsal root ganglion (DRG) neurons, and Gas5 expression was high in adult dorsal root ganglia [40]. The role of Ets1 was only demonstrated in vitro in the present study, and its role in vivo still needs to be confirmed. Furthermore, the ability of Ets1 to suppress axon elongation could be further investigated to gain a more comprehensive understanding of its functional roles.
Fig. 1
Fig. 1 Expression profile of Ets1 in rat spinal cord development. A Hierarchical clustering of transcription factor expression profiles in rat spinal cord development at different time points. B Top terms yielded by Gene Ontology analysis of transcription factors. C Ingenuity Pathway Analysis of Ets1 as a hub gene in rat spinal cord development. D Fragments per kilobase of exon model per million mapped reads (FPKM) Ets1 expression trends derived from RNA sequencing during rat spinal cord development at different time points. E Relative Ets1 mRNA expression during rat spinal cord development at different time points normalized to E11d. n = 3 independent experiments. **p < 0.01 vs. E11d control
Fig. 2
Fig. 2 Expression of Ets1 in rat spinal cord. A Ets1 and Nestin double staining in rat spinal cord at E11d, E14d, and E18d. Red indicates Ets1, green indicates Nestin, and blue indicates DAPI. Boxed areas within the panels on the left are displayed at higher magnification in the boxes on the right. Scale bars = 25 µm. B Ets1 and NeuN double staining in rat spinal cord at P1d, P1w, and P8w. Red indicates Ets1, green indicates NeuN, and blue indicates DAPI. Scale bars = 25 µm | 2023-09-07T06:17:11.751Z | 2023-09-06T00:00:00.000 | {
"year": 2023,
"sha1": "335f374e7323faa33b5714ee25189ff02176dcd3",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s12035-023-03616-0.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "4ab8206c7404a09e839e6f98e1552bab7d03e4da",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
201616936 | pes2o/s2orc | v3-fos-license | Early Left Ventricular Involvement Detected by Cardiovascular Magnetic Resonance Feature Tracking in Arrhythmogenic Right Ventricular Cardiomyopathy: The Effects of Left Ventricular Late Gadolinium Enhancement and Right Ventricular Dysfunction
Background Left ventricular (LV) involvement is common in arrhythmogenic right ventricular cardiomyopathy (ARVC). We aim to evaluate LV involvement in ARVC patients by cardiovascular magnetic resonance feature tracking. Methods and Results Sixty‐eight patients with ARVC and 30 controls were prospectively enrolled. ARVC patients were divided into 2 subgroups: the preserved LV ejection fraction (LVEF) group (LVEF ≥55%, n=27) and the reduced LVEF group (LVEF <55%, n=41). Cardiovascular magnetic resonance with late gadolinium enhancement (LGE) and cardiovascular magnetic resonance feature tracking were performed in all subjects. LV global and regional (basal, mid, apical) peak strain (PS) in radial, circumferential and longitudinal directions were assessed, respectively. Right ventricular global PS in three directions were also analyzed. Compared with the controls, LV global and regional PS were all significantly impaired in the reduced LVEF group (all P<0.05). However, only LV global longitudinal PS as well as mid and apical longitudinal PS were impaired in the preserved LVEF group (all P<0.05), and all these parameters were significantly associated with right ventricular global radial PS (r=−0.47, −0.47, and −0.49, respectively, all P<0.001). The reduced LVEF group showed significantly higher prevalence of LGE (95.10% versus 63.00%, P=0.002) than the preserved LVEF group. Moreover, LV radial PS was significantly reduced in LV segments with LGE (33.15±20.42%, n=46) than those without LGE (41.25±15.98%, n=386) in the preserved LVEF group (P=0.016). Conclusions In patients with ARVC, cardiovascular magnetic resonance feature tracking could detect early LV dysfunction, which was associated with LV myocardial LGE and right ventricular dysfunction.
Arrhythmogenic right ventricular cardiomyopathy (ARVC) is an inherited cardiomyopathy characterized by progressive fibro-fatty myocardial replacement, ventricular tachycardia, and ventricular dysfunction, preferentially affecting the right ventricle (RV). 1 However, left ventricular (LV) involvement has been increasingly recognized across a broad spectrum of ARVC severity. 2 Detection of LV involvement is of clinical importance because the presence of biventricular dysfunction was reported to be a stronger predictor of adverse outcomes including sudden cardiac death and heart failure than isolated RV disease. 3 Cardiac magnetic resonance (CMR) plays a prominent role in the diagnosis of ARVC because of its unique capability in the detection of fibro-fatty tissue by late gadolinium enhancement (LGE) and high reproducibility in the assessment of biventricular morphology and function. 4 Left ventricular ejection fraction (LVEF) is a widely used parameter in clinical practice although it has been proven to be an insensitive marker of regional systolic dysfunction. 5 Currently, LV strain analysis by CMR feature-tracking (CMR-FT) has been introduced for the quantitative evaluation of global and regional myocardial contraction. 6 Recent studies suggest a potential role of strain analysis in early detection and objective quantification of contractile abnormalities in a variety of cardiovascular diseases. [7][8][9][10] Therefore, we aimed to investigate the diagnostic value of this novel technique for detection of LV involvement in ARVC patients with or without preserved LVEF, and to evaluate their relationships with LV myocardial LGE and RV dysfunction.
Methods
The data, analytic methods, and study materials will be made available to other researchers upon reasonable request.
Study Population
This study was approved by the hospital institutional review board, and informed consent was obtained from all subjects. From January 2016 to July 2017, 75 consecutive ARVC patients were prospectively enrolled in this study. All patients underwent systematic clinical evaluation and CMR examinations. According to the revised Task Force Criteria, the diagnosis of ARVC was made when 2 major, or 1 major plus 2 minor, or 4 minor criteria from different categories were present. 11 Seven patients were excluded from further analyses because of inadequate image quality. Finally, a total of 68 patients with ARVC were included. Thirty age- and sex-matched healthy subjects were recruited as controls.
CMR Imaging Protocol
CMR imaging was performed on a 3-T scanner (Discovery MR750, GE Healthcare, Milwaukee, USA) with a phased-array cardiovascular coil, using electrocardiographic and respiratory gating. The protocol mainly consisted of: (1) cine imaging; (2) fat imaging; (3) first-pass perfusion imaging; and (4) LGE imaging. The average acquisition time was ≈40 minutes. All images were acquired with breath holding. Cine images were acquired in 3 long-axis views (LV 2-chamber, 4-chamber, and LV outflow tract) and a series of short-axis planes covering the entire LV using a balanced steady-state free precession (b-SSFP) sequence. Typical imaging parameters were: field of view=320×320 mm, matrix=192×224, repetition time (TR)=3.3 ms, echo time (TE)=1.7 ms, flip angle=50°, temporal resolution=46 to 60 ms, slice thickness=8 mm, slice gap=2 mm. Fat-suppressed and non-fat-suppressed fast spin-echo sequences were applied in the LV short-axis views with double-inversion recovery blood suppression pulses. Typical imaging parameters were: field of view=320×320 mm, matrix=192×224, TR=1 to 2 R-R intervals, TE=10 ms, slice thickness=8 mm, slice gap=2 mm. The LGE images were obtained 10 to 15 minutes after intravenous administration of gadolinium-DTPA (Magnevist, Bayer, Berlin, Germany) at a dose of 0.2 mmol/kg, using a segmented phase-sensitive inversion recovery Turbo Fast Low Angle Shot sequence at the same position as the cine images in end diastole. Typical imaging parameters were: field of view=380×320 mm, matrix=256×162, TR=8.6 ms, TE=3.36 ms, flip angle=25°, slice thickness=8 mm, slice gap=2 mm, nominal TI=300 to 350 ms.
CMR Analysis
All CMR images were analyzed using CVI42 (version 5.0, Circle Cardiovascular Imaging Inc., Calgary, Canada) by 2 radiologists with 8- and 10-year experience of CMR imaging, respectively. The endocardial and epicardial contours of LV myocardium were manually traced at end-diastole and end-systole on short-axis b-SSFP cine images. Papillary muscles were excluded from volumes. Cardiac volumetric and functional parameters, including left/right ventricular end-diastolic volume, left/right ventricular end-systolic volume, and left/right ventricular ejection fraction (LVEF/RVEF), were automatically generated.
Clinical Perspective
What Is New?
• This is the first study to apply cardiovascular magnetic resonance feature tracking (CMR-FT) to assess left ventricular (LV) dysfunction and its association with late gadolinium enhancement in arrhythmogenic right ventricular cardiomyopathy patients with preserved LV ejection fraction (LVEF).
• Impaired LV global and regional peak strain could be detected by CMR-FT in arrhythmogenic right ventricular cardiomyopathy patients with preserved LVEF.
• LV dysfunction detected by CMR-FT was associated with LV myocardial late gadolinium enhancement and right ventricular deformation mechanics.
What Are the Clinical Implications?
• Findings of this study indicate that CMR-FT provides a novel method for the evaluation of global and regional myocardial contraction and could be more sensitive than LVEF in detecting early LV dysfunction in arrhythmogenic right ventricular cardiomyopathy patients.
• LV global and regional dysfunction detected by CMR-FT was associated with late gadolinium enhancement and RV dysfunction. Since the presence of biventricular dysfunction is a stronger predictor of adverse outcomes than isolated right ventricular disease, the early detection of LV involvement by CMR-FT could be of clinical significance in risk stratification of arrhythmogenic right ventricular cardiomyopathy patients with preserved LVEF.

All the volumetric parameters were indexed to body surface area. The presence of LGE in LV myocardium was visually analyzed using the American Heart Association (AHA) 16-segment model by consensus reading of 2 independent observers. The regions of LGE were fine-tuned by the operator to reduce false-positive readings when necessary.
Myocardial Strain Analysis by CMR-FT
The CMR-FT analysis was performed on the acquired b-SSFP cine images using CVI42 (version 5.0). For the right ventricle (RV), the endo- and epicardial contours of the RV free wall in 4-chamber and short-axis cine images were manually drawn at end-diastole with subsequent automatic tracking throughout the cardiac cycle. The RV global peak strain (PS) in the radial, circumferential, and longitudinal directions were analyzed, respectively. The basis of the LV strain algorithms has been previously described and their validity has been demonstrated. 12,13 Briefly, a set of cine images in short-axis and three long-axis views (2-chamber, 4-chamber, and LV outflow tract) were loaded into the feature tracking module. All endocardial and epicardial borders of the LV at end-diastole were manually delineated (Figure 1) with subsequent automatic tracking throughout the cardiac cycle. The quality of automatic tracking was checked and manually adjusted if needed. After defining the RV insertion points within the LV in short-axis images, the LV global and regional (basal, mid, and apical) PS in radial, circumferential, and longitudinal modes were automatically derived by the software. The LV segmental strain parameters were also provided according to the American Heart Association 16-segment model. The algorithm for LV regional PS was based on the average PS values of the corresponding segments. Lengthening or thickening of LV myocardium was defined as a positive value (radial strain), while shortening or thinning was defined as a negative value (circumferential and longitudinal strain). 14
Reproducibility Analysis
For the assessment of the inter-observer and intra-observer reproducibility, a randomly selected set of 20 ARVC patients were assessed by two experienced investigators. The same subjects were assessed by each investigator independently for inter-observer analysis. To determine the intra-observer variability, the measurement was repeated by one of the investigators 1 month later.
Statistical Analysis
Continuous variables were given as mean±SD or median values with interquartile range depending on the normality of the variables. Categorical variables were presented as percentages. Comparisons for continuous data were performed using the Student t-test or one-way ANOVA. Categorical variables were compared using the chi-square test or Fisher exact test. Correlations were assessed by the Pearson or Spearman rank correlation coefficient. Receiver operating characteristic analysis was used to define the optimal cut-off values. The intraclass correlation coefficient was used to assess the inter- and intra-observer variability. All statistical analyses were conducted using the statistical software package SPSS, version 24.0 (IBM, SPSS Statistics). A 2-tailed P<0.05 was considered statistically significant.
Baseline Characteristics
There were 68 ARVC patients (mean age 39.28±13.88 years, 45 men) and 30 healthy controls (mean age 40.20±12.42 years, 17 men) in this study. The ARVC patients were further divided into 2 subgroups: the preserved LVEF group (LVEF ≥55%, n=27) and the reduced LVEF group (LVEF <55%, n=41). The baseline characteristics of the study population are presented in Table 1. No significant differences were observed in terms of baseline characteristics among the 3 groups.
Conventional CMR Parameters
Compared with the controls, right ventricular end-diastolic volume index and right ventricular end-systolic volume index were significantly higher whereas RVEF was remarkably lower in ARVC patients (all P<0.05) (Table 1). The reduced LVEF group showed significantly lower LVEF and larger left ventricular end-systolic volume index (all P<0.05) than the controls, while no significant differences were observed in the preserved LVEF group. In contrast, the preserved LVEF group had the lowest left ventricular end-diastolic volume index compared with ARVC patients with LVEF <55% and the controls (P=0.002). LGE was present in 46 (67%) of the 68 ARVC patients, and the reduced LVEF group showed a higher prevalence of LGE (95.10% versus 63.00%, P=0.002) than the preserved LVEF group. The mean number of LGE-positive segments per patient was 4.44±2.53, and there was no significant difference between these 2 subgroups (4.78±2.79 versus 3.57±1.45, P=0.053) (Table 1). Besides, LGE was most frequently detected in the LV mid anteroseptal segments (n=21; 31%) (Figure 2).
Global and Regional Strain Analysis by CMR-FT
ARVC patients showed significantly reduced RV global longitudinal, circumferential, and radial PS compared with the controls (all P<0.05). Compared with the controls, the LV global and regional PS were all significantly impaired in the reduced LVEF group (all P<0.05). However, only LV global longitudinal PS as well as mid and apical longitudinal PS were impaired in the preserved LVEF group (all P<0.05) (Table 2). Among LV strain parameters, the receiver operating characteristic curve analysis demonstrated that LV global longitudinal PS (cut-off value: −13.60%, sensitivity: 73.53%, specificity: 80%, area under curve: 0.822), and longitudinal PS at the apical (cut-off value: −15.22%, sensitivity: 80.88%, specificity: 85%, area under curve: 0.836) and mid-level (cut-off value: −16.67%, sensitivity: 67.65%, specificity: 95%, area under curve: 0.820) were all good discriminators between ARVC patients and controls (Figure 3).
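The paper does not state how the cut-off values were chosen; a common approach is the Youden index on the ROC curve, sketched below with illustrative strain values (not the study data).

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical LV global longitudinal peak strain (%) for controls (0)
# and ARVC patients (1); purely illustrative numbers.
gls = np.array([-19.1, -18.4, -17.9, -20.2, -12.8, -11.5, -13.9, -10.7])
label = np.array([0, 0, 0, 0, 1, 1, 1, 1])

# Less-negative (impaired) strain indicates disease, so the raw strain
# value itself can serve as the classification score.
fpr, tpr, thresholds = roc_curve(label, gls)
auc = roc_auc_score(label, gls)

# Optimal cutoff by the Youden index J = sensitivity + specificity - 1.
j = tpr - fpr
best = np.argmax(j)
print(f"AUC={auc:.3f}, cutoff={thresholds[best]:.2f}%, "
      f"sensitivity={tpr[best]:.2f}, specificity={1 - fpr[best]:.2f}")
```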
Intra-Observer and Inter-Observer Variability
The intraclass correlation coefficients of the global strain parameters for both ventricles and the LV regional PS are summarized in Table 4. All the strain parameters showed good to excellent inter-observer (0.851-0.915) and intra-observer (0.876-0.947) agreement.
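As a sketch of how such agreement statistics can be computed, the following implements a single-measure consistency ICC, ICC(3,1); the specific ICC model used in the paper is not stated, and the ratings below are illustrative.

```python
import numpy as np

def icc_3_1(ratings: np.ndarray) -> float:
    """Two-way mixed, single-measure, consistency ICC(3,1) (Shrout & Fleiss).

    ratings: n_subjects x k_raters matrix.
    """
    n, k = ratings.shape
    grand = ratings.mean()
    ss_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum()   # subjects
    ss_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum()   # raters
    ss_err = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# Toy inter-observer example: 5 patients, RV global longitudinal PS (%)
# measured by two observers (illustrative numbers only).
obs = np.array([[-14.2, -13.8], [-10.5, -11.0], [-17.9, -17.2],
                [-12.1, -12.6], [-15.4, -15.1]])
print(f"ICC(3,1) = {icc_3_1(obs):.3f}")
```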
Discussion
The main findings of this study were as follows: (1) CMR-FT could detect impaired LV global and regional longitudinal PS in ARVC patients with preserved LVEF; (2) LV radial PS was reduced in segments with LGE compared with those without LGE in ARVC patients with preserved LVEF; (3) Impaired LV strain was associated with RV deformation mechanics in ARVC patients.
Myocardial Strain Derived by CMR-FT
Myocardial deformation is a sensitive marker of myocardial dysfunction in a broad range of cardiovascular diseases. 9,10,15,16 CMR-FT is a rapidly emerging approach for quantification of global and regional myocardial deformation. Several studies have validated its accuracy against CMR tagging or echocardiographic speckle tracking. 17,18 The major advantages of CMR-FT are that it can be applied to routine cine CMR images and that the post-processing analysis is relatively easy. Preliminary studies have confirmed the feasibility of this technique for evaluation of LV myocardial strain. 19,20 All the strain parameters showed good to excellent inter-observer and intra-observer reproducibility, which was consistent with previous studies. 21,22
LV Strain of ARVC Patients With Preserved LVEF
Genotype/phenotype studies have demonstrated that ARVC, which was initially described as an isolated or predominant RV disease, may exhibit frequent LV involvement. 2,23 Regional dysfunction because of patchy fibro-fatty replacement may occur before the onset of global changes. Recently, Mast et al found that only 16% of ARVC patients had reduced LVEF, whereas 55% had reduced strain derived by echocardiographic speckle tracking. 24 Similarly, in our study, global and regional LV longitudinal strain were found to be significantly impaired in ARVC patients with preserved LVEF, implying that minor LV systolic dysfunction can be identified by CMR-FT before the onset of LVEF reduction. LV apical longitudinal strain outperformed LVEF as the best marker for early detection of LV involvement. These findings suggest that regional strain analysis is potentially more sensitive than global strain for detection of minor LV involvement. This is theoretically supported by previous reports that ARVC begins as a regional rather than a global disease. 2,25 In short, LV strain is more sensitive than LVEF in detecting early LV involvement and may complement the conventional parameters for comprehensive evaluation of ARVC.
Association Between LV Strain and LGE
In accordance with previous studies, 4,23 LGE in LV myocardium was frequently detected in ARVC patients, even in those with preserved LVEF. Unlike other cardiomyopathies (e.g., hypertrophic cardiomyopathy) in which LGE mainly represents myocardial fibrosis, 26 LGE-positive areas in ARVC could also be the consequence of fibro-fatty change. In addition, we found that the LV segments with LGE showed impaired radial, circumferential, and longitudinal strain in ARVC patients compared with those without LGE. However, only radial strain was significantly reduced in LGE-positive segments in the preserved LVEF group. This is conceivable because radial strain is appreciated to be more representative of the outer fibers of the LV myocardium. 5 Besides, LGE was frequently observed without visual wall motion abnormality on conventional echocardiographic or CMR assessment. 27 A possible explanation is that, in contrast to objective quantification by regional strain analysis, the diagnosis of visual wall motion abnormality is more subjective and dependent on personal expertise, with significant inter-observer variability. 15 Thus, mild regional abnormalities were likely to be overlooked.
Association of Biventricular Strain Parameters
RV-LV coupling has already been recognized in selected populations. 28,29 In this study, RV global radial strain was significantly associated with LV global and regional longitudinal PS, indicating that, in addition to LGE, impaired RV mechanics may also contribute to LV dysfunction in patients with ARVC. There were several limitations in this study. Firstly, the sample size of the current study cohort was relatively small; further validation of our results in studies containing larger numbers of patients may be warranted. Secondly, a direct comparison between CMR-FT and CMR tagging, which has been considered the reference standard of myocardial strain analysis, was not feasible because the tagging sequence was not included in the present study protocol. Thirdly, the lack of long-term follow-up data prevented a definite conclusion on the prognostic value of LV strain analysis in ARVC patients.
Conclusions
CMR-FT could detect impaired LV global and regional deformation in ARVC patients with preserved LVEF. The LV dysfunction could be associated with LV fibro-fatty replacement and RV dysfunction.
Acknowledgments
Dr Xiuyu Chen made contributions to conception and design of study, critical revision of the manuscript; Dr Lu Li drafted the manuscript, and was responsible for statistical analysis of the data; Dr Yanyan Song collected conventional CMR data; Dr Keshan Ji, Lin Chen and Gang Yin were in charge of postprocessing of CMR-FT analysis. Dr Huaibin Cheng collected and analyzed clinical data. Prof Minjie Lu made critical revisions of the manuscript; Prof Shihua Zhao did substantial contribution to conception and design and critical revision of the manuscript. All authors read and approved the final manuscript.
Sources of Funding
This study is supported by National Natural Science Foundation of China (grant no. 81701659 and grant no. 81620108015) and | 2019-08-24T13:05:10.483Z | 2019-08-23T00:00:00.000 | {
"year": 2019,
"sha1": "d77c89c9f0fefd7997a560cf4ca1eb72d31fcd11",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1161/jaha.119.012989",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e56fd6ef73d1611da810415d3f35092618c0da30",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
225453815 | pes2o/s2orc | v3-fos-license | Edu-Kit “Our Coffee” Development on Problem Based Learning Model for Vocational Agribusiness and Agrotechnology Programs on Material Separation Mixture
Student achievement in chemistry is relatively low because chemistry topics are not integrated with students' expertise programs. The lack of variation in teaching materials and learning models also reduces the motivation to study chemistry. One of the science-chemistry materials in vocational high schools is separation of mixtures, taught in the agribusiness and agrotechnology specialization programs. Many businesses such as cafés are being established in the community, requiring coffee-serving skills as well as knowledge about coffee that can be viewed from a chemistry perspective. The chemical changes involved in coffee cultivation and the processes coffee goes through before it can be consumed can be learned through interactive and communicative teaching materials. One way to present and integrate coffee topics in learning is to develop an Edu-Kit that contains problems about coffee integrated with the separation of mixtures topic, which is contained in the 2013 Curriculum. The problem-based learning model was chosen because it suits the characteristics of vocational learning, namely presenting real problems and solving them while learning the concepts needed. The Edu-Kit development uses the research and development (R&D) method of Borg and Gall. An e-book was chosen as the application for developing the Edu-Kit, which includes text, sound, pictures, and video. Validation data were collected from 2 validators and from readability tests by students. Data were analyzed using quantitative and qualitative techniques. The effectiveness of the Edu-Kit was tested using a quasi-experimental, pretest and posttest design. There were 62 samples determined by the saturation sampling technique. The sample was divided into two classes, namely the experimental and control classes. Edu-kit effectiveness was assessed from posttest scores, analyzed using the Mann-Whitney U test in SPSS. The results of the Edu-kit development obtained an average validation percentage of 91.90%, meeting the very feasible criterion. The results of the effectiveness test showed that the use of the Edu-kit improved students' understanding of mixed separation material. Keywords—Edu-Kit, Coffee, Problem Based Learning, Separation of mixture topic, Vocational High School
Introduction
Vocational High School (VHS) is an educational institution that aims to prepare students to work in a field, equipping them with science, technology, and art so that they can develop themselves independently or at a higher level [1] [21]. Subjects at VHS include normative, adaptive, and productive subjects [1]. One productive subject is the production and processing of plantation and herbal commodities, which requires basic knowledge of chemistry. Studying chemistry in vocational schools is related to nature, so it is not only the mastery of knowledge about facts, concepts, or principles, but also a means of applying knowledge in daily life [21]. One of the chemistry materials in VHS that is related to daily life is mixed separation. Although related to daily life, students still have difficulty understanding mixed separation. First, students' understanding of the basic principles of separation methods and of how the separation process occurs is still low [2]. For example, in the distillation technique, students only know the basics of distillation and drawings of distillation apparatus without understanding the processes and how the instruments are used. Second, learning in schools is still teacher-centered, and students are less actively involved in the concept discovery process [3]. Third, in the learning process, teachers prioritize the final product as the only aspect of assessment, without regard to other aspects such as attitudes and processes that are in accordance with the nature of science [4].
The material of mixed separation is related to the subject matter of the production of seasonal plantation crops such as cocoa, coffee, pepper, and candlenut, which are studied in vocational agribusiness and agrotechnology programs, from land preparation to post-harvest handling. However, the delivery of mixed separation material is poorly integrated with plantation crop production subjects [5]. This causes students to be less interested in chemistry materials because they think chemistry is not related to the agribusiness expertise program. The topic of cocoa or coffee in the subject of seasonal estate crop production is a topic that is familiar to students. However, they assume that the only content of coffee is caffeine, even though coffee has other ingredients that cause different tastes. Differences in regional origin, type of coffee, caffeine content, and how the coffee is brewed can produce different flavors and aromas [6]. This proves that chemistry is related to the agribusiness expertise program, especially in mixed separation material.
Presenting prerequisite material, such as the differences between elements, compounds, and mixtures, can make it easier for students to learn the separation of mixtures [7]. Providing apperception related to seasonal crop production material, such as the work of a barista, can increase student motivation, and experimental activities can provide direct learning experience in learning to separate mixtures. Learning experiences like this are still lacking in the teaching materials used at school. The majority of available teaching materials are still limited to descriptions of the material and drawings of the separation tools, without further explanation of the separation process [22]. In addition, teaching materials do not link chemical materials with seasonal plant production materials. This causes students to feel that chemistry is not important, because it is not related to productive agribusiness subjects [8]. Therefore, it is necessary to develop chemistry teaching materials that support and cover a subject in the agribusiness and agrotechnology expertise program. One form of supporting teaching material is the Coffee Edu-Kit.
An Edu-Kit (Education Kit) is an educational tool for facilitating certain topics in the students' environment that can be included in the school curriculum. The 2013 curriculum policy has elaborated students' abilities along the dimensions of life skills, collaboration, critical thinking, and creative thinking [9]. In reality, however, many curriculum implementations are out of context and are not oriented towards achieving understanding in chemistry, but are instead focused on achieving competency targets expressed in academic grades alone [10]. In addition, the 2013 curriculum is also related to the demands of the Industrial Revolution 4.0, a strategic and drastic change in production patterns in which people, technology/machines, and big data collaborate [11].
Therefore, if learning is based on the 2013 curriculum and the Industrial Revolution 4.0, teachers must make the learning process more interesting and enjoyable so that it improves life skills, collaboration, critical thinking, and creativity [23]. One way to do this is to use technology-based teaching materials, which aim to increase learning motivation, reduce boredom, and increase understanding [24]. One form of technology-based teaching material is the e-book [12]. An e-book is an application that can be used to develop teaching materials with a digital display equipped with text, images, and sound.
Besides the importance of teaching materials, the learning method used also needs to be considered. The learning methods used should be student-centered and constructivist. One such model is Problem Based Learning, consisting of five stages, namely problem orientation, organizing student learning, independent or group investigations, developing and presenting results, and evaluating the problem-solving process [13]. The Problem Based Learning (PBL) model is suitable for use in Agribusiness Vocational Schools, because the focus of PBL is to present real problems or simulations to students. Students are then asked to find a solution through a series of research and investigation activities (identifying problems, collecting data, using data) based on theories, concepts, and principles learned from various fields of science [14]. PBL links hard skills with critical thinking, collaboration, seeking information, obtaining and evaluating data, organizing and maintaining data, and interpreting and communicating findings [25]. Thus, students' cognitive abilities in reasoning and critical thinking can develop. Likewise, the affective development of students improves, with the ability to work in teams, empathy, and respect for the viewpoints of others [15].
Based on these problems, it is very important to develop coffee Edu-kit teaching materials on mixed separation material with a technology-based problem-based learning model. This research and development is expected to increase students' understanding of mixed separation material and to integrate it with expertise program material.
Development of Edu-Kit
The outline of the research and development (R&D) model in this study is explained according to Figure 1. Before the teaching materials were used for readability tests with students and trials in class, content validation was done first. The validators of the teaching materials were an experienced chemistry lecturer and teachers. The instrument used was a validation questionnaire with assessment components on the form and appearance of the teaching materials. Quantitative analysis of the validation questionnaire uses a Likert scale with values of 4, 3, 2, and 1. The assessment by the validator is transformed onto a scale of values adjusted to the eligibility criteria for teaching materials in Table 1, while qualitative data are obtained from the suggestions and comments of the validator. The percentage of eligibility for teaching materials can be calculated using the formula

P = (∑x / ∑xi) × 100%

where P is the percentage of the validator's answer scores, ∑x is the total score of the validator's answers, and ∑xi is the highest possible score. For validation of the competency test questions, the assessment is reviewed from 3 domains, namely the material, construction, and language domains.
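A minimal sketch of this scoring procedure is given below; the mapping of percentage ranges to eligibility labels is an assumed four-level scale standing in for Table 1, which is not reproduced here.

```python
# Sketch of the validation scoring; the percentage-to-label thresholds are
# assumptions (a typical four-level feasibility scale), not the paper's table.
def eligibility(scores, max_score=4):
    total = sum(scores)                 # sum of x: validator answer scores
    highest = max_score * len(scores)   # sum of xi: highest possible score
    p = total / highest * 100
    if p > 81:
        label = "very feasible"
    elif p > 61:
        label = "feasible"
    elif p > 41:
        label = "less feasible"
    else:
        label = "not feasible"
    return p, label

# Example: one validator rates 10 presentation criteria on a 1-4 Likert scale.
print(eligibility([4, 4, 3, 4, 4, 3, 4, 4, 4, 4]))  # -> (95.0, 'very feasible')
```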
The competency test questions were tried out first to determine their level of difficulty, discriminating power, validity, and reliability. The tryout was conducted on class XI students of the Vocational High School program.
Effectiveness of Edu-Kit
This study uses a quasi-experimental, posttest-only design involving two classes: the experimental class and the control class. The experimental class uses the developed teaching materials, while the control class uses the teaching materials available at school. The purpose of this design is to determine student achievement in the experimental class; test results are analyzed to determine differences in the average scores of students in the experimental class and the control class. The experimental research design is shown in Table 2. The effectiveness of the developed teaching materials is determined by comparing the average posttest scores and the percentage of students achieving the minimum graduation criterion between the experimental and control classes. The minimum graduation criterion is the lowest criterion for stating student achievement. To establish that the two classes showed significantly different learning outcomes, a t-test was performed using SPSS for Windows. The average posttest score is calculated as the sum of the students' scores divided by the number of students, and the percentage of achievement of the minimum passing criterion as the number of students reaching the criterion divided by the total number of students, multiplied by 100%.
Result and Discussion
Basic competence in mixed separation material includes prerequisite material (classification of matter), the definition and purpose of separation, qualitative and quantitative analysis, separation methods covering the basic principles, tools, and processes of separation, and the application of separation in agriculture and other fields. This mixed separation material is associated with productive subjects, namely the production and processing of plantation and herbal commodities. The results of developing the Edu-kit for mixed separation material with the problem-based learning model for vocational agribusiness and agrotechnology programs can be described as follows. Student orientation to problems: the first activity in this stage is for the teacher to explain the learning objectives to be achieved, convey the related logistical needs, propose a problem that must be solved by students, and motivate students to be directly involved in problem-solving activities of their choice. The result of the problem orientation phase is presented in Figure 2. Organizing students to learn: the teacher helps students define and organize learning tasks related to the problem presented. The result of the organizing phase is presented in Figure 3. Guiding individual and group investigations: the teacher encourages students to gather relevant information and conduct experiments in order to gain insight into solving the problem. The result of the investigation phase is presented in Figure 4.
Develop and present the work: the teacher helps the students plan and prepare appropriate products, for example reports, videos, or models, and helps students share assignments among the members of each group. The result of the presentation phase is presented in Figure 5. Analyze and evaluate the problem-solving process: the teacher helps students reflect on and evaluate their investigation in every process they use. The result of the evaluation phase is presented in Figure 6.
Validity of edu-kit development
Data obtained from the validators were both quantitative and qualitative, while data from students were quantitative, obtained from the readability test questionnaire.
Validation by validator: Quantitative data were obtained by calculating the percentage score of each aspect assessed by the validator and then describing it in accordance with the eligibility criteria for teaching materials. Aspects in the teaching materials evaluation questionnaire include the appropriateness of appearance and presentation and the appropriateness of the contents of the Edu-kit. Aspects of the learning plan assessment questionnaire include identity and competence, the development of learning materials and resources, scenarios of learning activities, and assessment.
Assessment of the appropriateness of appearance and presentation is related to the format and layout of the teaching materials. An attractive display of teaching materials can affect students' motivation to learn the material. There are 9-10 aspects for evaluating the appropriateness of the appearance and presentation of each learning activity. Quantitative data analysis of the appearance and presentation feasibility aspects of the Edu-kit is presented in Table 3. The validators' evaluation of the appearance and presentation aspects of the Edu-kit obtained an average percentage of 94.75%. Therefore, in terms of appearance and presentation, the developed Edu-kit is very feasible for use in learning.
The assessment of the appropriateness of content in the Edu-kit relates to the accuracy of the delivery of material at each stage of problem-based learning. There are 8-19 aspects for assessing the appropriateness of the contents of each learning activity. Quantitative data analysis of the content feasibility aspects of the Edu-kit is presented in Table 4. The validators' evaluation of the Edu-kit content aspect obtained an average percentage of 96.13%. Therefore, in terms of content, the developed Edu-kit is very feasible to use in learning.
The assessment of the learning plans contained in the Edu-kit developed with the problem-based learning model has 4 aspects. The identity and competence aspect has 4 assessment criteria. The development of materials and learning resources aspect has 7 assessment criteria. The learning activity scenarios aspect has 11 assessment criteria. The evaluation aspect has 3 assessment criteria. Quantitative data analysis of the learning plan is presented in Table 5. The validators' evaluation of the learning plan obtained an average percentage of 88.37%. Therefore, the learning plan developed for the Edu-kit is very feasible to use in learning.
Effectiveness of edu-kit development
The Edu-kit effectiveness test results were obtained by calculating the average score and the percentage of students who met the minimum completeness criterion in the two classes tested, and also from the assessment sheets on the attitude, cognitive, and psychomotor aspects during the learning process.
Test results of the students' pretest score analysis. The experimental class and the control class must have the same initial ability to be used as Edu-kit effectiveness test classes. Data from the analysis of the pretest scores of the two classes are presented in Table 6. Based on Table 6, the normality significance values of 0.114 and 0.200 are greater than 0.05, so it can be concluded that the pretest scores are normally distributed. The homogeneity significance value of 0.251 > 0.05 indicates that the pretest scores are homogeneous. The significance value of the test for equality of the two means, 0.951 > 0.05, indicates that there is no difference in initial ability between the two classes.
Test results of the students' posttest score analysis. The posttest on mixture separation material was held at the end of the learning process, at the fifth meeting. The posttest is a written test with 20 multiple-choice questions. Posttest scores indicate the learning outcomes of students in the experimental class and the control class. Data on the students' posttest results from both classes are presented in Table 7. Based on Table 7, the normality significance values of 0.000 and 0.002 are less than 0.05, so it can be concluded that the posttest scores are not normally distributed. The homogeneity significance value of 0.958 > 0.05 indicates that the posttest scores are homogeneous. For hypothesis testing, the non-parametric Mann-Whitney U test was used; the significance value of 0.048 < 0.05 indicates that there is a difference in learning outcomes between the two classes. The experimental class has a higher average posttest score than the control class, so the developed Edu-kit can be categorized as effective for use in learning activities.
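The following sketch reproduces this analysis pipeline with SciPy instead of SPSS; the score lists are illustrative, not the actual class data.

```python
from scipy import stats

# Illustrative posttest scores for the experimental and control classes.
exp_scores = [85, 80, 90, 75, 95, 80, 85, 90, 70, 85]
ctrl_scores = [75, 70, 80, 65, 85, 70, 75, 60, 80, 70]

# Normality (per class) and homogeneity of variance.
print(stats.shapiro(exp_scores).pvalue, stats.shapiro(ctrl_scores).pvalue)
print(stats.levene(exp_scores, ctrl_scores).pvalue)

# Non-normal posttest scores -> non-parametric Mann-Whitney U test.
u = stats.mannwhitneyu(exp_scores, ctrl_scores, alternative="two-sided")
print(f"U={u.statistic}, p={u.pvalue:.3f}")  # p < 0.05 -> classes differ
```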
The implementation of the learning process in the experimental class and the control class was observed during the learning process. The implementation in both classes was observed using observation sheets for the assessment of attitudes, knowledge, and skills. This assessment was carried out by the researcher by giving grades according to the assessment rubric that had been made. Data on the average scores for the implementation of the learning process in the experimental class and the control class are presented in Table 8. The observation results from the assessment of attitudes, knowledge, and skills show that the experimental class has higher scores than the control class. This is because the teaching material used in the experimental class is related to productive material and is arranged systematically following the stages of problem-based learning, which makes it easy for students to understand the learning material.
Conclusion
Based on the description above, it is concluded that:
• The development of the mixed separation Edu-kit with a problem-based learning model for Vocational High Schools is appropriate for use in learning.
• The development of the mixed separation Edu-kit with a problem-based learning model for Vocational High Schools is effective for use in learning.
• The development of Edu-kits integrated with productive subjects in Vocational High Schools can lead to more meaningful learning and improve learning outcomes in the aspects of attitudes, knowledge, and skills. | 2020-08-06T09:09:13.830Z | 2020-07-31T00:00:00.000 | {
"year": 2020,
"sha1": "e474cce04ee401db9bee2c2ad344a12c99408660",
"oa_license": "CCBY",
"oa_url": "https://online-journals.org/index.php/i-jim/article/download/15571/7565",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "9c2d4c5eca916a4edf654f5793e7843a270993a3",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
257220043 | pes2o/s2orc | v3-fos-license | Propagation Constant Measurement Based on a Single Transmission Line Standard Using a Two-Port VNA
This study presents a new method for measuring the propagation constant of transmission lines using a single line standard and without prior calibration of a two-port vector network analyzer (VNA). The method provides accurate results by emulating multiple line standards of the multiline calibration method. Each line standard was realized by sweeping an unknown network along a transmission line. The network need not be symmetric or reciprocal, but must exhibit both transmission and reflection. We performed measurements using a slab coaxial airline and repeated the measurements on three different VNAs. The measured propagation constant of the slab coaxial airline from all VNAs was nearly identical. By avoiding disconnecting or moving the cables, the proposed method eliminates errors related to the repeatability of connectors, resulting in improved broadband traceability to SI units.
Introduction
The propagation constant is a critical parameter in transmission line analysis, providing valuable information about the electrical properties of materials at different frequencies.
The need for accurate measurement of the propagation constant arises in various applications, such as material characterization [1][2][3][4] or the estimation of the characteristic impedance of transmission lines, which allows impedance renormalization in various vector network analyzer (VNA) calibration methods [5]. Furthermore, knowledge of the propagation constant allows the analysis of losses along a transmission line, which is a critical aspect in signal integrity applications [6,7]. In general, there are many reasons to measure the propagation constant of guided wave structures such as transmission lines.
There are several methods to measure the propagation constant using a two-port VNA, but the most versatile method, because of its broadband applicability, is the multiline technique [8]. In this method, multiple lines of different lengths are measured to sample the traveling wave along the line standards in a broadband scheme. However, this approach has several drawbacks, including the need for multiple lines, the possibility of uncertainties in their geometry, and the requirement for accurate repeated connection or probing, all of which contribute to measurement uncertainties [9][10][11].
To address some of the problems of the multiline method, some techniques have been introduced, such as the multireflect method [12,13] and the line-network-network method [14][15][16][17]. The multireflect method uses multiple identical reflect standards with different offsets to provide broadband measurement of the propagation constant. However, because it requires multiple identical independent standards, it is susceptible to repeatability errors due to repeated connection or probing. In addition, the propagation constant must be solved using optimization techniques that could diverge if not well conditioned. The line-network-network method involves moving an unknown symmetric and reciprocal network along a transmission line and solving for the propagation constant using the derived similarity equations. This method has limitations, such as the restriction to three offsets, which limits the frequency range, and the requirement to use symmetric and reciprocal offset networks. The results of relative effective permittivity measurements using this method were presented in [18], highlighting the sensitivity and limitations of this solution.
It is noteworthy that there is a significant amount of literature discussing the broadband measurement of the propagation constant using only two line standards of varying lengths, commonly known as the line-line method [19][20][21][22]. Despite the different mathematical formulations used, all these methods are based on solving the characteristic polynomial of the eigenvalue problem associated with the thru-reflect-line (TRL) calibration [23]. However, because of the use of only two lines, which often have a significant length difference to cover lower frequencies, the result of the propagation constant exhibits multiple resonance peaks, caused by integer multiples of half-wavelength occurrences in the electrical length of the transmission lines. To mitigate this issue, some authors have proposed post-processing techniques to filter the resonance peaks [24,25].
There are several indirect techniques for determining the propagation constant of transmission lines, which involve evaluating the permittivity of materials separately. These methods can be broadly classified into two categories: the resonant method and the transmission/reflection method. The resonant method, described in [26,27], estimates the permittivity from the S-parameters at the resonant frequencies, resulting in measurements only at specific frequencies. In contrast, the transmission/reflection method estimates the permittivity of a sample placed between two waveguides from the measured transmission and reflection coefficients. This method can be implemented using various configurations such as free space, rectangular waveguide, and coaxial line, as discussed in [28][29][30][31][32].
This paper aimed to introduce a novel method for measuring the propagation constant using a single transmission line standard without the need for prior calibration of a two-port VNA. The proposed approach builds on the general idea presented in [14], where an unknown network is shifted along a transmission line. Unlike the previous solutions, our method is not limited by the number of offsets and can handle asymmetric and non-reciprocal networks. A weighted 4 × 4 eigenvalue problem is proposed to combine all offset measurements, inspired by the modified multiline method introduced in [33]. One of the key advantages of our approach is that it requires only one transmission line, which enables equations similar to those of the multiline method. The proposed method eliminates cable reconnection, ensuring high repeatability, with uncertainties mainly due to the dimensional motion of the unknown network and the intrinsic noise of the VNA. The effectiveness of the proposed method is demonstrated on a commercial slab coaxial airline with measurements conducted using three different VNA brands. This paper presents the mathematical derivation of the proposed method and experimental measurements, demonstrating its accuracy and high repeatability, even when different VNAs are used. The proposed method offers a promising alternative to existing methods for measuring the propagation constant.
The remainder of this paper is structured as follows. In Section 2, we provide a detailed explanation of the mathematical derivation of the eigenvalue problem formulation that allows for the adaptation of the multiline method. Subsequently, in Section 3, we discuss the use of normalized eigenvectors to extract the complex exponential terms, which contain the propagation constant, and the utilization of least squares to derive an accurate estimate of the propagation constant. In Section 4, we describe the experimental setup, where we perform measurements using various VNAs and present the measured propagation constant of the slab coaxial airline, as well as a comparison with EM simulation. Finally, a conclusion is given in Section 5.
Formulating the Eigenvalue Problem
The general idea of the measurement setup is to move an unknown network along a transmission line. For each movement of the network, either to the left or to the right, we created two offset elements that are complementary to each other. When the offset length is zero, the offset elements are reduced to a thru connection, which we refer to as the reference plane. An illustration of this concept is shown in Figure 1.
Before proceeding with the mathematical derivation, we need to define the sign convention for the offset shift. In our analysis, we define that moving the network to the right results in a positive offset, while moving the network to the left results in a negative offset. This convention is shown in Figure 2, as modeled by the error box model of a two-port VNA [34].
With the definition of the offsets in Figure 2, the measured T-parameters of the offset network with offset length l_i are given as follows:

M_i = k A L_i N L_i^(-1) B, (1)

where k, A, and B are the error terms of an uncalibrated two-port VNA. The matrices L_i and N are given as follows:

L_i = diag(e^(-γ l_i), e^(+γ l_i)), N = (1/S21) [[S12 S21 - S11 S22, S11], [-S22, 1]], (2)

Here, γ represents the propagation constant of the transmission line and {S11, S12, S21, S22} are the S-parameters of the network N. The S-parameters of the offset network are generally unknown, and the network can be asymmetric or non-reciprocal. However, the network must satisfy some basic criteria, which are listed below:
1. All S-parameters must be non-zero within the considered frequency range (|S_ij| > 0).
2. The S-parameters of the network should not change as the network is moved.
3. The network should not lead to the generation of additional modes along the transmission line.
Although the first condition is unique to our method's formulation, the remaining two conditions are also similar to the multiline method [8,33], which requires single-mode propagation and repeated error boxes. Fortunately, it is not difficult to design a system that satisfies these requirements. We will show this later in Section 4, where we used a commercial sliding tuner that was not designed for our application, but met our conditions.
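To make the model concrete, the sketch below builds the offset-line matrix L_i, converts the network's S-parameters to T-parameters, and forms the inner block L_i N L_i^(-1) of (1); the S-to-T convention shown is one common choice, and all numerical values are assumptions.

```python
import numpy as np

def line_T(gamma: complex, l: float) -> np.ndarray:
    """T-matrix of a matched line offset of length l (standard result)."""
    return np.diag([np.exp(-gamma * l), np.exp(+gamma * l)])

def s_to_T(S: np.ndarray) -> np.ndarray:
    """S- to T-parameter conversion of the network N (one common convention)."""
    S11, S12, S21, S22 = S[0, 0], S[0, 1], S[1, 0], S[1, 1]
    return np.array([[S12 * S21 - S11 * S22, S11],
                     [-S22, 1.0]]) / S21

# Shifting the network by l_i inserts the offset on one side and removes it
# on the other, so the block seen between the error boxes is L_i N L_i^{-1};
# in an actual measurement k*A and B of (1) multiply from left and right.
gamma = 0.02 + 50j          # assumed propagation constant (1/m)
N = s_to_T(np.array([[0.3, 0.8], [0.8, 0.3]]))
L = line_T(gamma, 0.01)     # 10 mm offset
M_inner = L @ N @ np.linalg.inv(L)
print(M_inner)              # off-diagonals scale by exp(-/+ 2*gamma*l)
```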
We now define the T-parameters of a new network by taking the difference of the T-parameters of two offset networks of different lengths l_i and l_j, as given in (3), with the associated factor definitions in (4). The expression in (3) is very similar to a line standard in multiline calibration, but now the line standard is described by an antidiagonal matrix with additional multiplication factors. We define an equivalent measurement of a line standard in (5). Similar to the multiline calibration, we also need an equation that describes the inverse of the measurements. This is given in (6), where the matrix N_{i,j} is defined in (7). Given the expressions in (5) and (6), we can construct an eigenvalue problem in terms of A, as stated in (8), where the matrix product N_{i,j} N_{n,m} is given in (9), with the accompanying definitions in (10). To have a valid eigenvalue problem, we need at least three unique offsets, where one of the offsets l_n or l_m can be equal to l_i or l_j, but l_i ≠ l_j, or vice versa. However, with three offsets, we have three possible pairs of eigenvalue problems. In fact, for N ≥ 3 offsets, we have N(N − 2)(N² − 1)/8 possible pairs of eigenvalue problems. This is because, for a set of N offsets, we have N(N − 1)/2 pairs, and when we create pairs from N(N − 1)/2 pairs, we substitute the equation into itself, resulting in N(N − 2)(N² − 1)/8 pairs of pairs.
To address the issue of multiple eigenvalue problems, we refer to our previous work in [33,35], where a similar problem was presented in the context of multiline calibration. This problem was solved by combining all measurements using a weighting matrix, reducing the problem to solving a single 4 × 4 eigenvalue problem, regardless of the number of lines. This method not only reduces the size of the problem, but also allows us to express both error boxes A and B simultaneously in a single matrix using Kronecker product notation. By applying the techniques described in [33,35], we obtain the set of equations in (11a) and (11b), with the quantities defined in (12a) and (12b). The details of the definition and properties of the Kronecker product (⊗) and the matrix vectorization (vec()) can be found in [36].
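The step of collecting both error boxes into one matrix rests on the vectorization identity vec(A X B) = (B^T ⊗ A) vec(X), which the following sketch verifies numerically.

```python
import numpy as np

# Numerical check of vec(A X B) = (B^T kron A) vec(X), the identity that
# lets both error boxes act as a single Kronecker-product matrix on the
# vectorized measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
B = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
X = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))

vec = lambda M: M.flatten("F")  # column-major (mathematical) vectorization
lhs = vec(A @ X @ B)
rhs = np.kron(B.T, A) @ vec(X)
print(np.allclose(lhs, rhs))    # True
```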
We now formulate the main eigenvalue problem by defining a new matrix W, which we multiply on the right side of (11a). We call this matrix the weighting matrix. In the next step, we construct the weighted eigenvalue problem by multiplying the new equation on the left side of (11b). This results in the similarity relation given in (13).
MW M
The expression in (13) represents a similarity problem between the matrices F and H, with X as the transformation matrix. The purpose of introducing the weighting matrix W is to transform this similarity problem into an eigenvalue problem by forcing H into a diagonal form. It turns out that if W is any non-zero skew-symmetric matrix, then H takes a diagonal form [33]. However, we do not only want to diagonalize H; we also want to maximize the distance between the eigenvalues, which in turn minimizes the sensitivity of the eigenvectors [37]. For multiline calibration, the optimal form of W was derived in [33], and since the formulation in (13) is similar to that discussed in [33], we used the same choice of W with some scaling modifications. The optimal weighting matrix W, with the scaling factors taken into account, is given in (14). As a result of choosing W as defined in (14), the expression in (13) takes the eigendecomposition form given in (16),
in which λ is real-valued and proportional to the squared Frobenius norm of the matrix W. There are two ways to compute W. The first is the direct method, in which we already know the propagation constant γ and the factor κ that describes the unknown network. Naturally, the first option is not practical, since both γ and κ are unknown. The better option is to apply a rank-2 Takagi decomposition to the left-hand side of (18), as described in [35] for multiline calibration.
Note that the left-hand side of (18) contains only measurement data, while the right-hand side describes the model. Furthermore, the error boxes are not present in (18). To determine W, we need to calculate the rank-2 Takagi decomposition. This was performed in two steps: first, we computed the rank-2 approximation of the left-hand side of (18) via singular-value decomposition (SVD), and then we applied the Takagi decomposition to factor the truncated matrix into its symmetric basis [38]. The weighting matrix W is then constructed from the resulting symmetric factors. The derivation of the matrix W is described in more detail in [35]. To resolve the sign ambiguity, one approach is to select the solution that has the smallest Euclidean distance to a known estimate. Such an estimate can be obtained from approximate knowledge of the material properties of the transmission line.
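To make the two-step procedure concrete, a minimal sketch is given below; it is an illustration of the decomposition rather than the authors' exact implementation. The input M stands in for the measurement-only left-hand side of (18), and the phase-correction step assumes the two leading singular values are distinct.

```python
import numpy as np

def rank2_takagi(M):
    """Rank-2 Takagi factorization of a complex symmetric matrix M.

    Returns W and the two leading singular values s such that
    M ~= W @ np.diag(s) @ W.T (transpose, not conjugate transpose).
    """
    U, s, Vh = np.linalg.svd(M)
    U2 = U[:, :2]
    V2c = Vh.T[:, :2]  # conj(V), restricted to the first two columns
    # For symmetric M, U^H conj(V) is (approximately) diagonal and unitary;
    # absorbing its square root into U turns the SVD into a Takagi form.
    # A residual sign ambiguity remains, resolved against an estimate
    # as described in the text above.
    d = np.sqrt(np.diag(U2.conj().T @ V2c).astype(complex))
    return U2 * d, s[:2]
```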
The last step is the solution for the eigenvectors, described by X in (16). The solution of the eigenvectors was discussed in [33]. It is worth noting that we cannot solve for the matrix X uniquely, but only up to multiplication by a diagonal matrix. Therefore, to define a unique solution for X, we normalized its columns so that the diagonal elements are equal to one; in this normalization, a_11 and b_11 are elements of the error boxes A and B (see (1) and (12a)).
Least-Squares Solution for the Propagation Constant
Knowing X from the eigenvector solution, we can extract the complex exponential terms that contain the propagation constant. To do this, we first multiply all vectorized measurements of the offset network by the inverse of the normalized error terms. Since we do not know the remaining error terms k, a_11, and b_11, nor the S-parameters of the network N, we need to choose a reference offset to eliminate these unknown factors. For simplicity, we chose the first offset, which we define as zero, i.e., l_1 = 0 (any other choice is also valid). As a result, the positive and negative complex exponential terms follow directly, written using Python-based indexing notation.
Now that we have the complex exponential terms, we can extract the exponents using the complex logarithm and determine γ using the least-squares method, while taking care of any phase unwrapping. First, since we have both the positive and negative complex exponential terms, we account for both by averaging them into a new vector τ. The next step is to compute the logarithm to extract the exponents, which yields the phase terms φ. The phase-unwrapping factor n can be estimated by rounding the difference between φ and an estimated value, where γ_est is a known approximation of γ and l is a vector containing all length offsets except the zero reference. The initial estimate γ_est can be derived from the material properties of the transmission line. Finally, we determine γ through weighted least squares [39] with the weighting matrix V⁻¹, where I is the identity matrix and 1 is a vector of ones. The weighting matrix V⁻¹ is necessary because each measurement shares the common reference l_1; the resulting correlation between the measurements is taken into account by V⁻¹ [39].
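A compact sketch of this estimation step is shown below. The variable names are ours, and the covariance model V = I + 1·1ᵀ is an assumption inferred from the description of V⁻¹ in terms of the identity matrix and a vector of ones.

```python
import numpy as np

def estimate_gamma(tau, l, gamma_est):
    """Weighted least-squares estimate of the propagation constant.

    tau       : averaged complex exponential terms, one per nonzero offset
    l         : offset lengths in meters (zero reference excluded)
    gamma_est : a-priori estimate of gamma used for phase unwrapping
    """
    phi = np.log(tau)  # principal branch of the complex logarithm
    # integer unwrapping factors relative to the a-priori estimate
    n = np.round(((gamma_est * l - phi) / (2j * np.pi)).real)
    phi = phi + 2j * np.pi * n
    # correlation induced by the shared zero-length reference (assumption)
    N = len(l)
    Vinv = np.linalg.inv(np.eye(N) + np.ones((N, N)))
    return (l @ Vinv @ phi) / (l @ Vinv @ l)
```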
Measurement Setup
For demonstration purposes, we used the slide screw tuner 8045P from Maury Microwave as an implementation of the offset network, where the transmission line was a slab coaxial airline that supported frequencies up to 18 GHz. The tuner is depicted in Figure 4. For our method to work, we required that the unknown network (i.e., the tuner element) be both reflective and transmissive, as the factor κ in (9) can diverge to infinity if the network is only reflective and can be zero if the network is only transmissive. Ideally, we wanted κ = 1 to minimize its effect on the eigenvalue problem. However, we also wanted to avoid scenarios where the network causes the generation of additional modes or resonances. Therefore, we adjusted the tuner with an already calibrated VNA to tune the network to a desired response, as shown in Figure 5. It should be noted that this step of tuning the tuner with an existing calibrated VNA was only necessary because the tuner was a commercial product designed for circuit matching applications and not for our purposes. If we were designing the network ourselves, we would not need to measure it with a calibrated VNA, because we would have already designed it to meet our frequency specifications. Furthermore, the S-parameters of the network were never explicitly used in the derivation of the propagation constant. As shown in Figure 5, we set the lower frequency to 3 GHz to avoid very low return loss and resonances. We then measured the airline using different uncalibrated VNAs. This was done to demonstrate that, even if we changed the measurement system, we would still obtain consistent results, since differences between the setups are absorbed by the error boxes, including uncertainties caused by connector and cable movement. For the offset lengths, we chose {0, 21, 66, 81, 84, 93, 117, 123, 171, 192} mm, which ensured that the eigenvalue λ in (16) would not reach zero in the target frequency range.
The VNAs used for the measurements were an Anritsu VectorStar, an R&S ZNA, and a Keysight ENA; the ENA is limited to 14 GHz. All VNAs were placed in the same room to ensure identical ambient conditions. The power level and IF bandwidth for all VNAs were set to 0 dBm and 100 Hz, respectively. Due to the low loss of the airline, each measurement was averaged over 50 frequency sweeps to reduce noise. Pictures of the three instruments are shown in Figure 6.
Results and Discussion
All measurements of the different offsets were taken without prior calibration of the VNAs. The collected data were then read in Python using the scikit-rf package [40]. In Figure 7, we show the measured magnitude response of S_11 and S_21 from all three VNAs for the 123 mm offset. From the figure, we can see that the three VNAs give different responses because the error boxes are different for each VNA. After collecting all raw measurements for all the offsets and from all the VNAs, the data were processed to extract the propagation constant according to the discussion in Sections 2 and 3. For an easier interpretation of the extracted propagation constant, we plotted in Figure 8 the real part of the relative effective permittivity and the loss per unit length of the slab coaxial airline from all three VNA measurements; both are calculated directly from the propagation constant, where c_0 is the speed of light in vacuum and f is the frequency. The relative effective permittivity and loss per unit length presented in Figure 8 showed clear agreement between all VNA measurements, demonstrating the high repeatability of the proposed method even when using different VNA setups. We also performed an EM simulation with the dimensional parameters of the airline given in Figure 4. Unfortunately, we did not have information on the metal types of the inner and outer conductors. From the appearance of the inner conductor, we believe it was made of some kind of brass. For the ground plates, we believe they were made of aluminum, because they had a black anodized coating, which is typical for aluminum components. The anodized layer is often based on aluminum oxide and typically has a relative permittivity of 8.3 [41]. Since the thickness of the oxide layer and the exact conductivity of the brass were unknown, we swept a range of values for the thickness of the anodic layer and the conductivity of the brass. We found that a coating thickness of 15 µm and a relative conductivity of 35% International Annealed Copper Standard (IACS) overlapped with the measurement shown in Figure 8. The value obtained for the thickness of the anodic layer is quite typical for a dark black coating [42]. The conductivity of the inner conductor of 35% IACS (=20.3 MS/m) is within the range of common brass types [43].
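For completeness, the conversion from the extracted propagation constant to the quantities plotted in Figure 8 uses the standard transmission-line relations; the sketch below shows them, with a placeholder γ (in practice, the raw Touchstone files would be loaded with scikit-rf and γ obtained from the extraction described above).

```python
import numpy as np

c0 = 299792458.0                 # speed of light in vacuum (m/s)
f = np.linspace(3e9, 18e9, 301)  # frequency axis in Hz

# placeholder gamma (1/m); in practice this comes from the extraction,
# with raw data loaded via scikit-rf, e.g., skrf.Network("offsets.s2p")
gamma = 0.05 * f / 1e9 + 2j * np.pi * f * np.sqrt(1.2) / c0

eps_eff = -(c0 * gamma / (2 * np.pi * f)) ** 2     # relative effective permittivity
loss_db_per_m = 20 * np.log10(np.e) * gamma.real   # ~= 8.686 * Re(gamma), in dB/m
```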
The purpose of the simulation was to show that the results obtained from the proposed method of measuring the propagation constant do indeed translate into realistic properties of the transmission line. In fact, with the proposed method, one could characterize materials in reverse, as we did here for the conductivity of the metal.
Another aspect that may be of interest is the quality of the extracted propagation constant as the length and number of offsets are varied. In the results shown in Figure 8, we used 10 offsets ranging from 0 to 192 mm. Now, we consider different cases, which are listed in Table 1. In Figure 9, we show the results of the relative effective permittivity and the loss per unit length of the slab coaxial airline from the VectorStar VNA measurements for all the cases mentioned in Table 1. Cases 1 and 2 show the results when only three offsets were considered. Case 2 differs from Case 1 in that we replaced the last offset with a much longer offset. The results of both Cases 1 and 2 were poor and showed multiple resonances. For Case 2, we saw more resonances than for Case 1. This was the result of the eigenvalue crossing zero at multiple frequencies (see Figure 10). In Case 3, we spread the offsets further to include five offsets. We can see a clear improvement over Cases 1 and 2. We could further improve the accuracy of the extracted relative effective permittivity and loss per unit length by spreading the offsets further, as in Case 4, where we used seven offsets. In Case 4, we obtained results of similar accuracy to the case of using all 10 offset lengths. The quality of the extracted propagation constant depends on the eigenvalue λ as defined in (16). As the eigenvalue approaches zero, the eigenvectors become more sensitive, which in turn affects the calculation of the extracted propagation constant. To visualize the differences between the scenarios, we present a scaled representation of the eigenvalue λ for each case. This scaled representation excludes the influence of the network through the common factor κ, which is invariant over all offset lengths. Since κ > 0 was established earlier, variations in the eigenvalues can only be induced by the choice of offset lengths. Accordingly, we define the normalized version of the eigenvalue by dividing it by the absolute value of κ. In Figure 10, we present a plot of the scaled eigenvalue normalized to its maximum value, which facilitates a consistent comparison as the number of offsets varies. As illustrated in the figure, for Cases 1 and 2, the eigenvalue exhibited multiple zero crossings at various frequencies. Similarly, in Case 3, the eigenvalue approached zero at several instances, although to a lesser extent than in Cases 1 and 2. In contrast, in Case 4, the eigenvalue never reached zero, but attained values closer to zero at specific frequencies than when all 10 offsets were utilized. Ideally, a flat eigenvalue over frequency would be preferred, but this would necessitate employing even more offsets. This is no different from the multiline calibration approach proposed in [33], where a finer spacing between lines resulted in a flatter eigenvalue over frequency. Therefore, utilizing a broader range of offset lengths is highly advantageous for enhancing the accuracy of the results across frequency. It is also noteworthy that the eigenvalue possesses a bandpass characteristic, whereby the lowest and highest frequency limits are bound by the largest and smallest relative offset, respectively. For comparison, it is worth noting that the multiline method necessitates the measurement of multiple line standards of different lengths, a process that can introduce errors due to connector repeatability.
Achieving high repeatability in this context poses a significant mechanical challenge, especially concerning connectors, and automating this process represents an even greater hurdle. In contrast, our proposed method eliminates the need for physical contact between the sliding element and the transmission line. Furthermore, although the sliding process was performed manually in the example presented, it could be automated by employing a linear actuator, thus eliminating the need for any user interaction with the measurement system. In Table 2, we summarize the comparison between the proposed method and existing works on measuring the propagation constant of transmission lines.
Conclusions
We presented a new broadband method for measuring the propagation constant of transmission lines that does not require the prior calibration of a two-port VNA or the use of multiple line standards. This method provides accurate results by emulating the use of multiple line standards through sweeping an unknown network along a transmission line. The shifted network does not have to be symmetric or reciprocal, but it must exhibit both transmission and reflection properties and remain invariant when moved along the line. The experimental results obtained using different VNAs on a slab coaxial airline with a slider tuner showed consistent agreement with each other and with the EM simulation.
One of the significant advantages of the proposed method is that it uses the same eigenvalue formulation as multiline calibration, but without the need for disconnecting or moving the cables. As a result, it eliminates errors related to connector repeatability and provides improved broadband traceability to the SI units. Moreover, since the offsets are implemented by simply moving the unknown network laterally, the process can be easily automated using a linear actuator. Therefore, the proposed method can accurately measure the propagation constant without requiring any physical interaction from the user with the measurement system.
Data Availability Statement: Publicly available datasets were analyzed in this study. These data can be found here: https://github.com/ZiadHatab/two-port-single-line-propagation-constant (accessed on 27 April 2023).
Acknowledgments:
The financial support by the Austrian Federal Ministry for Digital and Economic Affairs and the National Foundation for Research, Technology, and Development is gratefully acknowledged. The authors thank ebsCENTER for lending their Anritsu and Keysight VNAs, and Maury Microwave for their support in providing the airline cross-section dimensions of the 8045P tuner.
Conflicts of Interest:
The authors declare no conflict of interest. | 2023-02-28T06:42:26.409Z | 2023-02-27T00:00:00.000 | {
"year": 2023,
"sha1": "ac0e1ea6acefa0453e0e4852a42f336dfbbe7873",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "174c537eeed9c27ef5fe02760c31f56128d03fcd",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Physics"
]
} |
235666387 | pes2o/s2orc | v3-fos-license | Effect of change in moisture content of Sumatra Forest Honey on total sugar, electrical conductivity and color
Sumatra forest honey is a type of forest honey produced by Apis dorsata. High humidity in forest environments and the open type of nest can increase the moisture content of honey because of its hygroscopic nature. High moisture content creates the potential for fermentation, which lowers the physicochemical quality of honey and can damage packaging during long-term storage. This research examined the effect of reducing the moisture content of Sumatran forest honey from 25% to 22% using a vacuum evaporator. The study used an experimental method with a completely randomized design consisting of four treatments and four replications. The results were as follows: total sugar content ranged from 72.5% to 76% and was very significantly affected by moisture content; electrical conductivity ranged from 0.92 to 1.08 mS/cm and was significantly affected; and the L*a*b* color intensity ranged from 14.67 to 17.93 and was not significantly affected by the moisture content of the honey. In conclusion, decreasing the moisture content of honey using a vacuum evaporator was able to improve the physical and chemical quality of honey.
Introduction
Sumatran forest honey is honey produced by the wild bee Apis dorsata. Sumatra has vast forests scattered across various districts such as Solok, Dharmasraya, Sijunjung, and Pesisir Selatan [1]. Each district has different vegetation characteristics and altitudes, which are assumed to influence the quality of the honey produced. Unlike honey from grazing bees, Sumatran forest honey is dark in color, smells slightly sour, and is watery. The quality of honey is influenced by the surrounding environment because honey readily absorbs water (it is hygroscopic). Sumatran forests have diverse plant vegetation as well as high humidity, between 60% and 90%, which can affect the quality of the honey produced.
The open form of the forest honeycomb, combined with high humidity, affects the physical and chemical quality of forest honey, particularly its moisture content. Moisture content greatly affects honey quality because it determines shelf life. The moisture content specified by the Indonesian National Standard (SNI 8664:2018) is 22%, while Codex Alimentarius 2001 specifies that the moisture content of honey should not exceed 20%. Sumatran forest honey has a moisture content between 24% and 26%, creating the potential for yeast-driven fermentation. Yeast degrades the sugars in honey and produces alcohol, which turns into acetic acid and oxalic acid when it interacts with oxygen; this can impair sensory quality and decrease the nutritional value of honey [2]. The diverse plant vegetation of Sumatran forests produces various types of flower nectar, and the mineral content of the soil is thought to determine the mineral content, and hence the color, of the honey produced.
Reducing the moisture content of honey can help maintain its physical and chemical quality during storage. One method is a heating process using a vacuum evaporator. Reducing moisture content by the vacuum method can lower the moisture content and prevent granulation by the sugar in honey. A vacuum evaporator can preserve honey quality because the heating process uses an optimal temperature and an appropriate pressure, so the physical and chemical qualities of the honey are not damaged. This study aims to determine the effect of moisture content levels in honey on its total sugar content, electrical conductivity, and color intensity.
Location
This research was conducted at the Laboratory of Animal Products Technology, Faculty of Animal Science, Brawijaya University, and at PT. Kembang Joyo Sriwijaya, from 15 June to 23 August 2020.
Method
This study used an experimental method with a completely randomized design (CRD) consisting of 4 treatments and 4 replications. The treatments were honey moisture contents of 25%, 24%, 23%, and 22%, each with 4 replications. The data were analyzed by analysis of variance (ANOVA) followed by Duncan's multiple range test (DMRT) when a significant difference was found (P < 0.05). The honey quality parameters observed were total sugar content, electrical conductivity, and honey color intensity.
Sample collection and preparation
Samples were taken from the same drum container, with 50 kg of honey used for each run of the evaporation process. The tools used were a digital platform scale with a capacity of 500 kg, a large pot, a cloth filter, and a vacuum evaporator. Evaporation was carried out by pouring 50 kg of weighed honey into the vacuum machine. The honey, with an initial moisture content of 26%, was then heated until it reached a temperature of 60 °C at a pressure of 60 atm. The moisture content was checked every 10 minutes, and a sample was taken whenever it matched one of the desired moisture contents. The evaporation process was continued after each sampling until the honey reached a moisture content of 22%.
Total sugar
Total sugar in honey was tested using a Brix refractometer [3]. The instrument used was a manual Atago refractometer, which measures the refractive index of honey and was cleaned regularly with distilled water. To use the honey refractometer, the daylight plate is opened and a few drops of honey are applied until the honey covers the entire blue area; the result is read from the sweetness scale shown in the viewfinder. Sweetness is expressed in percent Brix (% Brix).
Electrical conductivity
The honey electrical conductivity test was carried out using an EC meter (a professional 2-in-1 pH/EC meter). The electrical conductivity of honey is determined by weighing a 10 g sample (20% w/v) on a digital scale, dissolving it in 50 mL of distilled water in a beaker glass, and homogenizing. Electrical conductivity is measured by immersing the EC-meter electrode in the honey sample solution [3]. In this study, a smaller sample volume at the same 1:5 ratio was used: 2 mL of honey and 10 mL of distilled water.
Color intensity
The color intensity of honey was determined in the L*, a*, b* system. Color intensity was measured by attaching the CS-10 colorimeter sensor to a sample placed in a 20 mL film pot, taking three readings, and averaging the L*, a*, and b* values. L* is the lightness coordinate, with a range of 0-100. The a* value is the saturation on the red-green axis: a positive a* indicates red and a negative a* indicates green. A positive b* value indicates yellow and a negative b* value indicates blue [4]. The instrumental color difference can be calculated by the formula: ΔE* = (ΔL*² + Δa*² + Δb*²)^(1/2)
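As a worked example, the color difference between two honey samples can be computed directly from their averaged L*, a*, b* readings; the values below are made up for illustration.

```python
import numpy as np

def delta_e(lab1, lab2):
    """CIE76 color difference between two (L*, a*, b*) triplets."""
    return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))

# hypothetical averaged readings for two honey samples
print(delta_e((16.5, -1.2, 3.4), (14.7, -0.9, 2.8)))  # ~1.92
```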
Total sugar
The results of the total sugar test are shown in Table 1. The Duncan test showed that the decrease in moisture content in Sumatran forest honey had a very significant effect on the total sugar content of honey (P < 0.01). The sample testing showed that the decrease in moisture content in honey was accompanied by an increase in total sugar content. The highest total sugar content was 76% Brix in the treatment in which the moisture content of the honey was reduced to 22%, in accordance with SNI 8664:2018. The lowest sugar content was 72% Brix in the treatment with a honey moisture content of 25%. The average total sugar content in all treatment samples exceeded the standard set by Codex Alimentarius 2001, namely a minimum of 60 g per 100 g.
The main components of honey are sugar and water. The dominant carbohydrates in honey are the reducing sugars glucose and fructose at 70-80%, with a moisture content of 10-20% and other components, namely organic acids, minerals, vitamins, proteins, enzymes, volatile components, and flavonoids [6]. Total sugar is influenced by the sucrose content, the main sugar in nectar. Sucrose can be broken down by the enzyme invertase into the simple, reducing sugars fructose and glucose [5]. The Indonesian National Standard 8664:2018 specifies a minimum total reducing sugar content in honey of 65%, so all samples met this standard. The total sugar content in honey is influenced by the plant origin (nectar), geographical origin, climate, processing, and storage [6]. The total sugar content and moisture content of honey can be used to control the honey granulation process [7]. Crystallization occurs in honey with a high sugar content: glucose forms glucose monohydrate crystals, which then separate from the water and fructose [6]. The lowest total sugar (% Brix) was found in the honey with the highest moisture content, namely 25%. The moisture content of honey can trigger yeast activity, causing fermentation. The yeasts that ferment honey belong to the genus Zygosaccharomyces, which is resistant to high sugar concentrations and can therefore live and thrive in honey. Yeast in honey can degrade sugars such as dextrose and levulose into alcohol and CO2, affecting the total sugar content of honey [8].
Electrical conductivity
The results of the electrical conductivity test are shown in Table 2. The moisture content level had a significant effect on the electrical conductivity of honey (P < 0.05). The lowest average conductivity value belonged to the Sumatran forest honey with the highest moisture content, namely 25%. The electrical conductivity in this study increased steadily with decreasing moisture content. The electrical conductivity of honey is standardized in Codex Alimentarius 2001, which states that it should not exceed 0.8 mS/cm. The electrical conductivity values of all honey samples exceeded this limit. However, the state of honey after dilution with distilled water cannot be related directly to this provision because of the many properties and constituents of honey; using a solvent such as purified water to dissolve honey can itself raise the conductivity to a value of 1.1 µS/cm, which is equivalent to pure water with a NaCl content of 0.48 ppm. The highest electrical conductivity values were found at moisture contents of 23% and 22%, namely 1.037 ± 0.017 mS/cm and 1.035 ± 0.043 mS/cm, respectively. Electrical conductivity varies depending on geographic location and botanical conditions. It is influenced by ash content and acidity: the higher the ash and acid content, the higher the electrical conductivity [7]. Electrical conductivity is also influenced by the mineral content and organic acids in honey [9]. The acidity of honey is related to the organic acids it contains, and the mineral content of honey characterizes the plants from which it is derived. Forest honey is thought to have the advantage of a high mineral content, which would raise its electrical conductivity. The mineral content of honey comes from flower nectar, which is influenced by soil minerals, so forest environmental conditions can increase the solid content and the electrical conductivity of honey.
Color intensity
The results of the L*a*b* color intensity test are shown in Table 3. The moisture content level did not have a significant effect on the color intensity of honey, although the highest color intensity was found in the honey with the lowest moisture content, namely 22%, with a color intensity of 16.51 ± 1.02. The L* notation indicates the brightness of the object; the a* notation indicates redness-greenness, with a positive value meaning the object tends toward red and a negative value toward green; and the b* notation indicates yellow-blue, with a positive value meaning the object tends toward yellow and a negative value toward blue [10]. Images of the honey samples are shown in Figure 1. Color differences in honey are caused by pigments such as carotenoids and flavonoids, which are influenced by the plant type and the geographical origin of the honey. Harvest age and the consistency of the honey also affect the resulting color, and both depend on the moisture content, saccharides, and pollen of the honey. Sumatran forest honey has a darker color and is indicated to have a higher phenolic content than light-colored honey [11].
The L* value decreases, approaching zero, as browning proceeds during the heating of honey. The degree of browning is expressed as the browning index (BI), which is influenced by the heating time: non-enzymatic reactions involving reducing-sugar carbonyl groups, aldehydes, ketones, protein amino groups, and other compounds produce browning products during heating. The heating temperature used in the evaporation process was 60 °C, balanced against the applied pressure so that the boiling point of honey was reached in a short time, without denaturing the bioactive components of the honey. In addition, the a* values of all samples place the honey on the red side of the red-green axis, and the positive b* values indicate that Sumatran forest honey is more yellowish in color.
Conclusion
Reducing the moisture content of honey using a vacuum evaporator maintained honey quality in terms of total sugar content, electrical conductivity, and color intensity. | 2021-06-29T20:03:44.330Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "456e6012963a7f311da924422f452ce54f1d33ad",
"oa_license": "CCBY",
"oa_url": "https://iopscience.iop.org/article/10.1088/1755-1315/788/1/012107/pdf",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "456e6012963a7f311da924422f452ce54f1d33ad",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Physics"
]
} |
245272945 | pes2o/s2orc | v3-fos-license | A phosphite-based screening platform for identification of enzymes favoring nonnatural cofactors
Enzymes with a dedicated cofactor preference are essential for advanced biocatalysis and biomanufacturing, especially when employing nonnatural nicotinamide cofactors in redox reactions. However, directed evolution of an enzyme to switch its cofactor preference is often hindered by the lack of an efficient and affordable screening method, as the cofactor per se or the substrate can be prohibitively expensive. Here, we developed a growth-based selection platform to identify nonnatural cofactor-dependent oxidoreductase mutants. The growth of the bacteria depended on the nicotinamide cytosine dinucleotide (NCD)-mediated conversion of non-metabolizable phosphite into phosphate. The strain BW14329, which lacks the ability to oxidize phosphite, was suitable as the host, and NCD-dependent phosphite dehydrogenase (Pdh*) is essential to the selection platform. A previously confirmed NCD synthetase with NCD synthesis capacity and an NCD-dependent malic enzyme were successfully identified using the platform. The feasibility of this strategy was thus demonstrated with an NCD-active malic enzyme and in the directed evolution of NCD synthetase in Escherichia coli. In summary, a phosphite-based screening platform was built for the identification of enzymes favoring the nonnatural cofactor NCD. In the future, once Pdh variants favoring other biomimetic or nonnatural cofactors are available, this selection platform may be readily redesigned to attain new enzyme variants with the anticipated cofactor preference, providing opportunities to further expand the chemical space of redox cofactors in chemical biology and synthetic biology.
To facilitate mNADs-dependent biotransformations, it is essential to engineer enzymes to favor mNADs, and various studies have focused on optimizing the cofactor preference of enzymes 11. NCD is the only artificial coenzyme that has been successfully biosynthesized and used in orthogonal redox reactions intracellularly, as demonstrated in our earlier studies [12][13][14]. Various NCD-favoring oxidoreductases, such as malic enzyme, phosphite dehydrogenase, d-lactate dehydrogenase, formate dehydrogenase, and formaldehyde dehydrogenase, have been successfully designed 2,[15][16][17][18]. However, these enzymes are not sufficient for the wide application of NCD in the future. Thus, one of the remaining challenges is the efficient directed evolution of NCD-dependent enzymes. Although a colorimetric method is useful for screening NCD-favoring mutants, it is labor-intensive.
A high-throughput screening method that can correctly identify rare positive hits from diverse mutant libraries is critical to directed evolution. However, successful directed evolution is often hindered by the lack of efficient and affordable selection methods, especially for enormous mutant libraries 19. Various methods have been developed to identify mNADs-active dehydrogenase mutants based on mass spectrometry and spectrophotometric absorbance changes 2,3,20,21. However, these methods are labor-intensive and time-consuming without robotic systems, and they yield low throughput 22. By contrast, the growth-complementation method, which couples the examined enzyme property to the fitness of the host cell, does not depend on intensive labor 23. This approach has been successfully developed and used to evolve NAD(P)H-dependent oxidoreductases based on redox-balance principles in engineered Escherichia coli (E. coli) strains with disrupted intracellular cofactor cycling 24,25. A recent growth-based selection strategy has also been applied to engineering NMN-dependent enzymes by linking E. coli growth to the NMN cycle with OD600 as the readout 4. Therefore, we hypothesized that if NCD balance could be linked to E. coli growth, a similar selection strategy might be adapted to evolve NCD-active enzymes.
Here, we built a phosphite-based selection platform for the initial screening of libraries to identify NCD-active mutants, with growth as the readout. The selection platform operates on NCD-driven phosphite metabolism. Briefly, phosphite serves as the sole phosphorus source for E. coli. When the native phosphite oxidation pathway is disrupted, cell growth relies on the heterologously introduced NCD redox cycle. The NCD-driven phosphite oxidation reaction is catalyzed by the NCD-dependent phosphite dehydrogenase (Pdh_I151R/P176R/M207A, Pdh*), a mutant with robust NCD preference 15. NCD synthetase, which was created in our previous study, is employed for in vivo NCD biosynthesis from CTP and NMN 22. A closed NCD redox cycle is then formed with an NCD-dependent oxidoreductase mutant that regenerates NCD. The feasibility of this strategy was proved using the NCD-dependent malic enzyme (ME-L310R/Q401C, ME*) as the candidate 2. We hypothesized that if phosphite dehydrogenase can be engineered to favor other mNADs, such a paradigmatic selection scheme might be designed and applied to engineering diverse mNADs-favoring oxidoreductases, as well as mNADs synthetases, in the future.
Results and discussion
Design of the phosphite-based selection system. The design of our selection system relies on a closed NCD cycle to drive the oxidation of phosphite in E. coli (Fig. 2). It consists of four basic elements: an engineered E. coli strain that cannot use phosphite as the sole phosphorus source, NCD, a heterologous NCD-dependent phosphite oxidation pathway, and an NCD regeneration pathway. The growth of E. coli was coupled to the NCD cycle through phosphite oxidation. This was achieved by disrupting endogenous phosphite metabolism and directing the NCD-dependent phosphite dehydrogenase (Pdh*) into life-essential phosphorus metabolism. Since cells cannot biosynthesize NCD autonomously, an NCD synthetase created in our previous research, by reprogramming E. coli nicotinic acid mononucleotide adenylyltransferase (NadD) to use CTP and NMN as substrates, can be employed for in vivo NCD biosynthesis 22. In the presence of intracellular NCD and Pdh*, cell growth was restored only when a closed NCD redox cycle was formed with an NCD-active oxidoreductase, suggesting that this system can be used to screen for NCD-active oxidoreductases. Furthermore, the system also has the potential to screen NCD synthetases when the NCD cycle is otherwise complete. In this system, the specific functions of NCD and the complementary enzymes are interchangeable and not individually linked to cell survival. Therefore, we anticipate that the phosphite-based selection will be highly instrumental in engineering diverse NCD-dependent oxidoreductases and NCD synthetases.
As the selection system depends on phosphite metabolism to provide the phosphorus source for cell growth, we first sought an appropriate host that could not oxidize phosphite. E. coli is a commonly used host strain for the directed evolution of proteins. However, E. coli has two independent pathways for oxidizing phosphite to phosphate, depending on the phn operon and the phoA locus, respectively 26,27. Strains BW14329, BW16787, BW16847, and BW22246 carry a deletion of phoA and varying degrees of deletion of the phn gene cluster. The capacity of these strains to oxidize phosphite was demonstrated by their ability to grow in MOPS minimal media with phosphite as the sole phosphorus source (Fig. 3A). All tested strains grew normally on media with phosphate as the phosphorus source. Consistent with expectations, the engineered strains with double knockout of phoA and phn could not grow on phosphite medium, in contrast to the control strain BW25141. Therefore, strains BW14329, BW16787, BW22246, and BW16847 can be used as hosts in the phosphite-based selection system. BW14329 was randomly selected as the host strain for the following study.
Bioorthogonality of the phosphite dehydrogenase mutant.
Because the growth of the host cell depends on NCD-dependent phosphite metabolism, this selection system requires a highly active and specific NCD-dependent phosphite dehydrogenase; such an enzyme should not provide a phosphorus source for cell growth through NAD. Hence, we tested the bioorthogonality of Pdh* by monitoring the growth of engineered strains expressing different phosphite dehydrogenases. In our previous work, the cofactor preferences of a series of Ralstonia sp. strain 4506-derived Pdh mutants, including Pdh_I151R, Pdh_I151R/P176E, and Pdh* (Pdh_I151R/P176R/M207A), were characterized. According to the kinetic constants of these mutants (Table S1), although Pdh_I151R and Pdh_I151R/P176E had higher NCD preference, they still retained high activity with NAD 15. The high K_m value (4.7 mM) and the low k_cat/K_m value (0.045 mM⁻¹ s⁻¹) for NAD indicated that only Pdh* had sufficiently low activity with intracellular NAD and the potential to exhibit bioorthogonality in vivo. Engineered strains, including BW14329-YX00, BW14329-YX01, BW14329-YX09, BW14329-YX10, and BW14329-YX11 (Table S1), were then constructed by transferring plasmids expressing no Pdh, wild-type (WT) Pdh, Pdh_I151R, NCD-dependent Pdh*, and Pdh_I151R/P176E, respectively, into the host strain BW14329. As expected, all engineered strains grew in MOPS minimal media with 2 mM phosphate. When phosphite was supplied instead as the sole phosphorus source, the engineered strains showed different growth behavior (Fig. 3B, Table 1). BW14329-YX01 and BW14329-YX09 grew well, at specific growth rates of 0.11 ± 0.00 h⁻¹ and 0.12 ± 0.00 h⁻¹, respectively. Due to its relatively low NAD activity, BW14329-YX11 grew to a certain extent, at a lower specific growth rate (0.07 ± 0.00 h⁻¹). Owing to the high cofactor specificity of Pdh*, BW14329-YX10 grew at the lowest rate (0.03 ± 0.00 h⁻¹) under 2 mM phosphite, and its growth was very weak compared to BW14329-YX01. Although the bioorthogonality of Pdh* in vivo was not strict, these results suggested that the reaction mediated by Pdh* could potentially be used for growth-based selection of NCDH-consuming reactions of interest.
Application of the selection method in the directed evolution of nicotinate-mononucleotide adenylyltransferase. NadD catalyzes the synthesis of nicotinic acid adenine dinucleotide using ATP and nicotinic acid mononucleotide as substrates. NCD synthetase (NcdS) was created by reprogramming the substrate-binding pockets of NadD and catalyzes the condensation of CTP and NMN to form NCD 14. As a proof of concept, we applied the selection method to the directed evolution of NcdS. Cell growth should depend on the biosynthesis of NCD in the presence of Pdh* and the NCD-cycle partner (Fig. 2B), and the growth rate of a strain should correlate positively with NcdS activity. Here, we randomly selected WT NadD and several variants obtained during the directed evolution, including 22C8 (D22R), 23F7 (V23Q), 109H9 (D109R), 1C1 (P22K/C132L/W176L), and 22D8 (D22K), whose NCD synthesis capacities increase in that order according to a previous report (Fig. 4A) 14. A high capacity for NCD synthesis is reflected in high activity toward CTP and low activity toward ATP. The NCD cycle module was assembled on the redesigned plasmid pUC18 (bla::cat), with Pdh* and ME* coexpression controlled by the lac and ara operons, respectively, to give plasmid pUC-chl-(P_araB)ME* + Pdh*. A plasmid expressing WT NadD or one of the variants was cotransformed with pUC-chl-(P_araB)ME* + Pdh* into BW14329. We hypothesized that colonies would form only when the NadD variant showed high NCD-synthesis activity. Indeed, colonies were observed on MOPS plates with 5 mM phosphite only when plasmids expressing 109H9, 1C1, or 22D8 were transformed, but not WT NadD, 22C8, or 23F7. Under the same conditions, the colony counts for 109H9, 1C1, and 22D8 were 5, 13, and 198, respectively. These results indicated that the number of colonies was positively correlated with the NCD-synthesis activity of the mutant, in agreement with the screening principle.
To further test our hypothesis, we examined the capacity of NcdS to regulate the growth of strains harboring the NCD-cycling pathway. We introduced NcdS-2, NcdS-3, and the V23Q/W176E mutant of NadD (3G8) 14 on plasmid pUC-18 with Pdh* coexpression, giving pUC-chl-NcdS2 + Pdh*, pUC-chl-NcdS3 + Pdh*, and pUC-chl-3G8 + Pdh*. According to previous research, NcdS-2 showed higher activity and preference for NCD biosynthesis than NcdS-3, whereas 3G8 had the lowest specificity 14. The reconstructed plasmids were separately cotransformed with pTrc99K-ME* into BW14329, giving strains BW-PB01, BW-PB03, and BW-PB05, respectively. The growth behavior of the engineered strains in MOPS media with phosphite as the sole phosphorus source was observed (Fig. 4B): higher NcdS activity afforded an increased growth rate. It should be noted that the expression levels and activities of Pdh* and ME* influence the efficiency of phosphate production, which may affect cell growth. In our results (Fig. 4C, Fig. S1), there was no significant difference in protein expression or crude enzyme activity toward the cofactors between the different engineered strains; the growth differences between strains were therefore mainly caused by NcdS. These results suggested that a strain assembled with the NCD-cycling pathway could potentially be used for the phosphite-based selection of NCD synthetases. Accordingly, if the NCD-cycling pathway were replaced by an mNADs-cycling pathway, this system could potentially be applied to the selection of mNADs synthetases.
Validation of the screening system with an evolved malic enzyme. Figure 2B indicates that cell growth is restored only when the NCD cycle is closed. Hence, we tested whether the NCD regeneration reaction could support growth in the presence of NCD and Pdh*. Plasmids pUC-NcdS-2 + Pdh* and pTrc99K-ME were cotransformed into BW14329 to give strain BW-PB07. Consistent with our expectations, when BW-PB01 was cultured in liquid minimal media with 0.4% glycerol and 5 mM phosphite, the NCDH-dependent ME* enabled growth, albeit with a long lag phase (Fig. 5). In contrast, BW-PB07, in which ME* was replaced by ME, grew at a lower rate under the same conditions. However, the growth difference disappeared when glucose was used instead of glycerol. As the oxidation state of the carbon source has a significant effect on the cellular NADH/NAD ratio, the intracellular NAD level was increased when glucose, rather than glycerol, was used as the carbon source 28,29. We speculate that the accumulated NAD may be consumed by the overexpressed Pdh* when NCD is insufficient, promoting phosphite metabolism and allowing cell growth. Overall, these results suggested that the selection platform could potentially be used for screening NCDH-consuming oxidoreductases.
Conclusions
In summary, we have established a phosphite-based in vivo selection platform for NCDH-dependent reactions and NCD synthesis. ME and the NCD-dependent mutant ME* were used to test the selection system, and ME* was easily identified by the higher cell growth it supported. We also successfully applied the selection system to identify the higher-activity variants generated in the directed evolution of NcdS. Although the throughput and false-positive rate were not formally determined, this study shows that NCDH-consuming enzymes can be identified with this in vivo selection process. It is not surprising that false hits would emerge from the phosphite-based screening platform; further selection, coupling this platform with a compatible colorimetric assay, is needed to exclude false hits from the candidates, after which the best-performing enzyme variant would be identified. Despite these limitations, we envision that once Pdh variants favoring other biomimetic or nonnatural cofactors are available, this selection platform may be readily redesigned to attain new enzyme variants with the anticipated cofactor preference.
Figure legend: The crude enzyme activities of Pdh*, ME*, and ME toward NAD and NCD. Experiments were conducted in triplicate, and data are presented as mean values. Pdh*-NAD, activity of Pdh* toward NAD; Pdh*-NCD, activity of Pdh* toward NCD; ME* or ME-NAD, activity of ME* or ME toward NAD; ME* or ME-NCD, activity of ME* or ME toward NCD. BW-PB01-1 and BW-PB01-2 had the same genotype, as did BW-PB03-1 and BW-PB03-2, and BW-PB05-1 and BW-PB05-2. BW-PB01, BW-PB03, and BW-PB05 expressed Pdh*, ME*, and different NcdS variants. BW-PB07 expressed Pdh*, ME, and NcdS2.
Strains and plasmids.
Bacterial strains and plasmids used in this study are listed in Tables S2 and S3, respectively. E. coli BL21 (DE3) was used for plasmid construction. E. coli BW14329, BW16787, BW22246 and BW16847 were obtained from the Coli Genetic Stock Center (CGSC). The construction of the plasmids is detailed in the "Genetic methods".
Media and growth conditions. Luria-Bertani (LB) broth was used for growth during cloning. MOPS minimal medium 30 supplemented with different phosphorus sources was used to determine the growth behavior of the strains. Unless otherwise specified, MOPS medium was supplemented with 0.4 g/L glucose, the corresponding antibiotics (50 μg/mL Kan, 100 μg/mL Amp, 30 μg/mL Chl), and inducers (0.1 mM IPTG, 1 mM L-ara). Seed cultures were grown in LB medium for protein induction at 25 °C and 200 rpm for 24 h, supplemented with 50 μg/mL Kan, 0.1 mM IPTG, and 1 mM L-ara. Cells were collected, washed three times, and resuspended in 1 mL of MOPS medium without a phosphorus source. A 3-μL volume of cell suspension was spotted on the corresponding gradient phosphite agar plate, and the plate was cultured at 25 °C for 72 h. To determine growth curves, the cell suspension was inoculated into 200 μL of MOPS medium at an initial OD600 of 0.2 and cultivated at 25 °C in a Bioscreen instrument, with the absorbance at 600 nm measured every 2 h. The specific growth rate and lag-phase data were estimated from the absorbance growth curves using the modified Gompertz model, as described previously 31. The method used to determine the activity of NadD variants by measuring transformation efficiency is detailed in the Supporting Information.
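As an illustration of this fitting step, a minimal sketch using the Zwietering form of the modified Gompertz model is shown below; the exact parameterization in reference 31 may differ, and the growth data here are simulated rather than taken from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, A, mu_m, lam):
    """Zwietering modified Gompertz model: A = asymptote,
    mu_m = maximum specific growth rate (1/h), lam = lag time (h)."""
    return A * np.exp(-np.exp(mu_m * np.e / A * (lam - t) + 1))

t = np.arange(0, 48, 2.0)  # hours, measured every 2 h as in the protocol
rng = np.random.default_rng(1)
y = gompertz(t, 2.0, 0.11, 6.0) + rng.normal(0, 0.02, t.size)  # simulated ln(OD/OD0)

popt, _ = curve_fit(gompertz, t, y, p0=[1.5, 0.05, 3.0])
A_fit, mu_m_fit, lam_fit = popt  # mu_m_fit is the specific growth rate
```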
Genetic methods. Plasmids were constructed by restriction-free cloning 32 using plasmid pUC18 as the initial template (Fig. S2). Genes encoding ME and ME* were amplified from plasmids pTrc99K-ME and pTrc99K-ME*, which were from our laboratory collection. Genes encoding Pdh and Pdh* were amplified from plasmids pK-Pdh and pK-Pdh*, respectively. Pdh, ME, and NcdS were expressed with a His×6 tag at the C-terminus.
Western blot assay. Western blotting was performed using His-tag antibodies to demonstrate the expression of Pdh*, ME/ME*, and NcdS in strains BW-PB01, BW-PB03, BW-PB05, and BW-PB07. About 5.0 × 10⁹ cells were collected by centrifugation at 10,000×g at 4 °C for 5 min and washed twice with 1 mL of 10 mM Tris-Cl buffer (pH 8.0). The cells were resuspended in 200 μL of 10 mM Tris-Cl buffer, disrupted by sonication, and the supernatant was obtained by centrifugation at 13,000×g for 10 min. Next, 5 μL of loading buffer was added to 15 μL of the supernatant and boiled for 10 min. Samples were subjected to SDS-PAGE and | 2022-07-23T06:17:25.769Z | 2022-07-21T00:00:00.000 | {
"year": 2022,
"sha1": "25af41d933f4dad0acb4933b7abf3921bc17fed4",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "cf6e3aad5a273ccd5922ea94d3508ec4cff517f2",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
268340690 | pes2o/s2orc | v3-fos-license | Long-term reduced functional capacity and quality of life in hospitalized COVID-19 patients
Background Persistent symptoms and exercise intolerance have been reported after COVID-19, even months after the acute disease, but the long-term impact on exercise capacity and health-related quality of life (HRQoL) is still unclear. Research question To assess the long-term functional capacity and HRQoL of patients hospitalized due to COVID-19. Study design and methods This is a prospective cohort study, conducted at two centers in Brazil, that included post-discharge COVID-19 patients and paired controls. The cohort was paired by age, sex, body mass index, and comorbidities, using propensity score matching at a 1:3 ratio. Patients were eligible if they had signs or symptoms suggestive of COVID-19 and pulmonary involvement on chest computed tomography. All patients underwent cardiopulmonary exercise testing (CPET) and completed an HRQoL questionnaire (SF-36) 6 months after COVID-19. The main outcome was the percentage of predicted peak oxygen consumption (ppVO2). Secondary outcomes included other CPET measures and HRQoL. Results The study sample comprised 47 post-discharge COVID-19 patients and 141 healthy controls. The mean age of the COVID-19 patients was 54 ± 14 years, 19 (40%) were female, and the mean body mass index was 31 kg/m2 (SD, 6). The median follow-up was 7 months (IQR, 6.5-8.0) after hospital discharge. PpVO2 in COVID-19 patients was lower than in controls (83% vs. 95%, p = 0.002), with an effect size of 0.38 (95% CI, 0.04-0.70). Mean peak VO2 (22 vs. 25 mL/kg/min, p = 0.04) and OUES (2,122 vs. 2,380, p = 0.027) were also reduced in the COVID-19 patients in comparison to controls. Dysfunctional breathing (DB) was present in 51%. HRQoL was significantly reduced in post-COVID patients and positively correlated with peak exercise capacity. Interpretation Hospitalized COVID-19 patients presented, 7 months after discharge, with a reduction in functional capacity and HRQoL when compared to historical controls. HRQoL was reduced and correlated with the reduced peak VO2 in our population.
Introduction
The COVID-19 pandemic, declared in March 2020, resulted in a massive number of cases in several countries (1). SARS-CoV-2 infection overloaded healthcare systems and was responsible for over 450 million cases worldwide (2). Viral pneumonia is the hallmark of hospitalized COVID-19 patients and, in severe forms, progresses to acute respiratory distress syndrome (ARDS), the most worrying presentation, with a high mortality rate and an association with long-term disabilities (3).
Experience from the previous severe acute respiratory syndrome (SARS-CoV-1) epidemic suggests that resting pulmonary function and exercise capacity could be profoundly impaired, either by the action of the virus or because of post-intensive care syndrome, but the long-term impact is unknown (4)(5)(6). Studies in patients who recovered from COVID-19 have reported a myriad of symptoms, including chest pain, fatigue, dyspnea, leg pain, and weakness (7,8). A case-control study conducted at 2-3 months from disease onset showed that a significant proportion of hospital-discharged patients reported symptoms such as breathlessness, fatigue, depression, and limited exercise capacity (9). Furthermore, cross-sectional studies performing cardiopulmonary exercise testing (CPET), the gold standard for functional capacity assessment, elucidated some of the pathophysiological mechanisms of exercise limitation (10,11). Studies conducted 3 months after discharge showed reduced functional capacity in 33 to 50% of patients post COVID-19 (12,13). However, these studies only evaluated short-term physical impairment after COVID-19 infection, with uncertainty about causality, the mechanisms of limitation, and the persistence of this limitation. Possible underlying mechanisms for these persistent complaints include cardiac, pulmonary, and peripheral (oxygen extraction) limitations, alone or in combination.
Health-related quality of life (HRQoL) has been shown to be impaired in patients after COVID-19 (14). Countless patients affected by COVID-19 are returning to their work activities, and the real burden of this disease is still being discovered. Therefore, the aim of this study was to assess long-term functional capacity and HRQoL among survivors of hospitalization due to COVID-19, comparing the results with those of historical controls matched by age, sex, body mass index, and comorbidities.
Methods
This is a prospective cohort study of COVID-19 patients who required hospitalization due to respiratory symptoms between June 2020 and December 2020, together with paired historical controls. Participants were recruited from a previous cohort in which adult patients (≥18 years) were eligible if admitted with signs or symptoms suggestive of COVID-19 (cough, fever, or sore throat) within 14 days of onset and hospitalized in the prior 2 days (15). All patients were hospitalized at a private hospital in Porto Alegre, southern Brazil. This institution is the reference hospital for the care of COVID-19 cases in Porto Alegre, RS, Brazil, with 372 ward beds and 113 ICU beds.
Between six and nine months after hospital discharge, patients with COVID-19 confirmed by RT-PCR and pulmonary involvement on chest computed tomography were contacted by telephone to undergo CPET and a clinical evaluation including an HRQoL questionnaire. A physician assessed the presence of persistent symptoms during the clinical evaluation. Exclusion criteria were inability to perform CPET due to musculoskeletal limitation, absence of radiologic pulmonary involvement, and patient refusal. The project was submitted to the local ethics committee and complied with both National Health Council Resolution 466/12 and the Declaration of Helsinki. All patients signed an informed consent form.
Data collection
All data were collected prospectively, including demographics, symptoms at admission, comorbidities, need for oxygen support, type of supplemental ventilatory support, need for intensive care, and length of stay. Oxygen support therapy was defined as the therapy with the highest oxygen concentration supply and invasiveness used during hospital admission. Patients were also classified according to the World Health Organization COVID-19 severity classification: mild (symptomatic patients meeting the case definition for COVID-19 without evidence of viral pneumonia or hypoxia); moderate (adults with clinical signs of pneumonia (fever, cough, dyspnea, fast breathing) but no signs of severe pneumonia, including SpO2 ≥ 90% on room air); severe (adults with clinical signs of pneumonia plus one of the following: respiratory rate > 30 breaths/min; severe respiratory distress; or SpO2 < 90% on room air); and critical (patients with acute respiratory distress syndrome (ARDS), sepsis, or septic shock) (16). At the follow-up visit, patients were interviewed to assess persistent symptoms, medications in use, current exercise activity, and other clinically relevant information.
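As a rough illustration, the sketch below encodes the WHO severity rule just described as a small function. The input fields and their encoding are hypothetical simplifications of the clinical assessment; only the thresholds quoted above are taken from the classification (16).

```python
# Minimal sketch of the WHO COVID-19 severity classification described above.
# Field names and the boolean encoding of clinical findings are assumptions.

def classify_severity(has_pneumonia, resp_rate, spo2_room_air,
                      severe_distress, has_ards_or_sepsis):
    """Return 'mild', 'moderate', 'severe', or 'critical'."""
    if has_ards_or_sepsis:                 # ARDS, sepsis, or septic shock
        return "critical"
    if has_pneumonia and (resp_rate > 30 or severe_distress or spo2_room_air < 90):
        return "severe"
    if has_pneumonia:                      # pneumonia without severity signs (SpO2 >= 90)
        return "moderate"
    return "mild"                          # symptomatic, no pneumonia or hypoxia

print(classify_severity(True, 34, 88, False, False))  # -> 'severe'
```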
Cardiopulmonary exercise testing
CPET was performed on a treadmill (General Electric T-2100, GE Healthcare, United States) with breath-by-breath gas analysis (Metalyzer 3B, Cortex, Leipzig, Germany), beginning in January 2021. Peak VE was also compared as a percentage of the maximal predicted value using a validated equation (18). For the percentage of predicted peak VO2 (ppVO2), both the Wasserman and Hansen algorithm and the FRIEND equation were used (19). Dysfunctional breathing (DB) was defined by pattern recognition as described in previous studies (20,21). For this classification, we considered the graphs of minute ventilation (VE) versus time, the VE/VCO2 slope, respiratory rate (breaths per minute), and tidal volume (mL) vs. VE (L/min). CPET and spirometry were performed following current guidelines for exercise testing (22).
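For orientation, a minimal sketch of the ppVO2 calculation follows. The FRIEND treadmill reference equation is written with its commonly reported coefficients; these coefficients are an assumption and should be verified against reference 19 before any real use.

```python
# Sketch of the percent-predicted peak VO2 (ppVO2) computation.
# FRIEND treadmill equation coefficients as commonly reported (assumption):
# VO2 in mL/kg/min; sex coded 0 = male, 1 = female; weight in pounds.

def friend_predicted_vo2(age_yr, female, weight_kg):
    weight_lb = weight_kg * 2.20462
    return 79.9 - 0.39 * age_yr - 13.7 * (1 if female else 0) - 0.127 * weight_lb

def pp_vo2(measured_vo2, predicted_vo2):
    """Percent-predicted peak VO2 (both arguments in mL/kg/min)."""
    return 100.0 * measured_vo2 / predicted_vo2

# Example: measured peak VO2 of 24 mL/kg/min in a 55-year-old, 70 kg woman
print(round(pp_vo2(24.0, friend_predicted_vo2(55, True, 70.0)), 1))
```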
Quality of life assessment
The Short Form 36 (SF-36) physical and mental health questionnaire was completed by all post-COVID-19 patients. The SF-36 addresses HRQoL in eight domains (general health, physical functioning, physical role function, bodily pain, vitality, emotional role function, mental health, and social functioning) that are summarized in two dimensions: physical and mental. Scores range from worst to best (0-100). The scores of the eight scales were calculated and computed. For the construction of summary measures, scales were standardized using a Z-score transformation, providing physical and mental composite scores (PCS and MCS). We used national normative data both for the z-score calculations and for comparison with our sample (23).
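A minimal sketch of this composite construction is given below, assuming an equal-weight average of z-standardized scales; the study's exact aggregation scheme and the normative means/SDs (reference 23) are not reproduced here, and the values shown are placeholders.

```python
# Sketch of the SF-36 composite-score construction described above:
# each scale is z-standardized against normative data, then averaged.
import numpy as np

def composite_score(scale_scores, norm_means, norm_sds, scale_names):
    """Average of z-standardized SF-36 scales -> one composite (PCS or MCS)."""
    z = [(scale_scores[n] - norm_means[n]) / norm_sds[n] for n in scale_names]
    return float(np.mean(z))

# Hypothetical normative values for two physical scales (illustration only)
norms_mean = {"physical_functioning": 80.0, "bodily_pain": 70.0}
norms_sd = {"physical_functioning": 20.0, "bodily_pain": 25.0}
patient = {"physical_functioning": 55.0, "bodily_pain": 45.0}
print(composite_score(patient, norms_mean, norms_sd, list(patient)))  # -> -1.125
```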
Selection of healthy controls and pairing
Control subjects were selected from a CPET database of 4,957 test subjects without diagnosed cardiovascular or pulmonary disease, evaluated at an experienced laboratory in the Brazilian Midwest region from 2011 to 2020. These CPETs were mainly performed for cardiorespiratory fitness assessment and exercise prescription. Test subjects who did not fulfill the ventilatory maximality criterion (RER ≥ 1.05) were excluded before pairing. COVID-19 patients were matched with controls at a 1:3 ratio for age, sex, BMI, hypertension, and diabetes. A nearest-neighbor matching method was applied with a caliper of 0.2 without replacement. After matching, the included variables were compared between groups to confirm that there were no significant differences.
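The sketch below mirrors the 1:3, caliper-limited, no-replacement structure of this matching. The distance metric (Euclidean on standardized covariates) and the scale of the caliper are assumptions; matching procedures often define the caliper on a propensity-score scale instead.

```python
# Illustrative greedy 1:3 nearest-neighbor matching with a caliper,
# without replacement, as described in the text.
import numpy as np

def match_controls(cases, controls, ratio=3, caliper=0.2):
    """cases, controls: (n, k) arrays of standardized covariates.
    Returns, per case, the indices of up to `ratio` controls within the caliper."""
    available = set(range(len(controls)))
    matched = []
    for case in cases:
        dist = np.linalg.norm(controls - case, axis=1)  # Euclidean distance
        picked = []
        for j in np.argsort(dist):
            if len(picked) == ratio:
                break
            if int(j) in available and dist[j] <= caliper:
                available.remove(int(j))                # without replacement
                picked.append(int(j))
        matched.append(picked)
    return matched
```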
Statistical analysis
Continuous data were tested for normality with the Shapiro-Wilk test and presented as mean (standard deviation) or median (interquartile range). Categorical data are presented as absolute counts and relative frequencies. Comparisons between COVID-19 patients and matched controls were performed with the independent-samples Student's t-test and the chi-square test. The effect size was calculated by dividing the mean difference between groups by the standard deviation of the population. Spearman's rank correlation coefficient was used to test the association of HRQoL and CPET data. Non-linear regression with curve fitting was used to examine the relationship between peak VO2 and the PCS of HRQoL. We used a generalized linear model to estimate the association of COVID-19 infection, in comparison to healthy controls, with the ppVO2. An adjusted model including age, sex, height, and weight was also performed. This study used a convenience cohort of patients. We performed a post-hoc power calculation for the observed differences in ppVO2 between COVID-19 patients and healthy controls, resulting in a power of 89.94% for an alpha value of 5%. Significance was accepted at p < 0.05 for all tests. Data were analyzed in SPSS, Version 25.0 for Windows (SPSS Inc., Chicago, IL, United States) and R 4.1.1 statistical software (R: The R Project for Statistical Computing, https://www.r-project.org).
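For illustration, the sketch below shows one conventional way to compute such an effect size and a post-hoc power value for a two-sample comparison with a 3:1 allocation. The pooled-SD definition of the effect size and the effect-size value in the example are assumptions, not the study's actual numbers.

```python
# Sketch of an effect-size and post-hoc power computation for a two-sample
# comparison with a 1:3 allocation, as described above.
import numpy as np
from statsmodels.stats.power import TTestIndPower

def cohens_d(a, b):
    """Mean difference divided by the pooled standard deviation."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    pooled_var = (((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                  / (len(a) + len(b) - 2))
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Post-hoc power for 47 patients vs. 3:1 matched controls at alpha = 0.05;
# the effect size of 0.55 is an arbitrary illustrative value.
print(round(TTestIndPower().power(effect_size=0.55, nobs1=47,
                                  alpha=0.05, ratio=3.0), 3))
```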
Predictors of decreased predicted peak VO2
We subsequently performed an analysis to evaluate the impact of COVID-19 on the observed ppVO2 (by the Wasserman and Hansen algorithm) in comparison to matched controls (Table 3). COVID-19 patients had a reduced ppVO2, with an unadjusted odds ratio (OR) of 0.89 (95% CI, 0.82-0.95; p = 0.002) and an adjusted OR of 0.88 (95% CI, 0.82-0.95; p = 0.002). We then sought to evaluate which characteristics were associated with the ppVO2 in hospitalized patients.
FIGURE Study design. We screened 445 hospitalized patients diagnosed with COVID-19 infection, selected as eligible when they had a positive polymerase chain reaction test and signs of lung involvement on chest computed tomography imaging. Of the 110 eligible patients, 47 accepted the invitation to perform a cardiopulmonary exercise test study 6 months after hospital discharge.
Discussion
Our study shows that hospitalized COVID-19 patients, even more than 6 months post-discharge, can still demonstrate reduced functional capacity and HRQoL compared with matched controls. Several CPET prognostic markers and physical and mental aspects of HRQoL were also significantly reduced 6 months after hospital discharge in COVID-19 patients, demonstrating the long-term impact of the disease. Moreover, more than half of the patients had persistent symptoms at the 6-month follow-up, increasing the burden of disease.
Our results are consistent with those of previous studies evaluating patients in the short term after COVID-19 infection (12,13). Skjorten and colleagues, using a treadmill, found one-third of patients with a ppVO2 of less than 80%; additionally, 15% of these patients showed reduced ventilatory efficiency (12). Clavario et al. reported one-third of patients with a reduced peak VO2 3 months post-discharge (13). Functional impairment after COVID-19 infection remains a major concern. We demonstrated that 6 months after discharge, COVID-19 patients had a reduction in ppVO2 and peak VO2 when compared with matched controls. The observed higher peak VO2 in males was not confirmed by the ppVO2, suggesting an absence of sex-related post-COVID-19 hospitalization functional impairment. Interestingly, we did not find any exercise limitation due to pulmonary gas exchange or ventilatory mechanics. In keeping with previous reports, cardiocirculatory limitation was the predominant deficit encountered in our study. A recent meta-analysis explored the utility of CPET to evaluate long COVID-19 symptoms in adults, showing that exercise capacity was reduced in these patients and that CPET may provide insight into the mechanisms of this impairment (26).
Several patients presented after COVID-19 with a rapid and irregular breathing pattern consistent with DB, which is sometimes characterized by rapid shallow breaths or other erratic ventilatory patterns (20,21). It was associated with persistent symptoms such as dyspnea and fatigue, and with a reduced ppVO2 as well. We found a prevalence of DB similar to that of other studies, which also showed a positive correlation of this ventilatory abnormality with symptoms (20,27). Nevertheless, identification of DB is subjective and requires pattern recognition, without any strict criteria. The development of quantitative methods would help to diagnose this entity.
Notably, the requirement for bilevel support, mechanical ventilation, ICU admission, hospital length of stay, and COVID-19 severity were all associated with a reduced ppVO2. The high number of COVID-19-infected patients will certainly increase the demand for dyspnea evaluation and referrals for rehabilitation in the near future. We should be aware that symptoms persist even 6 months after hospital discharge in COVID-19 patients. A preemptive approach to rehabilitation could be beneficial, especially in those more likely to be impacted, such as patients with severe initial COVID-19 presentations. Physical rehabilitation after discharge could improve these symptoms, but the efficacy of this intervention is yet to be established in this scenario (27).
Mental and physical aspects of HRQoL were significantly reduced in COVID-19 patients 6 months after discharge. A reduced mental aspect of HRQoL is consistent with the findings of sleep disturbances, depression, anxiety, and cognitive impairment reported in a systematic review (28). Of note, the comparison of HRQoL scores was adjusted for age and sex according to national normative data, which strengthens the evidence for this impairment compared with the general population. Both peak VO2 and ppVO2 were positively correlated with several aspects of HRQoL, not only physical but also social and mental. This provides a better understanding of persistent impairment after moderate to severe COVID-19: there is a pathophysiological basis for these symptoms, associated with a documented reduction in exercise capacity.
Our study has several limitations. Although we used a 3:1 control ratio, our study cannot establish that the late exercise impairment observed in COVID-19 patients is related exclusively to this etiology. Comparing CPET parameters after hospital discharge with those of a population affected by another viral pneumonia could better clarify whether COVID-19 is responsible for these symptoms or whether they are merely due to the hospital stay. One of the variables most affected in post-COVID subjects is diffusion capacity, which was not measured in our study. Recruitment to the study is another limitation. The stigma related to COVID-19 infection and environmental safety concerns around a CPET study were barriers to patient recruitment. Although selection was not based on the presence of symptoms, patients more likely to present dyspnea or fatigue could be more prone to accepting the research invitation. Our inclusion criteria limited the results to hospitalized patients with pulmonary involvement, so caution should be taken in extrapolating these findings to less severe patients.
Conclusion
Hospitalized COVID-19 patients showed decreased exercise capacity 6 months after discharge, related mainly to cardiocirculatory impairment and peripheral muscle limitation. Dysfunctional breathing was common and associated with persistent symptoms. Both physical and mental quality-of-life domains were reduced in these patients. The requirement for a higher level of oxygen support, intensive care admission, longer hospital stay, and COVID-19 severity were the main predictors of reduced peak VO2. Our results highlight the health support required by these patients even more than 6 months after hospital discharge.
FIGURE 3 (A) Venn diagram illustrating the relationship between symptoms, reduced percent-predicted peak oxygen consumption, dysfunctional breathing, and normal evaluation in COVID-19 patients. (B) Evaluation of quality-of-life domains of the SF-36 between healthy controls and hospitalized COVID-19 patients six months after discharge. (C) Cubic regression between peak oxygen consumption during CPET and the physical component score of HRQoL in COVID-19 patients.
TABLE 1
Demographic and clinical characteristics of COVID-19 patients and healthy controls.
BMI, body mass index; CAD, coronary artery disease; CPAP, continuous positive airway pressure; ICU, intensive care unit; LOS, length of stay; US, ultra-sensitive; WHO, World Health Organization.
TABLE 2
Comparison of cardiopulmonary exercise testing between healthy controls and hospitalized COVID-19 patients. | 2024-03-12T16:10:15.002Z | 2024-03-06T00:00:00.000 | {
"year": 2024,
"sha1": "6072b4dace4d2f1ab1c384da32b160633ec0d945",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fmed.2023.1289454/pdf?isPublishedV2=False",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5dbca3faf174b562c18abe2914f3c966891b51cd",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
267455061 | pes2o/s2orc | v3-fos-license | The influence of different sources of anticipated instrumental support on depressive symptoms in older adults
Objective This study investigated how anticipated instrumental support sources and intergenerational support influence depressive symptoms in older Chinese adults. Methods We employed binary logistic regression on data from 7,117 adults aged ≥60 in the 2018 China Health and Retirement Longitudinal Study, controlling for gender, marital status, and self-rated health. Results 38.89% of respondents exhibited depressive symptoms. Anticipated support from spouse and children, spouse only, children only, or other sources showed 52, 25, 46, and 40% lower odds of depression, respectively, compared with no anticipated support. Those only providing financial support had 36% higher odds of depression than those without exchanges. However, those only receiving financial support, those only receiving instrumental support, those with mutual financial support, and those with mutual emotional support had 19, 14, 23, and 24% lower odds of depression, respectively. Conclusion Different anticipated instrumental support sources and intergenerational support influenced depression odds in older adults, suggesting potential benefits in promoting such support systems.
Introduction
Depression is a significant mental health concern that is prevalent among older populations worldwide. The World Health Organization has indicated that, globally, 15% of older individuals experience some form of mental illness, with depression notably prevalent (1). Particularly in low- and middle-income nations, the magnitude of this issue is pronounced. For instance, depression rates reach 34.4 and 36.9% among the older populations of India and Bangladesh, respectively (2,3), far higher than the world average. China also experiences a heightened prevalence, with studies noting that 20% of its older population grapples with depression (4). Distressingly, this rate escalates to 40.7% when considering older individuals in rural areas (5). The implications of depression in later life stages are profound: it can induce appetite and weight changes, disruptions in sleep patterns, and feelings of diminished self-worth or undue guilt (6). These manifestations can further compound the risks of conditions such as obesity and diabetes, and even severe outcomes such as suicide, disability, or mortality (7-10). Thus, addressing depression among older individuals is an imperative public health priority. As China's population ages, the demand for caregiving services for older adults also increases (11). However, caregiving by relatives is being weakened by factors such as changes in family structure, rapid urbanization, and increased labor mobility (12,13). Research has found that over 30% of older adults' caregiving needs are not being met, and 41.67% of older adults have anticipated instrumental needs (14). In addition, as people age, they may encounter various health challenges, such as frailty, urinary incontinence, and an increased risk of falling (15). In such circumstances, receiving timely instrumental support becomes crucial; without it, the health of older adults may be seriously threatened. Hence, the assurance of anticipating instrumental support, or "anticipated instrumental support," is significant for them.
Chinese families have always attached great importance to the relationship between upbringing and support, and the filial piety culture it embodies is an excellent culture that China has long inherited. This relationship mainly manifests as intergenerational support and anticipated instrumental support for older adults (16). Intergenerational support is considered a factor related to depressive symptoms in older adults, and many in-depth studies have examined different contents and directions of intergenerational support (17-20). These studies focus on the impact of actual support received or provided by older adults on their depression. Anticipated support, the subjective perception of older adults toward future events, may buffer the effect of external stress on mental health (21). Studies have found that anticipated support can bring a sense of security to older adults and is negatively associated with depression (22-24). Among the various aspects of anticipated support, older adults anticipated receiving instrumental support from their adult children (25). It can be seen that both intergenerational support and anticipated support may be important influencing factors for depression in older adults. However, limited research has simultaneously explored the impact of these two factors on depression in older adults. Therefore, this study aims to investigate the simultaneous impact of intergenerational support and anticipated instrumental support on depressive symptoms in older adults.
Research on the correlation between anticipated instrumental support and depression in older adults has predominantly focused on two areas. The first considers how anticipated support modulates depressive symptoms. For instance, studies leveraging data from the 2011 and 2013 waves of the China Health and Retirement Longitudinal Study (CHARLS) indicate that anticipated instrumental support is linked to a decreased depression risk among older adults (26). In contrast, other research has found that anticipated instrumental support could inadvertently harm certain older adults. For some older individuals, receiving such support signals a decrease in their physical function and self-care ability, which can induce feelings of inferiority and burden (27). Krause compared the implications of both received intergenerational support and anticipated support for depression, highlighting their distinct and diverse effects on the mental health of older individuals (21). Dong et al. found that the association between anticipated instrumental support and depressive symptoms is influenced by the balance between expected and received instrumental support. Older adults who received greater instrumental support than they expected were more likely to have a lower risk of depressive symptoms, while those whose instrumental support expectations exceeded actual receipt were more likely to have a higher risk of depressive symptoms (28).
The second focal area concerns the influence of various sources of anticipated support on depressive symptoms in older individuals. Cheng's (29) comparative study of Chinese and American older adults, using data from the 2010 and 2012 waves of the Health and Retirement Survey in the United States and the 2011 and 2013 waves of CHARLS in China, discovered that, in China, anticipated instrumental support from children was a more vital protective factor against depression than support from other sources. This contrast was not evident in the U.S. data. These disparities are likely rooted in cultural beliefs and systemic paradigms. Within the Chinese cultural framework, there is a deeply ingrained ethos of children serving as primary caregivers during their parents' later years, exemplifying the revered tenet of filial piety. Such caregiving, apart from satisfying emotional yearnings (30,31), addresses tangible needs, especially against the backdrop of China's limited institutional elder care and social welfare infrastructure (32,33). Given this context, the distinct sources of anticipated instrumental support in China must be dissected. Understanding their varied impact on depression can unveil the interplay between traditional family expectations, present-day social systems, and the well-being of older adults.
In summary, current studies have several limitations. Firstly, most studies tend to analyze the relationship of either intergenerational support or anticipated instrumental support with depressive symptoms in older adults separately, and few studies integrate intergenerational support and anticipated instrumental support to discern their collective implications for depression. Given that both domains distinctly influence mental well-being in older adults, an exclusive focus on either facet could inadvertently introduce analytical biases (26,34). Secondly, the research concentrating on older populations in China largely draws upon survey data from a decade past. This temporal distance is pertinent, considering the substantial shifts that have transpired in China's social welfare landscape, elder care paradigms, and familial configurations (35,36). Thus, the following question arises: how does the evolving socio-cultural milieu affect the interplay between anticipated instrumental support, particularly from varied sources, and depression in older adults? Thirdly, while children play a significant caregiving role, spouses are equally pivotal in older adults' lives (37). Hence, the influence of anticipated instrumental support from spouses on depression warrants deeper exploration. In addressing these gaps, this study harnesses the 2018 CHARLS dataset, encompassing both intergenerational support and anticipated instrumental support metrics, to analyze their effects on depression in older adults. It must be recognized that the nature of support (anticipated, provided, or received) and its source (spouses, children, or others) could have intertwined effects on depression. By concurrently assessing diverse types of received support and the different sources of anticipated instrumental support, we aim to unveil the intricate interplay of these variables. In doing so, we hope to offer a comprehensive view of their collective influence on the mental health of older adults, thereby laying the groundwork for future interventions targeting depression.
Data sources
This study utilized data from the 2018 wave of CHARLS, an expansive interdisciplinary survey project overseen by the National School of Development at Peking University. In 2018, CHARLS distributed questionnaires in 450 communities across 150 counties in 28 provinces, including autonomous regions and municipalities directly governed by the central government. The sampling procedure was rooted in a stratified random method. The survey encompassed 19,816 individuals aged 45 and above, of whom 10,997 were 60 or older. A total of 7,117 respondents aged 60 years and older with adult children and complete records (no missing data) were included in this study, after excluding those with "do not know" and "refused to answer" responses.
Dependent variable
The outcome of interest was the presence or absence of depressive symptoms in respondents. This was gauged using the short-form Center for Epidemiologic Studies Depression Scale (CES-D-10) featured in the CHARLS questionnaire, which has predictive accuracy comparable to the full-length 20-item CES-D (38,39). The CES-D-10 includes 10 items, each with four graded response options. These options are scored from 0 to 3, progressing from positive to negative sentiments. The total score is derived by summing the scores of all 10 items. Following an established cutoff of 10 for indicating depressive symptoms, respondents scoring 10 or above were classified as having depressive symptoms (coded as 1), while scores below 10 indicated an absence of depressive symptoms (coded as 0) (38).
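As a minimal sketch, this scoring rule can be written as follows; reverse-coding of the scale's positively worded items is assumed to have been applied upstream.

```python
# Sketch of the CES-D-10 scoring rule described above: ten items scored 0-3
# are summed, and a total of 10 or more is coded as having depressive symptoms.

def cesd10_depressed(items):
    """items: ten integers in 0..3 (already reverse-coded where needed).
    Returns 1 if depressive symptoms (total >= 10), else 0."""
    assert len(items) == 10 and all(0 <= v <= 3 for v in items)
    return int(sum(items) >= 10)

print(cesd10_depressed([1, 2, 1, 0, 2, 1, 1, 0, 1, 2]))  # total 11 -> 1
```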
Independent variables
Table 1 delineates the specific categories and distinctions within intergenerational support and anticipated instrumental support. Given the patterns of support provision and receipt observed among respondents, this study parsed intergenerational financial and instrumental support into four categories: non-exchange (respondents neither provided nor received any support over the past year); providing only (respondents solely provided support without receiving any in return during the preceding year); receiving only (respondents solely received support and did not offer any within the same timeframe); and mutual support (respondents both provided and received support during the past year). Considering the intrinsic reciprocal quality of emotional support, it was categorized simply as either non-exchange or mutual emotional support. The expectation of future instrumental support was bifurcated into two broad categories: those who did and those who did not anticipate receiving instrumental support. Within this context, the expected sources of instrumental support were grouped into four segments: spouse and children (respondents anticipate receiving support from both sources); spouse only (support is expected solely from the spouse); children only (support is anticipated exclusively from children); and others (respondents expect support from sources other than spouses or children).
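For clarity, the four-way coding of financial/instrumental support reduces to a simple rule over two indicators; the sketch below illustrates it with hypothetical variable names.

```python
# Sketch of the four-way intergenerational support coding described above,
# based on whether any support was provided and/or received in the past year.

def support_category(provided, received):
    if provided and received:
        return "mutual support"
    if provided:
        return "providing only"
    if received:
        return "receiving only"
    return "non-exchange"

print(support_category(provided=True, received=False))  # -> 'providing only'
```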
Control variables
Existing literature has emphasized that depression in older adults is modulated by many individual factors (40-42). To account for these multifaceted influences, this study incorporated several control variables, including gender, age, marital status, place of residence, education level, health insurance, and self-rated health. Marital status was segmented into two primary categories: (1) married, encompassing individuals who were married and cohabitating with their spouse and those married but residing separately for work-related reasons; (2) unmarried, a broader category including individuals who were separated (not cohabitating with their spouse), divorced, widowed, or never married. The specific classifications and corresponding details for each variable can be found in Table 2.
Demographic overview of the respondents
Of the 7,117 respondents considered in this study, 3,513 (49.36%) identified as male and 3,604 (50.64%) as female. A significant majority, 5,782 (81.24%), were in marital unions, and 1,335 (18.76%) were not. Regarding place of residence, 5,219 (73.33%) resided in rural settings, with the remaining 1,898 (26.67%) living in urban areas. Given China's varied gender and regional dynamics, which include distinct urban and rural contexts, our sample's balanced representation of these groups suggests that it effectively mirrors the wider older Chinese population. Notably, 38.89% of the older respondents exhibited depressive symptoms, and an encouraging 71.13% anticipated the availability of instrumental support. Detailed respondent demographics are available in Table 3.
Evaluating the influence of intergenerational support and anticipated instrumental support on depression among older adults
This study employed three binary logistic regression models, using the occurrence of depressive symptoms among respondents as the dependent variable. In model 1, control variables that exhibited significant correlations with the manifestation of depressive symptoms in the univariate analysis were included to assess their influence on depression. Building upon model 1, model 2 integrated the three domains of intergenerational financial, instrumental, and emotional support to examine their collective impact on depression in this demographic. Model 3, an extension of model 2, introduced the variable of anticipated instrumental support to examine its influence according to different anticipated sources. The Hosmer-Lemeshow test results for all models exceeded 0.05, indicating a satisfactory model fit (43). For a detailed breakdown, refer to Table 5 and Figure 1.
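A minimal sketch of fitting such a model and reporting odds ratios is shown below; the column names are hypothetical placeholders for the study's variables, and the exact covariate coding used by the authors may differ.

```python
# Illustrative binary logit with odds ratios and 95% CIs, as described above.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def fit_depression_model(df):
    """df: DataFrame with a 0/1 'depressed' outcome and categorical covariates
    (hypothetical names). Returns odds ratios with 95% confidence intervals."""
    fit = smf.logit(
        "depressed ~ C(gender) + C(marital_status) + C(education)"
        " + C(health_insurance) + C(self_rated_health) + C(anticipated_source)",
        data=df,
    ).fit(disp=False)
    ci = fit.conf_int()  # columns 0 and 1 hold the lower/upper bounds
    return pd.DataFrame({"OR": np.exp(fit.params),
                         "CI_low": np.exp(ci[0]),
                         "CI_high": np.exp(ci[1])})
```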
Model 1 evaluated the influence of the control variables on depressive symptoms among the older population. The findings were as follows: (1) Women had 1.71 times the odds of experiencing depressive symptoms compared with men. (2) Married individuals had 22% lower odds of manifesting depressive symptoms than their unmarried counterparts. (3) Compared with those without formal education, individuals with junior high school education and with high school or higher education had 19 and 32% reduced odds of showing depressive symptoms, respectively. (4) Older adults covered by urban employee health insurance had 35% reduced odds of experiencing depressive symptoms relative to those without any health insurance. (5) In contrast to individuals who rated their health as "very good," the odds of having depressive symptoms for those whose self-rated health was "good," "fair," "poor," and "very poor" were 1.47, 2.55, 6.76, and 14.06 times higher, respectively. (6) No significant differences in the odds of depressive symptoms were observed between urban and rural dwellers, between those with primary education and those without formal education, or between those without insurance and those with other health insurance types, such as urban residents' health insurance and new rural cooperative insurance.
Model 2 assessed the influence of intergenerational support on depressive symptoms among the older population. The findings were as follows: (1) Regarding intergenerational financial support, older adults who solely provided financial support had 1.36 times the odds of experiencing depressive symptoms compared with those not engaged in any financial exchange; those who only received financial support and those engaged in mutual financial support had 19 and 23% reduced odds of depressive symptoms, respectively, compared with those without any financial exchange. (2) For intergenerational instrumental support, older adults who solely received instrumental support had 23% reduced odds of experiencing depressive symptoms compared with those not involved in any instrumental exchange. No significant differences in the odds of depressive symptoms were observed between individuals only providing instrumental support, those with mutual instrumental support, and those without any instrumental exchange. (3) For intergenerational emotional support, older adults with mutual emotional support had 24% reduced odds of manifesting depressive symptoms compared with those without any emotional exchange.
Finally, model 3 explored the influence of various sources of anticipated instrumental support on depressive symptoms in the older population. The analysis revealed that anticipating support from each source acted as a protective factor against depression. Specifically, older adults who anticipated support from spouse and children, spouse only, children only, or other sources had 52, 25, 46, and 40% reduced odds of experiencing depressive symptoms, respectively, compared with those without such expectations.
Discussion
The role of anticipated instrumental support in protecting older adults from depression
After adjusting for the effects of intergenerational support and individual-related factors, older adults without an anticipation of receiving instrumental support displayed notably higher odds of developing depressive symptoms. The odds of depressive symptoms were reduced by 52, 25, 46, and 40% for those who anticipated instrumental support from spouse and children, spouse only, children only, and other sources, respectively, compared with their counterparts without such anticipation. These findings align with those of Cheng (29), who used data from the 2011 and 2013 waves of CHARLS.
The current study delved deeper into the effects of different sources of anticipated instrumental support (spouse and children, spouse only, children only, and other sources) on depressive symptoms in older adults. The results showed that all support sources significantly reduced the odds of depression, with varying magnitudes. We found that anticipated support from both spouse and children exhibited the most significant protective effect against depression. These outcomes emphasize the evolving role of anticipated instrumental support in safeguarding older adults against depression as societal dynamics shift. Such effects can be traced back to traditional Chinese cultural values and the prevalent social welfare system. In China, caregiving within the household remains paramount, with spouses and children being the primary caregivers for older adults (44). The anticipation of receiving instrumental support, particularly from family members, lessens anxieties among older adults.
However, recent demographic shifts have seen the family living structure in China transition from predominantly multigenerational households to more nuclear configurations. The number of older adults living solely with their spouses or alone has increased (45). Further, the anticipation of receiving instrumental support from children has been declining (29). This evolution underscores the burgeoning importance of anticipated support and its heightened influence on the mental well-being of older adults. Reliable and trusted caregiving anticipation, especially from close family members, can alleviate mental stressors; the absence of this anticipation can amplify anxieties and the potential for depression. Notably, the anticipation of support from other individuals appeared to reduce depressive symptoms more effectively than support anticipated from a spouse alone (40% vs. 25%). This might be attributed to the fact that spouses are typically of a similar age, which introduces uncertainty regarding their caregiving capabilities and potentially intensifies stress. Compared with spousal support, the anticipation of assistance from other sources, possibly younger or more capable individuals, offers more certain relief, reducing the caregiving burden on families and mitigating feelings of guilt in the older generation (27).
Impact of various patterns of intergenerational support on depressive symptoms in older adults
Financial support
Older adults who solely provided financial assistance to the younger generation were found to be more susceptible to depression. Given that a considerable number of respondents (73.33%) resided in rural areas with relatively low income levels, financially supporting their children could compromise their quality of life, which can, in turn, impact their overall well-being. This observation aligns with the findings of Zhang et al. (46). By contrast, older adults who were solely on the receiving end or engaged in mutual financial exchanges with their children had a reduced risk of depression. Such financial contributions from children can bolster a sense of accomplishment in older adults and alleviate financial stressors, further mitigating depressive tendencies. Importantly, mutual financial support demonstrated a more substantial protective effect against depression than merely receiving support (reducing the odds by 23% vs. 19%). Older adults providing support to children often reflects a sound financial foundation, while receiving support can be interpreted as a gesture of gratitude by the children. Such mutual financial support indicates solid intergenerational bonds and plays a pivotal role in warding off depression (47).
Instrumental support
Our findings indicated that older adults who exclusively received instrumental support showed reduced depressive symptoms. Such support not only eases the daily undertakings of older adults but also provides more opportunities for them to bond with their children, which helps alleviate their anxiety.
Emotional support
Mutual emotional exchange was identified as a protective factor against depression, a finding that resonates with Choi et al. (48). Strengthened emotional bonds are believed to intensify the connection between older adults and their offspring, minimizing the risk of depressive episodes (17).
Multiple factors influencing depressive symptoms in older adults
This study showed that various factors, such as gender, marital status, education level, health insurance, and self-rated health, significantly correlate with depressive symptoms among older adults. Within the sample of this study, 60.30% of older women exhibited depressive symptoms, a figure notably higher than that of their male counterparts. This suggests that older women face a greater susceptibility to depression than older men. Marital status emerged as another critical determinant. Relative to their married peers, those who remained unmarried often displayed signs indicative of diminished emotional comfort in their later years. Given the persistence of feelings such as loneliness, the onset of depressive symptoms becomes more probable (49). Further, a positive correlation was observed between superior self-rated health and reduced depressive symptoms. This self-assessment is not merely a reflection of one's physical state but also encompasses one's perception of one's health, which can, in turn, mirror one's depressive state (50,51).
This study comprehensively explored the effects of anticipated instrumental support, diverse types of intergenerational backing, and individual determinants on depression in older adults, offering a more nuanced understanding of the causative factors. However, certain limitations must be acknowledged. First, the use of cross-sectional data only permitted an analysis of the contemporary effects of the variables on depression, without considering dynamic shifts over time. Future research employing panel data could delve deeper into these evolving dynamics. Second, this study did not factor in potential overlaps and synergies between the diverse types of intergenerational support and anticipated instrumental support, as anticipated instrumental support may be influenced by intergenerational exchange, including its content, direction, and recency (25). Additionally, the impact of different levels of anticipated instrumental support and intergenerational support on depression in older adults was not considered in this study. Prior research has found that older adults with "high receipt and low expectations" had fewer depressive symptoms, while older adults with "high expectations and low receipt" had greater depressive symptoms, which might introduce some bias into the findings (28). In the future, structural equation models could be employed to explore the effects of different levels and sources of anticipated instrumental and intergenerational support on depressive symptoms in older adults.
Despite the limitations, our paper found that different sources of anticipated instrumental support and intergenerational support have significant effects on depressive symptoms in older adults, with important theoretical and policy implications. Firstly, our study offers new insights into research on depressive symptoms in older adults by jointly analyzing different sources of anticipated instrumental support and intergenerational support. Previous studies have predominantly focused on the availability of anticipated instrumental support, overlooking the distinct impact of its various sources on depressive symptoms in older adults. Secondly, our findings can serve as a valuable reference for state policymakers in formulating relevant policies. Different sources of anticipated instrumental support represent the assessments and expectations of older adults in hypothetical situations. These insights can help us better understand the social and familial needs of older adults, providing essential guidance for the development of future intervention measures. Against the backdrop of an increasingly aging population and a consistently low fertility rate, the healthcare and long-term care requirements of older adults have increased. Moreover, the absence of a robust social security system, coupled with the early stage of the long-term care insurance system so intimately associated with older adults, has intensified their dependence on anticipated instrumental support from their offspring. To attenuate depressive symptoms, especially those resulting from an anticipated lack of instrumental support, policymakers must holistically address care provisions for older adults and reduce the caregiving burden on families. Furthermore, fostering a more pronounced caregiving-responsibility ethos within families, particularly among spouses and offspring, can enhance the resilience of older adults against potential future challenges.
FIGURE 1 Forest diagram of factors influencing depressive symptoms in older adults.
TABLE 1
Questionnaire of intergenerational support and anticipated instrumental support.
(2) When [Child Name] is not living with you, how often do you contact [Child Name] by phone, message, WeChat, mail, or email? (≥1 time per week = receive; <1 time per week = no receive). Anticipated instrumental support: (1) Suppose that in the future you needed help with basic daily activities like eating or dressing. Do you have relatives or friends (besides your spouse/partner) who would be willing and able to help you over a long period of time?
TABLE 4
Correlation analysis of the influencing factors of depressive symptoms (n = 7,117, n/%).
TABLE 5
Regression analysis of the effects of anticipated instrumental support on depressive symptoms in older adults (n = 7,117). | 2024-02-06T18:17:49.361Z | 2024-01-30T00:00:00.000 | {
"year": 2024,
"sha1": "e377372643e8047333beadbe145d265dbd33ad56",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpubh.2024.1278901/pdf?isPublishedV2=False",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "63b04b31ac5a91136a4c31962e4bb593dcacaab4",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
249998146 | pes2o/s2orc | v3-fos-license | Study on Ultrasonic Nondestructive Testing of Self-Compacting Concrete under Uniaxial Compression Test
To study the variation of ultrasonic parameters of self-compacting concrete before and after damage under uniaxial compression, C30 self-compacting concrete blocks cured for 7 days and 28 days were subjected to ultrasonic nondestructive testing, and the variation of the sound time, amplitude, and sound velocity before and after damage was analyzed with emphasis. Concrete acoustic detection software was introduced to judge and analyze the abnormal values of the parameters of each measuring point, and the defect distribution map of each test block was obtained. The results showed that, for the self-compacting concrete test blocks cured for 7 days and 28 days, the difference between the average sound time values of the measuring points before and after failure is small, with the average value before failure less than that after failure; the average amplitude after failure is smaller than that before failure, with the amplitude of some measuring points markedly reduced; and the average sound velocity after failure is less than that before failure, indicating that internal defects appear and the structure is no longer dense. This study provides a theoretical basis for the application of ultrasonic detection technology in the field of self-compacting concrete and also provides a practical basis for the stability monitoring and failure warning of self-compacting concrete.
Introduction
Self-compacting concrete (SCC) is a new composite material developed from ordinary concrete; it compacts and forms under its own weight and has excellent construction performance [1-5]. In recent years, it has been widely used in many projects [6-9] and has become a new direction in the development of concrete materials. For example, a super-high-rise project adopted C60 SCC for pouring [10], Miyun Reservoir adopted C20 SCC for the second lining of a tunnel pipe [11], and SCC has been applied in high-speed railway construction [12] and in pouring prefabricated components [13].
Although SCC has good working performance, during its service the structure is affected by external loads and undergoes internal defect and cavity initiation and expansion, ultimately leading to structural damage, which seriously affects the bearing capacity and durability of the components [14-19]. Therefore, appropriate methods must be chosen to judge whether defects exist in SCC materials in use, to ensure the safety of concrete structures. Studying the change of ultrasonic parameters before and after SCC failure allows the quality defects of SCC components to be detected without damage [20-22], which is the trend in SCC engineering quality detection.
Sandrine Rakotonarivo [23] studied the influence of the concrete interfacial transition zone on ultrasonic parameters. In general, concrete construction has a long construction period and wide coverage and is easily affected by external factors [24,25]. Therefore, various defects are prone to occur in the actual pouring process, affecting the structural stability of concrete engineering. As a dynamic nondestructive testing method [26-30], ultrasonic testing technology has been widely used in the concrete field owing to its strong applicability, high detection sensitivity, and timely results [8]. In the detection of SCC works with defects [31-33], it can locate the defect position relatively accurately and further determine the causes of defects. The principle of using ultrasonic detection technology to inspect SCC in engineering [31] is that an ultrasonic pulse source emits a high-frequency elastic pulse wave into the SCC, and the fluctuation characteristics of the wave are then recorded. When there is a discontinuous interface in the concrete, a wave-impedance contrast appears at the defect surface; when the wave reaches this interface, transmission and reflection occur, and the received wave energy is reduced [34]. When the concrete has serious defects such as looseness, honeycombing, or holes, scattering and diffraction of the waves occur [35]. From the initial arrival time of the wave, the energy attenuation characteristics, the frequency change, and the degree of waveform distortion, the density parameters of the concrete can be obtained [36]. By processing and analyzing the ultrasonic characteristics at different sides and heights, the nature, size, and spatial relationship of concrete defects can be identified. Compared with traditional detection technology, ultrasonic detection technology therefore greatly improves the efficiency and accuracy of whole-project detection [37].
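The basic quantity underlying this method is the pulse velocity, the straight-ray path length divided by the first-arrival ("sound") time; a lower-than-expected velocity on a path suggests a defect along it. A minimal sketch:

```python
# Minimal sketch of the pulse-velocity relation underlying the method above.

def pulse_velocity(path_length_mm, sound_time_us):
    """Return wave velocity in m/s from path length (mm) and transit time (us)."""
    return (path_length_mm / 1000.0) / (sound_time_us / 1e6)

# Example: 150 mm cube, 37.5 us transit time -> 4000 m/s
print(pulse_velocity(150.0, 37.5))
```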
Scholars have studied SCC engineering using ultrasonic testing. Dong Junfeng [8] used ultrasonic waves to test the void defects of self-compacting concrete and proposed a more accurate method to judge the sound velocity of void defects. Hao Wenxiu [38] compared the influence of different concrete strengths on ultrasonic wave speed. Gu Xingyu [39] established a three-phase finite element model of asphalt concrete with reference to ultrasonic testing results. Huang Zhengyu [40] studied a simple and practical method for the qualitative analysis and imaging of concrete defects by ultrasonic detection. Qu Xiushu [41] proposed a superposition principle for the calculation of rectangular concrete-filled steel tube columns based on ultrasonic inspection data. Zhao Guoqi [42] used ultrasonic signals in a specific frequency domain to perform health inspections of key parts of large concrete structures. Zheng Dan [43] studied the influence of frequency and water content on ultrasonic testing of concrete. Zhu Ziqiang [44] studied the attenuation characteristics of ultrasonic waves in concrete. Chen Dongdong [45] studied the power spectrum characteristics of ultrasonic wave propagation in concrete. Lin Weizheng [46] measured the thickness of cement concrete with an ultrasonic detector. Qin Tienan [47] measured the thickness of a concrete coating by ultrasonic waves and evaluated the uncertainty. Petr Cikrle [48] introduced the application of ultrasonic inspection to concrete bridges and measured the size of holes in concrete panels.
In this paper, an ultrasonic testing analyzer was used to collect the ultrasonic parameters of SCC before and after a uniaxial compression test, to study the variation of these parameters before and after failure and to determine the distribution of internal defects in SCC under load. The evolution of internal defects in SCC after failure is obtained, providing a theoretical basis for the application of ultrasonic testing technology in the SCC field and a practical basis for stability monitoring and failure warning in SCC engineering.
Test Raw Materials and Mix Proportion
The mix design of SCC targets the performance indexes needed in practical engineering [49]. The SCC design codes [50,51] and the experiments show that the working performance of the prepared concrete mixture meets the specified working requirements and that the designed proportioning scheme is scientific and feasible, so subsequent tests can be carried out with the prepared concrete [52,53]. The materials used in the SCC mixtures are powders, natural aggregates, admixtures, and additives. The cement is ordinary Portland cement of grade PO 42.5; the initial setting time is greater than 150 min and the final setting time is less than 240 min. The natural aggregate comprises coarse and fine aggregate. The fine aggregate is river sand with a fineness modulus of 2.3-3.0; its main component is quartz sand, the particles are mostly round with a smooth appearance, the texture is hard and dense, the porosity is low, the bonding force with cement is poor, the moisture content is 0.01%, and the water absorption is low. The coarse aggregate is natural gravel with a particle size of 4.75-19 mm; its surface is rough and porous enough to absorb cement slurry, so the bonding force with cement is strong. The ratio of fine aggregate to coarse aggregate is 1.58. The admixture is a polycarboxylate water-reducing agent whose main component is a polycarboxylate polymer masterbatch, with a high water-reducing rate; it can improve the fluidity of SCC by 25-35%, has good plasticity, and is green and pollution-free. The performance indexes of the raw materials used in the test are shown in Table 1.
According to the test requirements, C30 SCC was used for testing, and the mix proportions in Table 2 were obtained through multiple trial-mix tests. The SCC was produced according to the designed mix proportion. The required raw materials were weighed according to the design of each mix proportion, and then crushed stone, fly ash, cement, and sand were added to a single-shaft forced concrete mixer in sequence (Figure 1a). Water and water reducer were then added evenly during the mixing process, which lasted 3-5 min. After mixing, the mixer was turned off and the concrete mixture was discharged into a container; the mixture is shown in Figure 1b. We tested the slump extension, expansion time T500, V-shaped funnel time, and H2/H1 value of the concrete mixture [54]. Each test is shown in Figure 2.
See Table 3 for the working performance values and test results required by the SCC design code "Technical Specification for Self-Compacting Concrete Application" JGJ/T 283-2012 and other requirements [55,56].
Specimen Design
Based on the design code of self-compacting concrete and the values obtained in the tests [51,55-57], two groups of SCC test blocks (groups A1 and A2) with dimensions of 150 mm × 150 mm × 150 mm were made, with three test blocks per group. After 24 h, the molds were removed and the blocks were placed in a standard curing room. Under the same curing conditions, the A1 test blocks were cured for 7 days and the A2 test blocks for 28 days.
The experiments show that the workability of the prepared concrete mixture meets the specified working requirements and that the designed mix proportion scheme is scientific and feasible. Subsequent tests were then carried out with the prepared SCC.
The Test Process
Similar to acoustic emission detection [57], ultrasonic testing was used to examine each test block after curing. An ultrasonic testing analyzer (the ZT801 geotechnical acoustic wave tester produced by Zhongtuo Technology (Beijing) Technology Co., Ltd.) was used to collect the ultrasonic parameters of test blocks A1 and A2 before and after the uniaxial compression test. Before the uniaxial compressive strength test of each self-compacting concrete block, we used the analyzer (as shown in Figure 3) to sample data from the intact test block, placing the transmitting transducer and the receiving transducer closely against each test point. For test accuracy, we reduced the friction between the transducers and the test surface to reduce energy loss, using a coupling agent to fit the transducers tightly onto the test points.
Then, the data of the relevant measuring points were collected. The sampling sequence followed the arrangement order of the measuring points, and the sampling method was the same for each point. If an error occurred in the sampling of a measuring point, that point was remeasured; the acoustic parameters of each measuring point were collected several times and the average of the collected data was taken. When the uniaxial compressive strength test of the self-compacting concrete was completed, the measuring points of each specimen were sampled after failure using the same procedure as before failure. The data sampled by the geotechnical acoustic wave analyzer include the sound velocity, sound time, and amplitude of each measuring point.
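The abstract mentions that acoustic detection software judges abnormal parameter values at each measuring point. The exact judgment rule of the software is not given in the text; the sketch below uses a one-sided mean-plus-λ·SD criterion, a common convention in concrete acoustic testing codes, with λ = 1.645 as an illustrative value only.

```python
# Sketch of an abnormal-point (suspected-defect) judgment over the 5 x 5 grid:
# a point is flagged if its sound time exceeds the grid mean by more than
# lam standard deviations (assumed rule; lam is illustrative).
import numpy as np

def flag_abnormal_points(sound_time_us, lam=1.645):
    """sound_time_us: 5 x 5 array of per-point averaged sound times (us).
    Returns a boolean 5 x 5 mask of suspected-defect (abnormally slow) points."""
    mean = sound_time_us.mean()
    sd = sound_time_us.std(ddof=1)
    return sound_time_us > mean + lam * sd
```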
For data collection, the pair (through-transmission) measuring method was used (as shown in Figure 4): the transmission distance is 150 mm, the spacing between measuring points is 0.25 m, and the sampling period is 0.4 µs. Each block face was meshed into a 5 × 5 grid, forming 25 measuring points; the opposite face was meshed identically at the same positions, forming 25 corresponding points. The arrangement of the measuring points is shown in Figure 5. A uniaxial compression test was carried out with the WHY-2000 pressure testing machine (shown in Figure 6, from the China University of Geosciences (Beijing)) at a loading rate of 20 mm/min. The sensor layout of the test block is shown in Figure 7.
Analysis of Sound Time of Test Block
Figure 8 shows the sound time value of each measuring point before and after the failure of each test block and the average sound time values before and after failure. As shown in Figure 8a, the average sound time of test block A1-1 over all measuring points before destruction is 4859.1 µs and the average after destruction is 5293.1 µs; the pre-damage average is 91.9% of the post-damage average, and the root-mean-square deviation of test block A1-1 before and after damage is 2748.4 µs. Among the 25 measuring points, 17 have a sound time value before destruction that is less than that after destruction; since the sound time value of most measuring points after destruction is greater than before destruction, the average sound time after the destruction of test block A1-1 is clearly greater than the average before destruction. As shown in Figure 8b, the average sound time of test block A1-2 before destruction is 4639.3 µs and the average after destruction is 5045.7 µs; the pre-damage average is 91.9% of the post-damage average, and the root-mean-square deviation of test block A1-2 before and after damage is 2354.7 µs. The sound time value before destruction is less than that after destruction at 14 measuring points, and the sound time value of most measuring points after destruction is greater than before destruction, so the average sound time after the destruction of block A1-2 is greater than the average before destruction. As shown in Figure 8c, the average sound time of test block A2-1 before destruction is 4459.9 µs and the average after destruction is 5099.3 µs; the pre-damage average is 87.5% of the post-damage average, and the root-mean-square deviation of test block A2-1 before and after damage is 2689.5 µs. There are 21 measuring points whose sound time value before destruction is less than that after destruction, so the average sound time after the destruction of test block A2-1 is greater than that before destruction.
As shown in Figure 8d, the average sound time of test block A2-2 over the measuring points is 4061.4 µs before destruction and 4618.2 µs after destruction; the pre-damage average is 87.9% of the post-damage average, and the root-mean-square deviation of test block A2-2 before and after the damage is 2698.1 µs. In this test block, the sound time after destruction is greater than that before destruction at 17 measuring points, so the average sound time after destruction is greater than the average before destruction.
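The per-block statistics used above (pre- and post-failure averages, their ratio, and the root-mean-square deviation over the 25 measuring points) reduce to simple array operations. The following is a minimal sketch in Python; the readings are synthetic stand-ins, not the measured data, and interpreting the root-mean-square deviation as the RMS of the point-wise before/after differences is our assumption.

```python
# Minimal sketch with hypothetical 5x5 sound-time readings (microseconds).
# The arrays below are synthetic stand-ins, not the paper's measured data.
import numpy as np

rng = np.random.default_rng(0)
pre = rng.normal(4859.1, 300.0, size=(5, 5))    # readings before failure
post = rng.normal(5293.1, 300.0, size=(5, 5))   # readings after failure

pre_mean, post_mean = pre.mean(), post.mean()
ratio = pre_mean / post_mean                    # e.g. the reported "91.9%"
# Assumed interpretation: RMS of the point-wise before/after differences.
rms_dev = np.sqrt(np.mean((post - pre) ** 2))
n_increased = int(np.sum(post > pre))           # points whose sound time grew

print(f"pre = {pre_mean:.1f} us, post = {post_mean:.1f} us, "
      f"ratio = {100 * ratio:.1f}%, RMS dev = {rms_dev:.1f} us, "
      f"{n_increased}/25 points increased")
```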
In summary, whether for the SCC test blocks cured for 7 days or those cured for 28 days, the average sound time over the measuring points before destruction is less than that after destruction, but the difference between the pre- and post-failure averages of each test block is small; this indicates that, although internal defects occur in the test block after destruction, the average sound time is not sensitive to them. Combining Figure 8 with the root-mean-square deviation analysis, the sound time of some individual measuring points is significantly larger after destruction than before. In theory, after the failure of the test block under the uniaxial compression test, defects and cracks appear in parts of the structure. When ultrasonic waves encounter these defects and cracks, scattering and reflection occur; the waves bypass the defects and cracks and deviate from the original propagation path, which lengthens the travel time.
Analysis of Amplitude of Test Block
Figure 9 shows the average amplitude of each measuring point before and after the failure of each test block. According to Figure 9a, the average amplitude over the measuring points of test block A1-1 is 29.74 dB before failure and 29.16 dB after failure, and the root-mean-square deviation of test block A1-1 before and after the damage is 3.51 dB. The amplitude of 15 measuring points is greater before failure than after failure, and at most points the post-failure amplitude is smaller, so the average amplitude of A1-1 after failure is less than that before failure. According to Figure 9b, the average amplitude of A1-2 is 29.96 dB before failure and 29.46 dB after failure, and the root-mean-square deviation of test block A1-2 before and after the damage is 4.34 dB. At nearly half of the measuring points the post-failure amplitude is smaller than that before failure, and the average amplitude of A1-2 after failure is smaller than that before failure. It can be seen from Figure 9c that the average amplitude over the measuring points of test block A2-1 is 30.13 dB before failure and 29.16 dB after failure, and the root-mean-square deviation of test block A2-1 before and after the damage is 4.06 dB. The amplitude of 14 measuring points is greater before failure than after failure, while at the remaining points it is smaller; the average amplitude of A2-1 after failure is smaller than the average amplitude before failure. From Figure 9d, the average amplitude over the measuring points of test block A2-2 is 29.69 dB before failure and 28.91 dB after failure, and the root-mean-square deviation of test block A2-2 before and after the damage is 4.44 dB. At most measuring points the amplitude after the failure of the test block is smaller than that before failure, so the average amplitude after failure is smaller than that before failure.
In summary, whether for the SCC test blocks cured for 7 days or those cured for 28 days, the average amplitude over the measuring points after failure is smaller than that before failure, but the difference between the pre- and post-failure averages of each test block is small. Combining Figure 9 with the root-mean-square deviation analysis, the amplitude of some measuring points is significantly smaller after failure than before. This is because defects and cracks in the structure cause scattering and reflection during ultrasonic wave propagation, so the ultrasonic wave attenuates obviously and the amplitude at some measuring points decreases.
Analysis of Sound Velocity of the Test Block
Figure 10 shows the average sound velocity of each measuring point before and after the failure of each test block. It can be seen from Figure 10a that the average sound velocity over the measuring points of test block A1-1 is 0.042 km/s before failure and 0.039 km/s after failure, and the root-mean-square deviation of test block A1-1 before and after the damage is 0.034 km/s. At more than half of the measuring points the sound velocity after failure is less than that before failure, and the average sound velocity of test block A1-1 before failure is greater than that after failure. It can be seen from Figure 10b that the average sound velocity over the measuring points of test block A1-2 is 0.039 km/s before failure and 0.036 km/s after failure, and the root-mean-square deviation of test block A1-2 before and after the damage is 0.023 km/s. The post-failure average is 92.3% of the pre-failure average; the sound velocity of 18 measuring points after failure is less than that before failure, and the average sound velocity of test block A1-2 before failure is greater than that after failure. It can be seen from Figure 10c that the average sound velocity over the measuring points of test block A2-1 is 0.042 km/s before failure and 0.039 km/s after failure, and the root-mean-square deviation of test block A2-1 before and after the damage is 0.025 km/s. The pre- and post-failure average sound velocities of this test block are the same as those of A1-1, and the average sound velocity of A2-1 before failure is greater than that after failure. According to Figure 10d, the sound velocity of 16 measuring points of A2-2 after failure is less than that before failure. The average sound velocity over the measuring points is 0.062 km/s before failure and 0.046 km/s after failure, the post-failure average being 74.2% of the pre-failure average, and the root-mean-square deviation of test block A2-2 before and after the damage is 0.069 km/s. The average sound velocity of test block A2-2 after failure is therefore smaller than the average before failure.
In summary, whether for the SCC test blocks cured for 7 days or those cured for 28 days, the average sound velocity over the measuring points after destruction is smaller than that before destruction. Combining Figure 10 with the root-mean-square deviation analysis, the sound velocity of some measuring points is significantly smaller after destruction than before. This is because the internal medium is relatively uniform before the failure of the SCC test block, so the ultrasonic wave propagates at a relatively high speed inside the block. After failure, however, defects appear in parts of the structure of the test block, making those parts non-compact; therefore, the sound velocity at some measuring points decreases significantly.
Analysis of Abnormal Values of Measuring Points
Through calculation with the concrete sound wave detection and analysis software, the sound velocity chromatograms and amplitude chromatograms before and after the destruction of each SCC test block are obtained, and the abnormal measuring points and abnormal values of the test blocks are identified. Figures 11 and 12 are the amplitude chromatograms before and after the destruction of the test blocks, and Figures 13 and 14 are the sound velocity chromatograms before and after the destruction. It can be seen from Figures 11 and 12 that the amplitude of some measuring points is abnormal after failure, while Figures 13 and 14 show no obvious abnormality in the sound velocity before and after the failure of the test blocks. From the analysis of the above data, after the failure of each self-compacting concrete test block under the uniaxial compression test, defects and cracks appear in parts of the structure, leaving some areas non-compact. When ultrasonic waves encounter defects and cracks, they are scattered and reflected and attenuate significantly; the waves bypass the defects and cracks and change their original propagation path. Therefore, after the failure of the test block, the defects of the internal structure cause the amplitude of some measuring points to be abnormal, while the sound velocity values are not abnormal. Figure 15 is the defect distribution diagram of the test blocks obtained from the calculation and analysis of the above abnormal measuring points. Before the failure of the test block, the sound velocity and amplitude are normal, while some acoustic parameter values are abnormal after the failure. The color-anomaly areas in the figure are the parts of the test block with abnormal amplitude and sound velocity values. The figure shows that the abnormal values of the blocks are mainly amplitude values, with no obvious abnormal sound velocity values; the abnormal-amplitude area of block A1-1 is concentrated between rows 1-2, that of block A1-2 between rows 3-5, that of block A2-1 only at the top and center of the block, and that of block A2-2 at the center and edge of the block.
Conclusions
Through the ultrasonic nondestructive testing method, we studied the failure process of SCC under the uniaxial compression test, and the following conclusions were obtained:
i. An ultrasonic testing analyzer was used to study the variation of the ultrasonic parameters of SCC before and after uniaxial compression failure; the evolution law of the internal defects of SCC after failure was obtained, which provides a theoretical basis for the application of ultrasonic testing technology to SCC.
ii. For SCC test blocks cured for 7 days and for 28 days, the sound time values before and after failure follow these rules: the average sound time of each measuring point before failure is smaller than that after failure, but the difference between the pre- and post-failure averages is small. Defects and cracks appeared in parts of the structure, the ultrasonic propagation path became longer than before failure, and the sound time of some measuring points was significantly larger than before failure.
iii. The amplitude before and after the failure of the test block follows these rules: the average value over the measuring points after failure is smaller than the average value before failure. Structural defects and cracks cause scattering and reflection during ultrasonic wave propagation, the ultrasonic wave attenuates obviously, and the amplitude of some measuring points is significantly smaller than before failure.
iv. The sound velocity values before and after the failure of the test block follow these rules: the average sound velocity of each measuring point after failure is smaller than that before failure. The test block is damaged and parts of the structure are defective, leaving parts of the block non-compact, so the sound velocity of some measuring points is significantly smaller than before the destruction.
v. During the SCC ultrasonic testing process, the ultrasonic velocity is affected by many factors. In subsequent testing, the influence of these factors must be reduced. | 2022-06-25T15:16:17.945Z | 2022-06-22T00:00:00.000 | {
"year": 2022,
"sha1": "19e056e3e064504062429c3ee60697e716653812",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1944/15/13/4412/pdf?version=1656301785",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "033a77ceec2be54035714bde9387d5691c2f8aab",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
230539082 | pes2o/s2orc | v3-fos-license | Diagnostic Value of Platelet Indices in Patients with Pulmonary Embolism
Pulmonary embolism is caused by a thrombus that blocks the pulmonary artery. The role of the platelet is mainly related to the formation of the thrombus. This study aimed to determine the diagnostic value of platelet indices in patients with pulmonary embolism. This study was retrospective observational research involving 55 patients with and without pulmonary embolism in the period from January 2014 to June 2019 at Dr. Wahidin Sudirohusodo Central Hospital, Makassar. The diagnosis of pulmonary embolism was based on CT angiography. The platelet indices Mean Platelet Volume (MPV), Platelet Distribution Width (PDW), and plateletcrit (Pct) were analyzed in the two groups. Thirty-one (56.3%) patients were diagnosed with pulmonary embolism. There was no significant difference in MPV or Pct values between the embolism and non-embolism groups (9.3±1.5 fL vs. 9.5±0.7 fL, p=0.49, and 0.2±0.1% vs. 0.2±0.1%, p=0.82, respectively). In contrast, there was a significant difference in PDW value between the two groups (13.2±4.9 fL vs. 9.9±1.1 fL, p=0.002). Receiver Operating Characteristic (ROC) analysis showed a PDW cut-off value of ≥ 10.5 fL with a sensitivity of 77.4%, a specificity of 75%, a Positive Predictive Value (PPV) of 80%, and a Negative Predictive Value (NPV) of 72%. The platelet index PDW showed good diagnostic value for pulmonary embolism with a cut-off value of ≥ 10.5 fL.
INTRODUCTION
Pulmonary embolism is the third most common cardiovascular emergency, following acute myocardial infarction and Cerebrovascular Accident (CVA); it increases mortality and morbidity and causes fatal cases if not immediately treated. A pulmonary embolism is an event of pulmonary tissue infarction due to partial or total blockage of the pulmonary artery caused by a detached thrombus. Approximately 90% of embolisms originate from deep vein thrombosis or pelvic vein thrombosis, which can migrate into the pulmonary circulation. Symptoms and signs depend on the occlusion in the pulmonary arteries, ranging from asymptomatic to life-threatening conditions such as hypotension, cardiogenic shock, and sudden cardiac arrest [1-3].
It is found that the annual incidence of pulmonary embolism resulting in death is 100,000-200,000 cases, while the mortality rate for undiagnosed patients is approximately 30%. If pulmonary embolism is diagnosed early and adequately treated, this mortality rate can fall below 10%. Pulmonary embolism is more common in males than females; the most common symptoms are shortness of breath (73%), which usually occurs within seconds, at rest or with activity, and pleuritic pain (44%), and only 24% of patients have tachycardia [4,5].
The interaction of platelets with the vessel wall and their role in thrombus formation are significant in the etiology and pathogenesis of vascular disease, including pulmonary embolism, and in the development of thrombosis induced by venostasis, hypercoagulability, and vessel wall trauma. Platelet activation plays a vital role in thrombosis, inflammatory processes, and cardiovascular disorders. Larger platelets contain dense granules and are metabolically and enzymatically more active than smaller platelets. Large platelets produce more prothrombotic substances such as thromboxane A2, serotonin, b-thrombomodulin, p-selectin, and glycoprotein IIIa, and proinflammatory mediators, including interleukin (IL)-1, IL-3, and IL-6. These substances stimulate megakaryocyte proliferation, which increases the number of platelets. Therefore, large platelets are more susceptible to adhesion and aggregation than normal and small platelets. The diagnosis of pulmonary embolism is more complicated than its treatment and prevention. Assessment of clinical features, laboratory tests, and additional tests such as chest X-ray, ventilation-perfusion scanning, magnetic resonance angiography, and Computed Tomography (CT) thoracic angiography with contrast as the gold standard for the diagnosis of pulmonary embolism is relatively expensive, not available 24 hours at the hospital, and limited to only a few hospitals [5-7]. Research that discusses the role of biomarkers such as D-dimer, Brain Natriuretic Peptide (BNP), and troponin to assist diagnosis and establish prognosis in pulmonary embolism patients has been widely discussed. However, these biomarkers cannot be used as a basis for diagnosis; therefore, it is necessary to seek other ideal, accurate, safe, widely available, and inexpensive laboratory parameters as an alternative [9,10].
Mean platelet volume, PDW, and Pct are platelet indices that are easily obtained from routine blood tests. Mean platelet volume is a measure of the mean platelet size in the blood and represents platelet activation. Platelet distribution width is the degree of heterogeneity in platelet size. Mean platelet volume, plateletcrit, and total platelet count can be used to indicate the number of platelets. There has been no study in Indonesia to determine the diagnostic value of the platelet index in pulmonary embolism patients. Therefore, based on this description, the authors aimed to use the platelet index in pulmonary embolism patients to determine its diagnostic value as a biomarker to help diagnose pulmonary embolism.
METHODS
This study was a retrospective observational study conducted using secondary data of patients suspected of pulmonary embolism, based on the medical records of Dr. Wahidin Sudirohusodo Hospital, Makassar, from January 2014 to June 2019.
The population in this study was patients with pulmonary embolism diagnosed by pulmonary specialists based on CT angiography. Data were obtained and selected from the patients' medical records. The samples of this study were all of the accessible population with complete data on the platelet index parameters MPV, PDW, and Pct. Exclusion criteria were patients with hematologic malignancies, chronic infections, liver and kidney disorders, and patients with incomplete data on the platelet index parameters (MPV, PDW, and Pct) or on the CT angiography findings in the medical records.
The platelet index test was carried out using a hematology analyzer. Research approval was obtained from the Health Research Ethics Committee, Faculty of Medicine, Hasanuddin University/Dr. Wahidin Sudirohusodo Central Hospital. For descriptive statistics, categorical variables are reported as numbers and percentages, while numerical variables are reported as mean±Standard Deviation (SD). An independent t-test was used to compare the two independent groups. The Area Under the ROC Curve (AUC) was used to determine the diagnostic role of the platelet index. A cut-off value was selected to assess sensitivity, specificity, PPV, and NPV. All results are reported with 95% confidence intervals (95% CI). A p-value < 0.05 was considered significant.
RESULTS AND DISCUSSIONS
A total of 55 patients suspected of pulmonary embolism with CT angiography results were involved in this study (31 patients in the embolic group and 24 patients in the non-embolic group). The mean age of the patients involved in this study was 53.1±17.5 years. The clinical characteristics of the patients are shown in Table 1.
There was no significant difference in MPV values between the two groups. Mean MPV values in the embolic and non-embolic groups were 9.3±1.5 fL and 9.5±0.7 fL, respectively (p=0.49). Similar results were also found for Pct, with means of 0.2±0.1% and 0.2±0.1% in the embolic and non-embolic groups, respectively (p=0.82). In contrast, the mean PDW value in the embolic group (13.2±4.9 fL) was significantly higher than in the non-embolic group (9.9±1.1 fL). Based on the ROC analysis, a cut-off value of ≥ 10.55 fL for PDW showed a sensitivity of 77.4%, a specificity of 75%, a PPV of 80.0%, an NPV of 72.0%, and an AUC of 0.840 (95% CI 0.736-0.944, p < 0.001) (Figure 1). Based on the cut-off value obtained from the Receiver Operating Characteristic (ROC) curve analysis, the PDW had better diagnostic value than the other platelet indices.
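For illustration, the group comparison and the diagnostic indices at the reported cut-off can be reproduced with a few lines of code. The sketch below draws synthetic PDW values from the reported group means and standard deviations (not the actual patient data), so its output only approximates the paper's figures.

```python
# Sketch with synthetic PDW values (fL) drawn from the reported mean +/- SD;
# these are not the actual patient measurements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
pdw_emb = rng.normal(13.2, 4.9, 31)   # embolic group (n = 31)
pdw_non = rng.normal(9.9, 1.1, 24)    # non-embolic group (n = 24)

# Independent t-test between the two groups
t_stat, p_value = stats.ttest_ind(pdw_emb, pdw_non)

# 2x2 diagnostic table at the reported cut-off
cutoff = 10.55
tp = np.sum(pdw_emb >= cutoff)   # true positives
fn = np.sum(pdw_emb < cutoff)    # false negatives
fp = np.sum(pdw_non >= cutoff)   # false positives
tn = np.sum(pdw_non < cutoff)    # true negatives

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)
npv = tn / (tn + fn)
print(f"p = {p_value:.3f}, Se = {sensitivity:.1%}, Sp = {specificity:.1%}, "
      f"PPV = {ppv:.1%}, NPV = {npv:.1%}")
```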
Circulating platelets, with their heterogeneity in size, density, and reactivity, have a significant role in the mechanism of thromboembolic disease. Changes in the platelet indices, primarily MPV and PDW, indicate platelet activation. Large platelets, measured as a high MPV, may represent the mean volume of younger and more reactive platelets that produce more thrombogenic factors. An increase in MPV indicates activation and hyperfunction of the platelets and is associated with a tendency toward the thrombotic process [11-13].
The study conducted by Moharamzadeh et al. on embolic and non-embolic pulmonary patients showed statistical significance of the PDW value with p=0.002. Based on the ROC curve, the cut-off value was 12.8 fL, with a sensitivity of 61% and a specificity of 71.64%. However, the diagnostic performance of the MPV value was low, with p=0.038, a cut-off value of 9 fL, a sensitivity of 35%, and a specificity of 89.5% [4]. This is in line with the results of this study, in which the PDW value was statistically significant (p=0.002) but MPV was not (p=0.49).
Platelet indices have been widely studied because platelet activation causes changes in platelet morphology. During platelet activation, the shape of the platelet changes from biconcave to spherical. Platelets release granules that stimulate the release of prothrombotic factors, a process characterized by pseudopods. Pseudopodia are protrusions on the surface of platelets that are formed during platelet activation. This transformation can affect the value of PDW, suggesting greater specificity, simplicity, and practicality of PDW compared to MPV as a marker for assessing platelet activation [11,12].
The tendency toward thrombosis in pulmonary embolism is the result of increased platelet activation and the consumption of platelets in thrombus formation, indicated by an increase in MPV and PDW. Recurrent thrombosis in pulmonary embolism can also be observed from the increase in MPV and PDW [9,10].
The study conducted by Vagdatli et al. reported that both MPV and PDW were increased in diseases associated with platelet activation, and it was emphasized that PDW is a more specific marker of platelet activation than MPV [11]. Research on the platelet index in patients with pulmonary embolism has been widely carried out abroad, with differing results. The platelet index is not specific and thus cannot be used as a gold standard. The platelet index is also an indicator of coronary artery disease, stroke, inflammatory disease, kidney and liver disease, type 2 diabetes mellitus, malignancy, and anti-platelet therapy [6,11]. Besides, aspects of the various detection technologies, such as the type and method of the hematology analyzer, the amount and type of anticoagulant, and the time interval between blood collection and sample processing, can produce irregularities in laboratory results [11].
CONCLUSIONS AND SUGGESTIONS
Mean platelet volume and PDW are platelet indices that increase during platelet activation. In this study, PDW had better diagnostic value than the other platelet indices and can be used as a marker in pulmonary embolism patients with a PDW cut-off of ≥ 10.55 fL. It is recommended to perform further research adding other variables, such as comorbidities or risk factors for pulmonary embolism.
| 2020-12-10T09:06:56.762Z | 2020-12-07T00:00:00.000 | {
"year": 2020,
"sha1": "54ea95170617df6a7435e1b025165414eb03248e",
"oa_license": "CCBYSA",
"oa_url": "https://www.indonesianjournalofclinicalpathology.org/index.php/patologi/article/download/1625/pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "5720a0208dd5061b79a5a2b3d3af7e4b3bd5fb03",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
212942284 | pes2o/s2orc | v3-fos-license | A survey of the consensus for multi-agent systems
Multi-agent systems (MASs) have developed into an emerging complex system science and have gradually infiltrated various fields of social life. The problem of consensus (i.e. all agents eventually reaching an agreement upon a common quantity of interest) is the basis of distributed coordinated control of MASs, and it has attracted tremendous attention from both theoretical and practical perspectives. This paper comprehensively reviews the state-of-the-art development in the consensus of MASs. Firstly, the basic framework and an overview of MASs and consensus are discussed. Secondly, the motivations, results and methods of several kinds of consensus problems are introduced, including consensus subject to communication constraints, leader-following consensus, group consensus, consensus based on trigger mechanisms, finite-time consensus, multi-consensus and multi-tracking. Finally, some challenging issues and development trends in the consensus of MASs are considered.
Introduction
In recent years, with the deepening of scientific research on biological behaviour, researchers have gained a more profound and intuitive understanding of the group coordination behaviours that are prevalent in biological populations in nature, such as the collaborative division of labour within ant colonies, the parading of fish schools, the formation flight of bird flocks and the cooperative hunting of herds. Extensive observation and research have shown that overall intelligent behaviour and actions can be achieved through local or regional communication and cooperation between individuals, although the ability of any individual in the group is quite limited. Without centralized control from the outside world and internal global information exchange, these groups can present complex overall behaviour, such as maintaining formation, escaping natural enemies, coordinating attacks and finding food, only through information exchange with surrounding individuals. Multi-agent systems (MASs) are derived from the exploration and research of biological behaviours in nature, and they are a refinement and development of the behaviour patterns of biological groups. Durfee, Lesser, and Corkill (1989) define a MAS as a loosely coupled structure composed of multiple agents, in which agents interact with each other to solve problems that a single agent cannot solve, due to a lack of ability, knowledge or resources, or could only solve inefficiently. The advantages of MASs over a single agent are (1) the ability to perform more complex and dangerous tasks; (2) high efficiency; (3) high fault tolerance and robustness; (4) low cost and ease of development; and so on. MASs can improve the quality and efficiency of solving complex problems through asynchronous parallel activities between agents. Their loosely coupled structure ensures the reusability and scalability of their components. Their data and resources are dispersed among the agents in the system environment, reflecting the distributed nature of the problems the system describes. Through the coordinated control and collaborative operation of the intelligent group, the effect of a MAS far exceeds the cumulative sum of its individual agents' performance. Therefore, MASs have developed into an emerging complex system science and gradually infiltrated various fields of social life (Jiang, Liu, & Zhang, 2018). The control methods for studying MASs usually include centralized and distributed ones. Although the centralized control method is easier to install and implement, when the number of subsystems is large, it requires a central station with sufficient resources to withstand a large communication and computational load; therefore, the reliability requirements on the central station are relatively high. This type of control is essentially a simple extension of control methods and strategies for the traditional single system. In contrast, the distributed control method does not rely on a central station but controls the system through a more complex distributed structure. Compared with centralized control technology, distributed processing technology has the advantages of high reliability, fast running speed and convenient operation. Research on distributed control of MASs originated from distributed computing, management science, statistical physics and other disciplines, and research in the field of control dates back to the literature (Tsitsiklis & Athans, 1984).
The consensus problem is the basis of distributed coordinated control of MASs. It has been widely used in cooperative control, formation control, sensor network design and the clustering of social insects (Xue, Liu, Gu, Li, & Guan, 2017). Therefore, the consensus problem has become a hot issue in the research of MASs. The theoretical study of the consensus problem can be roughly divided into three stages. The first stage is mainly based on the simulation of biological group mechanisms, using computers to simulate some consensus phenomena of natural groups. In 1987, Reynolds built a computer model according to the characteristics of birds, fish and other groups in nature and proposed the famous Boid model. In 1995, Vicsek, Czirók, Ben-Jacob, Cohen, and Shochet proposed a classical model describing the phase transition of self-driven particles from the perspective of statistical mechanics, based on the Boid model. The second stage is the initial stage of theoretical research. In 2003, Jadbabaie, Lin, and Morse gave a theoretical explanation for the consensus behaviour of the Vicsek model by applying graph theory and matrix theory, and analysed the effect of graph connectivity on consensus. In 2004, Olfati-Saber and Murray used the properties of the Laplacian matrix to study the consensus problem of first-order integrator MASs, and formalized the solvability and protocol concepts of the consensus problem. A theoretical framework for the consensus problem is proposed there, which reveals the relationship between the algebraic connectivity of the graph, the consensus convergence rate and the upper bound of the time-delay tolerance (Olfati-Saber & Murray, 2004). In 2005, Ren and Beard analysed the consensus problem of second-order integrator MASs and pointed out the importance of the communication topology containing a directed spanning tree for achieving asymptotic agreement. The introduction of the Laplacian matrix made a qualitative leap in the study of consensus problems from the simulation phase to the theoretical analysis phase. Since then, graph theory has become an important tool for the theoretical analysis of consensus problems, and the study of consensus issues has entered the third stage. In the third stage, the research focuses on the analysis of consensus models, the design of consensus protocols, convergence, equilibrium, and application prospects. Many scholars have applied different modelling methods and carried out in-depth research and expansion of consensus theory in different directions. Consensus research has developed rapidly and yielded fruitful results, and it has been widely applied to a variety of scientific and engineering problems, including the synchronization of coupled oscillators, formation control, swarm control, optimal cooperative control, clustering, sensor networks, etc. (Lin, Zhang, & Liu, 2018; Zhang, Hu, Liu, Yu, & Liu, 2019). This paper mainly introduces the development and research status of the following aspects: consensus subject to communication constraints, leader-following consensus, group consensus, consensus based on trigger mechanisms, finite-time consensus, multiple consensus and multiple tracking. A detailed analysis is made and insightful understanding is given with respect to recent results on the consensus of MASs reported in the literature.
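To make the protocol at the heart of the second stage concrete: for single-integrator agents the classical continuous-time consensus law is u_i = Σ_j a_ij (x_j − x_i), which in stacked form reads ẋ = −Lx with L the graph Laplacian (Olfati-Saber & Murray, 2004). The following minimal simulation sketch illustrates this; the 4-agent undirected ring, the initial states and the step size are our illustrative assumptions.

```python
# Minimal sketch of the first-order consensus protocol x_dot = -L x
# on an assumed 4-agent undirected ring (illustrative setup only).
import numpy as np

A = np.array([[0, 1, 0, 1],      # adjacency matrix of a 4-node ring
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A   # graph Laplacian

x = np.array([1.0, -2.0, 3.0, 0.5])   # arbitrary initial states
dt = 0.01
for _ in range(2000):                  # Euler integration of x_dot = -L x
    x = x - dt * (L @ x)

print(x)   # for an undirected graph, all states converge to the
           # initial average, here (1 - 2 + 3 + 0.5) / 4 = 0.625
```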
The remainder of this paper is organized as follows. Consensus subject to communication constraints, such as time delays, uncertain communication, saturation, quantization and perturbation, is described in Section 2. Sections 3-7 focus on reviewing the latest theoretical results, with their respective advantages and disadvantages, on leader-following consensus, group consensus, consensus based on trigger mechanisms, finite-time consensus, and multi-consensus and multi-tracking. Section 8 presents some challenging issues.
Consensus subjected to communication constraints
In order to achieve consensus and coordinated control of MASs, an important factor is the ability of agents to exchange information through networks. The intervention of the network not only fundamentally breaks through the limitations of traditional 'point-to-point' signal control and avoids laying dedicated lines between control nodes, reducing system wiring, but also has many other advantages, such as low cost, easy expansion, flexible structure, and easy system diagnosis and maintenance (Hu, Wang, Chen, & Alsaadi, 2016; Xia, Gao, Yan, & Fu, 2015). However, the network also causes some problems that differ from those of traditional control systems. Since there are a large number of information sources in the network, the communication channel is shared among the nodes when they transmit information. However, the network bandwidth is limited, and the data traffic in the network changes irregularly. When multiple nodes exchange data through the network, data collisions, multi-path transmission, connection interruptions, and network congestion often occur (Hu, Wang, Liu, & Gao, 2016). Therefore, time delays and packet dropouts inevitably occur, which affect the performance of the MASs and can even lead to instability (Savino, Souza, & Pimenta, 2018).
In recent years, research on the consensus of MASs with time delays has developed continuously. In Sun and Wang (2009), based on the tree transformation method, necessary and sufficient conditions for average consensus are established. For discrete-time MASs whose agent velocities lie in a non-convex set, Lin, Ren, and Gao (2017) design a distributed constrained protocol to study the consensus problem with bounded delays, using model transformation and boundedness analysis. Inspired by the predictive abilities of natural creatures, a small-world prediction protocol for the A/R and Vicsek models is designed in Zhang, Chen, and Stan (2011). And for linear dynamic networks without leaders, a distributed predictive control protocol is proposed, which shows that the prediction protocol can improve the convergence speed of consensus and reduce the sampling frequency. In Ferrari-Trecate, Galbusera, Marciandi, and Scattolini (2009), the consensus problem of MASs with saturated inputs is considered, and distributed predictive control mechanisms and pinning control are used to achieve consensus and improve performance.
Although the literature (Ferrari-Trecate et al., 2009; Zhang et al., 2011) introduces predictive control methods into MASs, the effects of time delays on the consensus of MASs are not considered there. First-order and second-order continuous-time MASs with the same constant time delay are considered in Fang, Wu, and Wei (2012) and Wu, Fang, and She (2012), where weighted average predictive control is introduced to simultaneously increase the upper bound on the maximum tolerable delay and the convergence speed. In Wang, Zuo, Lin, and Ding (2017), the zero-input solution is used as the predicted value of the agent's state during the delay period, and sufficient conditions for global consensus are given for Lipschitz nonlinear MASs with input delays, based on the Jordan form of the Laplacian matrix. At present, most results accept time delays passively; that is, outdated information is used directly to design a protocol (or algorithm). Obviously, outdated information cannot completely and truly reflect the current dynamics of the system (Hu, Chen, & Du, 2014). It is difficult to implement accurate and effective control of the system using protocols based on outdated data. Therefore, considering the transmission capability of information in the network environment, how to actively overcome the impact of time delays on consensus and performance indicators is a promising research topic. Tan et al. have introduced the networked predictive control scheme to actively compensate for communication delays in discrete-time MASs in Tan and Liu (2013), Tan, Liu, and Shi (2015), Tan, Liu, and Duan (2012), Tan, Yin, Liu, Huang, and Zhao (2018), and Li, Tan, and Liu (2016). The state consensus problem of discrete-time homogeneous MASs with time delays is studied under the conditions that the states of the agents are unmeasurable and the outputs of the agents are either fully or only partially measurable in Tan and Liu (2013) and Tan et al. (2015), among others. The output consensus problem of discrete-time heterogeneous MASs with delays is studied in Tan et al. (2018), among others.
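To illustrate why delay compensation matters, consider the discrete-time protocol x[k+1] = x[k] − εL x[k−d] with a uniform d-step communication delay. The sketch below (illustrative topology and gains assumed) shows the uncompensated baseline, which still converges here only because ε and d are small; the networked predictive schemes cited above would, roughly speaking, replace the outdated state x[k−d] with a model-based prediction of the current state.

```python
# Sketch: discrete-time consensus under a uniform d-step delay,
# x[k+1] = x[k] - eps * L @ x[k-d]  (assumed illustrative parameters).
import numpy as np

A = np.array([[0, 1, 1],         # complete graph on 3 agents
              [1, 0, 1],
              [1, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

eps, d = 0.1, 3                  # gain and delay (steps); eps*d kept small
hist = [np.array([2.0, -1.0, 0.5])] * (d + 1)   # constant initial history

for k in range(300):
    x, x_delayed = hist[-1], hist[0]
    hist.append(x - eps * (L @ x_delayed))      # protocol uses old states
    hist.pop(0)

print(hist[-1])   # converges to the initial average, (2 - 1 + 0.5)/3 = 0.5
```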
The theoretical design of protocols or algorithms cannot always be applied exactly to the actual plant, which greatly limits the further development of such protocols and their engineering application, because almost all physical systems are limited by the operating range of the actuator or by device wear (Hu, Wang, & Gao, 2018). That is, they are subject to actuator saturation constraints or input saturation constraints (Zhang, Li, & Zhao, 2017). So it is necessary to consider the working range of the actual system in the process of designing the protocol. A MAS subject to saturation constraints is essentially a nonlinear MAS. A distributed adaptive consensus control scheme is proposed for a class of nonlinear MASs with input saturation in kahkeshi Maryam and Maedeh (2019), based on the minimum learning parameter algorithm and the dynamic surface control method. The global consensus problem for discrete-time MASs with input saturation constraints and a fixed undirected topology is considered in Yang, Meng, and Johansson (2014). A double-integrator dynamics model with input saturation constraints is established, and a consensus control algorithm is designed, in Zhou and Yan (2014). A model based on a dead-zone operator is proposed to provide a smooth model of the saturation nonlinearity in Shahriari-kahkeshi and Taj (2019), and a consensus strategy is proposed based on the minimum learning parameter algorithm and the dynamic surface control method.
In fact, MASs are often affected by various complex environments, and local information exchange between agents may be disturbed by uncertainties (Hu, Zhang, Yu, Liu, & Chen, 2019; Jenabzadeh & Safarinejadian, 2019). Overcoming the impact of uncertain communication on consensus is of great significance (Hashemi, Askari, Ghaisari, & Kamali, 2017; Kaïs, Karim, & Tarak, 2017; Xiao & Mu, 2017). A robust feedback controller is designed to ensure the consensus of uncertain MASs with external disturbances in Ramya, Sakthivel, Ren, Lim, and Leelamani (2019), based on a disturbance suppression and Smith predictor scheme. The distributed consensus problem of MASs with parameter uncertainty is studied in Yang and Li (2019), and an adaptive updating law with time-varying parameters is designed. Wang et al. propose a metamorphic adaptive low-gain feedback approach to investigate the semi-global robust tracking consensus problem of uncertain MASs with input saturation in Wang, Chen, and Zhang (2019). Xu, Peng, and Guo (2018) investigate the consensus problem for a class of nonlinear MASs with stochastic uncertainties and disturbances; a novel impulsive control protocol is presented to reduce the control cost effectively. The H∞ PID feedback for an arbitrary-order delayed multi-agent system is investigated to improve the system performance, based on the extended Hermite-Biehler theorem, in Ou, Chen, Zhang, and Zhang (2014). The consensus problem of a class of MASs with uncertain topology and partially unknown control directions is studied in Chen, Li, Zhang, and Wei (2019). Under the assumptions that the uncertain topology is fuzzy jointly connected and that only a small number of followers can access the leader's information, some new control protocols are proposed to solve the consensus problem of first-order and second-order nonlinear MASs.
The exchange of information between agents is usually limited by the capacity of the communication channel. When the information to be transmitted exceeds the communication capacity, the performance of the system may degrade or the system may even become unstable. In order to deal with the constraints caused by limited communication bandwidth, the quantized information is often encoded at the transmitting end and correspondingly decoded at the receiving end, which introduces quantization errors and strong nonlinear factors (Hu, Wang, Liu, & Zhang, 2019; Hu, Wang, Shen, & Gao, 2013; Meng, Zhao, & Lin, 2013; Wang, Dong, & Wang, 2017). Under the quantization effect, all first-order nonlinear agents can reach consensus using an edge-based adaptive protocol in Li, Ho, and Li (2018). By constructing a novel dynamic quantizer, a distributed protocol via sampled and quantized data is designed to solve the consensus problem of continuous-time linear MASs in Ma, Ji, and Sun (2018). The distributed fixed-time quantized consensus problem of nonlinear MASs is considered in Zhang, Hu, and Huang (2019), based on impulsive control. A neuro-based robust adaptive consensus control scheme is proposed for a class of uncertain nonstrict-feedback MASs in the presence of input quantization and unmodelled dynamics in Qin, He, and Li (2019). A distributed dynamic output feedback protocol is proposed for MASs with structured uncertainty and external disturbances in Xue, Wu, and Yuan (2019), which utilizes not only the relative output information of neighbouring agents but also the relative state information of neighbours. The non-fragile consensus control problem is studied for a class of nonlinear MASs with uniform quantization and randomly occurring deception attacks in Wu, Hu, and Chen (2019).
Leader-following consensus
In recent years, the leader-following problem of MASs has also received extensive attention (Tan, Liu, & Duan, 2010). According to the properties of the leader, a leader-following consensus problem can be categorized into the real leader case and the virtual leader case.
A leader-following consensus protocol is adopted to solve the consensus problem of heterogeneous multi-agent systems with time-varying communication and input delays in Dai, Lin, and Liu (2014). The distributed tracking control problem for first-order agents with multiple dynamic leaders and directed Markovian switching topologies has been investigated in Li, Xie, and Zhang (2015). The leader-following consensus problem for second-order MASs is studied in Zhang and Duan (2018) and Zhu and Cheng (2010). Su (2015) presents a novel distributed internal model approach to further study the leader-following rendezvous problem for double-integrator MASs subject to both external disturbances and uncertainties, where both distributed full and partial state feedback control without velocity measurement have been investigated. Ding, Han, and Guo (2013) investigate network-based leader-following consensus for a distributed MAS. Liu and Huang (2018) further study the leader-following attitude consensus problem of multiple rigid body systems subject to a jointly connected switching communication network. Combining a feed-forward control method with an adaptive control approach, a new adaptive distributed controller is proposed for multiple uncertain Euler-Lagrange systems, which can adapt to arbitrary bounded non-uniform time-varying communication delays and directed switching communication networks, in Lu and Liu (2018). A control scheme based on a distributed robust adaptive neural network is designed to ensure that the output tracking errors between followers and the leader are semi-globally uniformly ultimately bounded in Shen and Shi (2015), avoiding the classical 'explosion of complexity' problem in a standard backstepping design. For multiple rigid spacecraft systems, whose attitude is represented by the unit quaternion, a nonlinear distributed observer is established to achieve leader-following consensus in Cai and Huang (2014). Lu, Chen, and Chen (2016) present two non-smooth leader-following formation protocols for non-identical Lipschitz nonlinear MASs. By introducing local estimators for the bounds of the reference trajectory and a filter, a new backstepping-based smooth distributed adaptive control protocol is proposed to achieve leader-following consensus control for high-order nonlinear MASs in Huang, Song, Wang, Wen, and Li (2017). In contrast to previous studies on leader-following consensus, the Caputo fractional MASs in Almeida, Girejko, Hristova, and Malinowska (2019) cover bounded and unbounded time-dependent Lipschitz coefficients. A constrained control protocol is designed for nonlinear MASs with input constraints in Deng, Sun, and Liu (2019). The exponential leader-following consensus problem is investigated for a class of nonlinear stochastic MASs with partial mixed impulses in Tang, Gao, Zhang, and Kurths (2015). The global leader-following consensus problem for MASs with bounded controls has been studied in Zhao and Lin (2016). Under a fixed directed graph, the leader-following output consensus problem is investigated for a class of nonlinear MASs in Hua, Li, and Guan (2019). For high-order stochastic nonlinear MASs, a dynamic gain in the controller is used to compensate for the time-varying coefficients of the nonlinear function in You, Hua, Yu, and Guan (2019).
A distributed adaptive state feedback control law is introduced to achieve leader-following consensus for a class of uncertain nonlinear MASs under jointly connected directed switching networks in Liu and Huang (2017).
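As a minimal numerical illustration of the simplest member of this family, consider first-order followers tracking a static leader x0 under the standard pinning protocol u_i = −Σ_j a_ij (x_i − x_j) − b_i (x_i − x0); the path topology, the pinning gain and all numerical values below are illustrative assumptions, not taken from any specific reference above.

```python
# Sketch of first-order leader-following consensus with a static leader.
import numpy as np

A = np.array([[0, 1, 0],         # follower communication graph (a path)
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
b = np.array([1.0, 0.0, 0.0])    # only the first follower is pinned to the leader
L = np.diag(A.sum(axis=1)) - A

x0 = 2.0                          # leader state (constant)
x = np.array([-1.0, 0.0, 3.0])    # follower initial states
dt = 0.01
for _ in range(5000):
    u = -(L @ x) - b * (x - x0)   # pinning-based protocol
    x = x + dt * u

print(x)   # all followers converge to the leader state 2.0, since the
           # graph is connected and at least one follower is pinned
```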
Group consensus
In many practical situations, a group of agents must be able to sense and respond to unexpected situations or changes while a cooperative task is being implemented. Besides, different agreements among agents may be caused by different task distributions in cooperative control. Therefore, it is an important issue to design appropriate protocols that make different subgroups of agents reach different consensus values. This problem is called the group consensus problem, which is better suited to dealing with such collaborative control problems (Xia, Huang, & Shao, 2010; Yu & Wang, 2009a). As one of the hot topics in the distributed control of MASs, the group consensus problem has broad applications in multi-robot manipulators, satellite clusters, vehicle formations and so on (Li, Duan, & Tan, 2011; Tan, Liu, & Duan, 2011).
Recently, a great deal of excellent research on group consensus has emerged. Miao and Ma (2015) investigate group consensus for first-order discrete-time or continuous-time MASs with nonlinear input constraints. Kim, Park, and Choi (2014) investigate the group average-consensus and group formation-consensus problems for first-order MASs by using average matrices. Liu and Zhou (2014) investigate the impulsive group consensus problems of second-order MASs under directed network topologies with acyclic partitions, and some criteria on the convergence of such algorithms are established. Gao, Hu, Shen, and Jiang (2019) investigate group consensus for leaderless MASs; when cyber-attacks are recoverable, sufficient conditions for the group consensus of MASs subjected to cyber-attacks are given. The leader-following group consensus problem of second-order MASs is discussed in Ma, Wang, and Miao (2014) and Shi, Cui, and Xie (2017). Ning and Lin (2015) introduce a clustering approach based on the group consensus of dynamic linear high-order MASs. Zhao and Park (2014) investigate the group consensus problem by model transformation for discrete-time MASs with a fixed topology and with stochastic switching topologies. The cluster consensus of heterogeneous MASs is studied in Chen, Wang, Zhang, and Lewis (2018) by using the output regulation theory and the small gain theory. Zheng and Wang (2015) consider the group consensus problem of heterogeneous MASs, in which a novel protocol is proposed, a state transformation method is used, and an equivalent system is obtained. Corresponding sufficient conditions are obtained to achieve group consensus of heterogeneous MASs with fixed and switching topologies in Wen, Huang, Wang, Chen, and Peng (2015). Hou, Xiang, and Ding (2019) consider the group consensus problem for nonlinear MASs, showing that consensus can be achieved in both discrete time and continuous time. A reverse group consensus problem for dynamic agents in a cooperation-competition network, which can be divided into two sub-networks, is investigated in Hu, Yu, Wen, Xuan, and Cao (2016); it is found that the reverse group consensus problem can be solved if the mirror graph is strongly connected. A distributed cooperative control of MASs is proposed for distributed generator clusters in multi-microgrids in Shen, Xu, and Yao (2018). The proposed control method presets the pinned consensus values for multiple microgrids considering global cooperation and realizes a pinning-based group consensus for distributed generators.
In addition, some interesting and excellent results have been achieved on the group consensus problem for MASs with time delays in recent years. Ma et al. (2014) mainly investigate second-order group consensus for MASs with time-varying delays based on using second-order neighbours' information. Li (2019) studies the reverse group consensus problem for second-order MASs with delayed nonlinear dynamics and intermittent communication in cooperation-competition networks. The group consensus problem of MASs with time delays is studied in Du, Wang, and Zhao (2015). The weighted group consensus problem of MASs is investigated in Du et al. (2015). A state-based predictive approach for group consensus controllers for MASs with time-varying delays is proposed in An, Liu, and Tan (2018), and criteria for group consensus are presented. He and Wang (2016) study the weighted group consensus of MASs with bipartite topologies by adjusting the proportion of the current states and the delayed states in the control algorithms, which is able to enlarge the upper bound on the maximum time delay for weighted group consensus. The group consensus problem of nonlinear MASs with delayed Lurie-type dynamics is investigated in Guo et al. (2015), and a pinning control scheme is designed under an undirected communication graph. Wen, Yu, Peng, and Wang (2016) investigate the dynamic group consensus problem of heterogeneous MASs with time delays, in which the agents' dynamics are modelled by single integrators and double integrators.
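As a minimal numerical illustration of the group consensus idea, the sketch below uses the common in-degree-balance construction: each agent's inter-group coupling weights sum to zero, so the two subgroups settle on two different consensus values despite exchanging information across the group boundary. The topology and all weights are illustrative assumptions.

```python
# Sketch: two groups {0, 1} and {2, 3}; each agent's inter-group weights
# sum to zero (in-degree balance), so each group keeps its own average.
import numpy as np

def rates(x):
    return np.array([
        (x[1] - x[0]) + 0.5 * x[2] - 0.5 * x[3],   # agent 0 (group 1)
        (x[0] - x[1]) - 0.5 * x[2] + 0.5 * x[3],   # agent 1 (group 1)
        (x[3] - x[2]) + 0.3 * x[0] - 0.3 * x[1],   # agent 2 (group 2)
        (x[2] - x[3]) - 0.3 * x[0] + 0.3 * x[1],   # agent 3 (group 2)
    ])

x = np.array([4.0, 0.0, -1.0, -3.0])
dt = 0.01
for _ in range(3000):       # Euler integration
    x = x + dt * rates(x)

print(x)   # group 1 -> 2.0 and group 2 -> -2.0: two distinct
           # consensus values (each group's initial average is preserved)
```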
Consensus based on trigger mechanism
In the theoretical study of MASs, it is usually assumed that there are abundant energy, excellent computing power, and real-time communication. However, in practical applications, the computing power and communication capability of a single agent depend on its embedded digital microprocessor, and its energy comes from an embedded battery. The resources of a MAS include the computing power, communication capability, and energy reserve of the agents. Excessive computation and communication will keep an agent busy, unable to respond to other work, or even unable to operate properly, which will affect the normal operation of the entire system. Moreover, the energy of an agent is limited, and excessive computation and communication consume a lot of energy. Studies have shown that wireless communication consumes most of the energy of a sensor (Nada, Bousbia-Salah, & Bettayeb, 2018). Exhaustion of energy will cause agents to fail, affect the performance of the MAS, and even cause the system to crash. In order to take advantage of the distribution and robustness of MASs, it is especially important to reduce communication and computation as much as possible. Therefore, when designing the control strategy, it is necessary to fully consider the utilization of the system's own energy and network resources, which makes the cooperative control design of MASs more challenging. How can the resource utilization of a MAS be reduced? The most direct method is to reduce the amount of information exchange between agents by designing a transmission strategy. It is well known that digital signal control methods can save more information exchange and computing resources than continuous signal control (Zou, Wang, & Zhou, 2017). Digital signal triggering methods usually include time triggering and event triggering (Hu, Wang, Alsaadi, & Hayat, 2017). The former mainly refers to the traditional sampling control, that is, the measurement and control updates of the system are periodic, and the control information remains unchanged during each period by means of a zero-order holder. The latter determines whether data is sent by judging a given event condition. The advantage of periodic sampling is that it is easier to implement in analysis and design. In fact, 'after a certain period of time' can also be regarded as an event, so time triggering can be viewed as a special case of event triggering. Through the design of the event and the trigger response, the event-triggered mechanism can save resources better than the time-triggered mechanism.
The time-triggered strategy usually refers to the method of sampled control, that is, information measurement and control task execution are performed periodically. The consensus problem under the sampled control framework is called sampled consensus. In the extreme case of an infinite sampling period, no information transfer between agents takes place. Therefore, how to select the sampling period to ensure consensus is the main research content of sampled consensus. At present, there is a large body of literature on the sampled consensus of MASs. Initially, sampled consensus studies were mainly for first-order integrator MASs (Xie, Liu, Wang, & Jia, 2009a). Two sampled consensus algorithms are proposed for second-order integrator MASs with directed topologies in Cao and Ren (2010). A distributed consensus protocol is designed based on the current and past position sampling information, and necessary and sufficient conditions are given to ensure the consensus of second-order MASs in Yu, Zheng, Chen, Ren, and Cao (2011). When the current position information is not available, only the sampled information of position and velocity is used to design the protocol, and necessary and sufficient conditions for sampled consensus are obtained in Yu, Zhou, Yu, Lü, and Lu (2013). A novel consensus protocol is proposed to achieve state consensus for any large sampling interval in Xiao and Chen (2012), where the sampling interval is required to have a lower bound when it is aperiodic. For second-order MASs with nonlinear dynamics and directed topologies, an algorithm for determining the maximum allowable sampling interval is given in Wen, Duan, Yu, and Chen (2013). The above works assume that all agents update data synchronously at the same time, which requires clock synchronization techniques. However, it is sometimes difficult to guarantee sampling synchronism due to communication technology, external disturbances and so on. Therefore, it is necessary to design asynchronous sampled consensus algorithms, in which each agent can update data according to its own sampling period, regardless of the update times of its neighbour nodes. Asynchronous sampled consensus is studied for the Vicsek model in Cao, Morse, and Anderson (2008), where each agent samples the directions of its neighbours at discrete times. The asynchronous sampled consensus problem of second-order MASs with time-varying topologies is studied, and the sampling periods are designed, in Gao and Wang (2011). By modelling the switching of network topologies as a Markov process and considering the effect of communication delay, a new sampled-data consensus control protocol with a variable sampling period is proposed in Ding and Guo (2015).
From the perspective of resource utilization, sampling control is sometimes conservative (Zhang, Hu, Zou, Yu, & Wu, 2018). For example, when the states are almost the same at two consecutive sampling instants, still updating the control inputs and tasks periodically wastes the system's resources. In other words, if the state changes little, the previous data can be used in place of the current value. To overcome the limitations of sampling control and reduce unnecessary waste of resources, the basic idea of event-triggered control was first elaborated in Aström and Bernhardsson (1999). By comparing time triggering and event triggering, it is pointed out that the event-triggered strategy has greater advantages in reducing data transmission. The leaderless consensus and the leader-following consensus under event-triggered estimators can be achieved in Deng and Yang (2019) and Cheng and Ugrinovskii (2016). Event-based impulsive controllers and state-dependent triggering functions have been designed for leader-following consensus of MASs with general linear models. An adaptive event-based controller and triggering function for each agent are designed in Zhu, Wang, and Zhou (2019), which shows that the proposed adaptive event-based method can reduce the communication among neighbouring agents. An event-based leader-following strategy for synchronization of MASs is considered, and a model-based method is used to predict the relative states of nodes, in Wu et al. (2017). The problems of event-triggered group consensus and tracking group consensus are investigated in Ma and Du (2017), Tu, Zhang, and Xia (2016), and Yu, Yan, and Li (2017). An event-triggered control protocol is proposed to ensure the quantized consensus of second-order MASs in Mu, Liao, and Huang (2018). A hybrid trigger mechanism is proposed for MASs with time-varying delay, switching topologies, and random network attacks in Chen, Yin, Liu, and Liu (2019). For more recent advances in the event-triggered consensus of MASs, readers are referred to the review by Ding, Han, Ge, and Zhang (2018).
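The event-triggered idea can be illustrated with a simple static triggering rule in which an agent broadcasts its state only when the gap between its true state and its last broadcast exceeds a threshold. The rule, threshold, and graph below are illustrative assumptions, not a protocol from the cited references.

```python
import numpy as np

# Fully connected graph of 3 agents (illustrative assumption).
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

dt = 0.01
x = np.array([2.0, -1.0, 0.5])
x_hat = x.copy()          # last broadcast states, the only data neighbours see
events = 0

for step in range(2000):
    for i in range(3):
        # Static event condition (assumption): broadcast when the measurement
        # error since the last broadcast exceeds a fixed threshold.
        if abs(x[i] - x_hat[i]) > 0.05:
            x_hat[i] = x[i]
            events += 1
    u = -L @ x_hat        # control law uses only event-sampled information
    x = x + dt * u        # Euler integration of single-integrator dynamics

print(x, events)          # near agreement with far fewer than 3*2000 transmissions
```

With a fixed threshold the agents converge to a neighbourhood of agreement whose size scales with the threshold, which is the trade-off between accuracy and communication savings described above.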
Finite-time consensus
Convergence speed is an important performance index for consensus, and consensus algorithms are often evaluated by their convergence rate. Many researchers choose suitable communication topologies to obtain a higher speed through optimal vertex configuration. At present, most consensus algorithms are asymptotic: the optimal convergence rate is attained only as time goes to infinity, and the states of all agents cannot become identical within a finite time. However, many practical control systems impose stringent requirements on convergence time and fast dynamic response, needing to move to the equilibrium point of the system or to achieve zero tracking error within a finite time (Niamsup & Phat, 2018). For example, a brake control system requires that the vehicle's speed reach zero, or that the vehicle reach a specified position, within a limited time. Because finite-time consensus algorithms have the advantages of fast convergence, strong anti-interference, and excellent robustness to uncertain factors, finite-time consensus has a stronger engineering application background. However, the finite-time control problem is difficult for theoretical analysis, since finite-time controllers are typically non-smooth when time-invariant. Due to the lack of effective analysis tools, the design and analysis of finite-time consensus algorithms are much more difficult than those of asymptotic consensus. Therefore, proposing effective design and analysis methods for the finite-time consensus problem is an important engineering research topic.
At present, the main methods for studying finite-time consensus can be divided into two types. One is the homogeneous theory, which involves three steps: first, prove that the system is globally asymptotically stable under the given consensus protocol by constructing a Lyapunov functional and applying the Barbalat lemma or the LaSalle invariance principle; second, prove that the system is locally finite-time stable by using the homogeneous theory; third, conclude, by combining the first two steps, that the system achieves global stability in finite time. The other is the Lyapunov functional approach, by which finite-time consensus can be proved and an upper bound on the settling time obtained (Hu, Zhang, Kao, Liu, & Chen, 2019; Li, Liu, Sun, & Tan, 2019).
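For the Lyapunov-functional route, the key tool is the classical finite-time stability lemma; the following sketch is standard material and is not tied to the specific papers cited above. If a positive-definite Lyapunov function satisfies

V̇(x(t)) ≤ −c·V(x(t))^α, with c > 0 and 0 < α < 1,

then V reaches zero in finite time, and integrating dV/V^α from V(x(0)) down to zero gives the settling-time bound

T ≤ V(x(0))^(1−α) / (c(1−α)).

This is why such methods yield not only convergence but also an explicit upper bound on the convergence time.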
Finite-time consensus protocols can be divided into two categories: discontinuous protocols and continuous protocols (Zheng & Tie, 2014). Discontinuous protocols mainly include switching protocols and sliding-mode protocols. For MASs with a dynamic leader, Meng et al. designed a distributed observer based on a super-twisting algorithm to solve the finite-time consensus tracking problem in Meng and Lin (2014). Considering a dynamic leader with unknown acceleration and followers with bounded disturbances, a nonlinear consensus protocol based on a non-singular terminal sliding-mode algorithm is designed to drive the states of the followers to converge to the corresponding states of the leader in finite time in He, Wang, and Yu (2015). For MASs consisting of Euler-Lagrangian dynamics, Ghasemi and Nersesov (2014) design a nonsmooth sliding-surface-based protocol: when the system trajectory slides on the sliding surface, the states of the system reach the specified positions for collaborative control.
Due to the existence of discontinuous control terms, chattering arises in the system, which is undesirable in practical systems. Therefore, researchers have begun to design continuous controllers to avoid chattering. Continuous protocols mainly include single-fractional-power protocols, homogeneous finite-time protocols, high-order sliding-mode protocols, and augmented integral protocols. Under the condition of switching topologies with a tree structure, Xiao, Wang, and Chen (2014) propose a novel nonlinear continuous protocol for MASs with unknown internal dynamics. In the case of unmeasurable speed and input saturation, Zhang, Jia, and Matsuno (2014) design first-order and high-order finite-time observers by using the homogeneous theory to analyse the stability of the closed-loop system, so that the states of all followers converge to the leader's states in finite time. Wang, Li, and Shi (2014) design a distributed finite-time protocol by using the power-integration technique to ensure that all states of the followers converge into a convex set composed of the leader states; the algorithm is also applicable to the case of multiple static leaders. Zuo and Tie (2014) propose a class of global continuous time-invariant protocols for first-order integrator MASs such that the convergence time can be designed and estimated off-line. While some consensus algorithms depend on output information or the complete states of neighbours, Chen, Lewis, and Xie (2011) propose an algorithm requiring only relative state measurements, using a binary consensus protocol and a pinning control scheme. A continuous nonlinear distributed protocol is proposed to achieve finite-time consensus of heterogeneous systems in Li, Ren, and Liu (2016). A virtual velocity is introduced into the protocol of second-order MASs, which can be tracked by the real velocity in finite time, in Feng and Zheng (2018). The finite-time consensus of heterogeneous second-order MASs with measurable and unmeasurable velocities is studied in Wang and Xiao (2012). Zhang and Yang (2013) consider the finite-time consensus problems of second-order MASs with one and multiple leaders under a digraph. A new nonsingular finite-time terminal sliding-mode control method is proposed for second-order MASs with disturbances in Zhou, Zhou, Chen, and Chen (2018); based on the pinning error function vector, robust distributed finite-time consensus is achieved. Wang and Song (2018) present a distributed and smooth finite-time control scheme to achieve leader-following consensus under a topology containing a directed spanning tree. By using a recursive method, the finite-time consensus of high-order uncertain nonlinear MASs is guaranteed by non-Lipschitz continuous control laws in Hua, You, and Guan (2016). The global finite-time consensus tracking problem for uncertain second-order MASs with input saturation is studied in Yang, Eng, Dimarogonas, and Johansson (2013), where a distributed controller based on a sliding-mode observer is proposed to realize global finite-time consensus tracking with limited control input. Zheng, Chen, and Wang (2011) give a nonlinear consensus protocol for MASs with Gaussian white noises and define the concept of finite-time consensus in probability. The problem of leader-follower finite-time consensus for a class of time-varying nonlinear MASs is studied in Liu and Liang (2016).
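As a minimal illustration of a continuous protocol of the single-fractional-power type mentioned above, the sketch below applies u_i = Σ_j a_ij·sig(x_j − x_i)^α with 0 < α < 1, where sig(z)^α = sign(z)·|z|^α; the exponent, step size, and graph are illustrative assumptions rather than parameters from any cited paper.

```python
import numpy as np

def sig(z, alpha):
    """Signed fractional power sign(z)*|z|**alpha used by the protocol."""
    return np.sign(z) * np.abs(z) ** alpha

# Path graph 1-2-3 (illustrative assumption).
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)

alpha, dt = 0.5, 0.001          # fractional power 0 < alpha < 1
x = np.array([1.5, -0.5, 2.0])

for step in range(20000):
    diffs = x[None, :] - x[:, None]          # diffs[i, j] = x_j - x_i
    u = (A * sig(diffs, alpha)).sum(axis=1)  # u_i = sum_j a_ij sig(x_j - x_i)^alpha
    x = x + dt * u                           # Euler step of the non-Lipschitz dynamics

print(x)  # states agree at the initial average in finite time, up to step error
```

The protocol is continuous but non-Lipschitz at agreement, which is precisely what allows the settling time to be finite rather than asymptotic.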
Multi-consensus and multi-tracking
Due to the interactions between agents and their environments, MASs generate complex clustering behaviour. Researchers have taken a strong interest in the clustering behaviour of MASs, obtained meaningful research results, and applied them to fields such as traffic control (Abdoos & Mozayani, 2013; Balaji & Srinivasan, 2010), flexible manufacturing (Nejad & Sugimura, 2010), intelligent robots (Lopez-Ortega & Villar-Medina, 2009), and collaborative expert systems (Chiddarwar & Babu, 2011). In a multi-agent network, the nodes are autonomous individuals with a certain degree of intelligence. When multiple agents collaborate to complete a complex task, the evolution of the MASs presents multiple coordinated states in some stages due to different task assignments or changes in the environment in which the agents are located. Multi-consensus means that MASs present multiple consensus states under appropriate distributed control protocols. These states may or may not be related to a grouping of the MASs. In multi-tracking, the system collaboratively tracks multiple desired orbits (or virtual leaders) under appropriate distributed control protocols. Cluster behaviours of MASs, such as multi-consensus, multi-tracking, multi-swarming, and other clustering behaviours, have important theoretical significance for exploring the theory of complexity (Xu, Zhao, Yang, Gui, & He, 2017).
In recent years, many outstanding research results on multi-consensus and multi-tracking have emerged. Compared with the group consensus problem, the multi-consensus problem includes richer content and is closer to actual engineering problems; in other words, the group consensus problem is a special case of multi-consensus problems. The multi-consensus problem of MASs with nonlinear protocols is investigated in Li and Guan (2013). Cui and Xie (2016) introduce a protocol under the assumption that all subgroups satisfy the intra-balance condition, and focus on the group consensus tracking problem of continuous-time second-order MASs. Xie and Shi (2017) introduce a control protocol that divides the whole system into subgroups with multiple leaders and study discrete-time second-order multi-agent group tracking with Markov switching topology. Based on the internal model principle, Zhang et al. transform the global robust group output regulation problem into the global robust dynamic stability problem of the MASs; the global robust group output regulation problem for MASs with uncertain second-order nonlinear dynamics is studied in Zhang and Liu (2018).
In wireless networks, a multi-tracking protocol has been proposed in which the source node tracks the progress and 'cooperation' of its neighbours to improve end-to-end delay and overall network performance. For small self-organizing wireless networks with node failures, Yanmaz and Karrels (2008) study the performance of multi-tracking routing protocols. Han et al. study the multi-tracking problem of first-order multi-agent networks through self-triggered control. The states of multiple agents in each second-order sub-network asymptotically converge to the same desired trajectory in related work by Han and co-workers. Han and He (2016) introduce the concept of intelligence to characterize the level of agent intelligence, and propose a distributed switching impulsive protocol that alternately uses sampled position data and sampled velocity data. A control protocol is designed to achieve multi-tracking of bounded variables, where the final tracking error is proportional to the sampling period, in work by Han, Guan, and colleagues. Chen, Guan, and He (2015) study the multi-tracking problem of second-order MASs based on sampled position information. Based on the fast terminal sliding-mode control method, Han and Guan (2017) propose a distributed finite-time formation tracking protocol and study the finite-time formation tracking control problem of MASs. Wei and Yi (2016) transform the multi-consensus problem into the stability problem of an extended error system. A new pinning-like consensus protocol with aperiodic intermittent effects is designed in Huang and Jiang (2018). Franchi, Giordano, and Michieletto (2019) study the online choice of leaders.
For the multi-consensus and multi-tracking problems of high-order MASs, researchers have conducted in-depth studies (Monaco & Celsi, 2019; Qin, Ma, & Yu, 2018). Peng, Wang, and Zhang (2014) propose a new iterative learning method for the collaborative tracking and estimation of linear MASs with dynamic leaders. Amini and Azarbahram (2016) study a new method for achieving consensus of nonlinear MASs by using a fixed-order non-fragile dynamic output feedback controller. Hu and Guan (2016) adopt a node clustering scheme to ensure a relatively high degree of connectivity within each potential subgroup and use exclusion effects to deal with subgroup outbound links. Zhang, Liu, and Wang (2016) convert the multi-tracking control problem of MASs into the zero-steady-state-error control problem of some independent subsystems and study the multi-tracking control of high-order heterogeneous MASs. Yan and Yu (2017) study event-triggered tracking control of coupled-group MASs. Under the Lipschitz condition, Pei and Chen (2018) study the consensus tracking problem of heterogeneous MASs with fixed topology. Zhang and Han (2018) propose a distributed impulsive protocol to study the robust multi-tracking problem for heterogeneous MASs with uncertain nonlinearities and disturbances.
Conclusions and prospects
Over the past few years, the consensus of multi-agent systems has attracted much attention from various scientific communities. Up until now, many algorithms have been well designed to guarantee that agents converge to a common value. However, the existing results and methodologies have limitations arising from strict assumptions and special requirements. Taking into account the impact of time-varying dynamics, perturbations, various uncertainties, nonlinearities, the diversity of agent structures, and other more complex factors, a number of challenging research topics remain for further investigation.
Inter-group communication between different groups has an important impact on group consensus. The intra-balance communication constraints critically limit the application scope of group consensus. It is therefore worthwhile to study how to properly design and optimize communication between different groups of agents. In addition, new control methods should be explored to achieve group consensus, such as impulsive control, pinning control, adaptive control, etc.
There are many studies on the finite-time consensus problem for continuous-time MASs in the existing literature, but few address the case of discrete-time MASs. To the best of our knowledge, the finite-time consensus problem for discrete-time heterogeneous nonlinear MASs has not been adequately investigated.
Although some event-triggered consensus issues of MASs have been well addressed in the literature, it remains difficult to design dynamical event-triggered mechanisms due to the coupled information among agents. The event-triggered mechanism can reduce sampling actions and/or control updates, yet it decreases the convergence rate at the same time. Hence, it would be a promising topic to design a suitable event-triggered mechanism that achieves consensus in finite time; that is, the convergence rate is fast while the utilization of communication and computation resources is low.
The problem of consensus has been investigated for linear MASs with time delays, uncertain communication, saturation, quantization, and perturbation. However, in-depth research is still needed for nonlinear MASs with communication constraints, uncertainties, and perturbations.
The various consensus issues of multi-agent systems have yielded fruitful results in theory, including consensus subject to communication constraints, leader-following consensus, group consensus, consensus based on trigger mechanisms, finite-time consensus, and multi-consensus and multi-tracking. However, there are still gaps in how to apply mature theoretical results to actual engineering systems. Therefore, the engineering application of consensus algorithms has a long way to go.
Disclosure statement
No potential conflict of interest was reported by the authors.
Interactions of particulate matter and pulmonary surfactant: Implications for human health
Particulate matter (PM), the primary contributor to air pollution, has become a pervasive global health threat. When PM enters a respiratory tract, the first body tissues to be directly exposed are the cells of the respiratory tissues and the pulmonary surfactant. Pulmonary surfactant is a pivotal component that modulates the surface tension of alveoli during respiration. Many studies have proved that PM interacts with pulmonary surfactant and affects alveolar activity, while pulmonary surfactant in turn adsorbs onto the surface of PM and changes its toxic effects. This review focuses on recent studies of the interactions between micro/nanoparticles (synthesized and environmental particles) and pulmonary surfactant (natural surfactant and its models), as well as the resulting health effects, through a few significant aspects such as the surface properties of PM, including size, surface charge, hydrophobicity, shape, chemical nature, etc. Moreover, in vitro and in vivo studies have shown that PM leads to oxidative stress, inflammatory responses, fibrosis, and cancerization in living bodies. By providing a comprehensive picture of PM-surfactant interactions, this review will benefit both researchers in further studies and policy-makers in setting up more appropriate regulations to reduce the adverse effects of PM on public health.
Introduction
Nowadays air pollution has become one of the top health killers all around the world [1,2]. The main pollutants include ozone (O3), nitrogen dioxide (NO2), sulfur dioxide (SO2), and particulate matter (PM) [3]. PM is reported to induce adverse effects on human health after either short-term or long-term exposure. Short-term exposure generally causes acute inflammatory responses in airways and peripheral blood [4], while long-term exposure is positively related to the mortality of cardiovascular disease and lung cancer [5-7]. The deposition of PM in the respiratory tract is mainly dependent on particle size [8]. It is found that smaller particles result in greater total lung and peripheral lung deposition as well as deeper distal airway penetration [9]. Generally, particles with diameters larger than 10 μm can enter the nose and mouth. PM10 (the subscript represents an aerodynamic diameter of no more than 10 μm) [10] can penetrate the tracheal and bronchial regions, followed by deposition at lung bifurcations [11]. Among them, particles larger than 8 μm tend to impact the upper airways [12]. PM2.5 and PM0.1 can enter the non-ciliated alveolar regions, giving rise to deep deposition within the lung [11]. Since particles smaller than 0.5 μm can easily be exhaled after penetrating the lung, PM2.5 is considered a more serious threat to human health: particles within this range can be stably deposited in respiratory tracts, easily access cells, and transfer through the blood and lymph circulations. Toxic PM can cause severe symptoms at sensitive sites, e.g., bone marrow and heart [13]. Therefore, PM2.5 is most widely employed to study the pathological, physiological and toxicological effects of PM on human health. PM is not only a fast-growing factor associated with early death but also impacts atmospheric conditions [14-16], and the accumulation of PM over the years leads to climate change [17,18]. PM pollution is especially severe in developing countries due to economic growth and large populations. The average concentration of PM2.5 in China in 2013 was 61 μg/m³, while those in Europe and the USA were 16 μg/m³ and 10 μg/m³ [19], respectively, nearly all exceeding the World Health Organization (WHO) guideline (10 μg/m³) [3].
It has been proved that upon inhalation of PM into alveoli, the particles initially interact with pulmonary surfactant, which maintains alveolar stability and modulates immune functions, impairing the phase behavior and metabolism of alveoli (Fig. 1). Meanwhile, adsorbates such as polycyclic aromatic hydrocarbons (PAHs), As and Pb may penetrate the surfactant into the internal circulation and cause chronic and acute toxicities to human beings. Therefore, pulmonary surfactant is where the immediate health hazard occurs after PM enters the respiratory tract. There are many reviews summarizing the formation of PM and the possible hazards that PM brings to human health [20-22], providing systematic introductions to how PM pollution affects the public and nature. Several reviews shed light on the interaction between nanoparticles and pulmonary surfactant [23,24] as well as the corresponding cellular and immune responses. However, most of these reviews were published nearly 10 years ago, with many interaction mechanisms remaining unclear. In recent years, several reviews have been published on the topic of particle-lung surfactant interactions [25-27]. The authors discussed the interactions of lung surfactant with inhaled nanoparticles or with particles used in nanomedicine (especially drug delivery). Garcia-Mouton et al. emphasized the features of nanoparticles and the mutual effects of nanoparticles and surfactant [25]. Hidalgo et al. focused on the fate of nanoparticles utilized as nanocarriers during their interaction with pulmonary surfactant, suggesting the application of surfactant in nanomedicine [26,27]. These reviews offer not only summaries of particle-surfactant interactions, but also implications for future nanomedicine.
In this review, summaries of the composition and formation of both PM and pulmonary surfactant are first introduced. Then the interactions between various particles and pulmonary surfactants are discussed, followed by the toxicological impacts of particles on lung surfactant, the lung, and other living cells and organisms. This review aims to highlight recent progress in in vitro research on the biophysical, physicochemical and morphological changes of pulmonary surfactant upon exposure to PM, with a focus on the underlying intermolecular and interfacial interaction mechanisms, correlating the fundamental investigations to surfactant functions and the physiological performance of the lung and other tissues and organs. The particles investigated in the literature are mostly synthetic compounds and mixtures, while environmentally derived PM is also studied.
Composition and formation of PM
PM, a mixture of solid, liquid and gaseous matter in the air, commonly contains inorganic components such as sulfates, nitrates, ammonium [28], crustal materials, sea salts [29], silicon, elemental carbon [30], and organic components such as organic carbon, quinones [31], and polycyclic aromatic hydrocarbons (PAHs). Though organic compounds like quinones and PAHs are present in low amounts, their disease-inducing effects (e.g., cancer [32,33] and diabetes [34]) cannot be neglected. The PM composition is complex and varies with time and location. Usually, the chemical composition can be analyzed by inductively coupled plasma mass spectrometry [29], X-ray fluorescence spectrometry and ion chromatography [15]. Study of the composition enables us to identify the sources of PM and how it affects public health and meteorological conditions.
There are two major sources of PM: anthropogenic activities and natural sources. Generally, naturally derived PM comes from volcanoes, dust storms, living vegetation, etc., while anthropogenic PM comes from fuel combustion, industrial and agricultural activities, traffic, etc. These sources lead to spatial variation in PM composition and in the mass percentage of each component. Mineral materials are the major contributor to PM composition as primary aerosol species, which are formed directly from sources as particles (Fig. 2). The spatial variation of mineral materials in PM results from different crustal compositions. The PM dust collected in Northern China showed high levels of SiO2 and CaO, along with low levels of K2O and Na2O, while some samples also contained heavy metals such as As and Pb. The main elements present in the dust, including Si, Al, Fe and Ca, are consistent with the local crustal composition [35]. The mean concentration of mineral species in PM is usually higher in urban areas than in other regions due to the influence of agricultural activities and unpaved roads. However, the rural areas in northwest China are an exception, where the low vegetation and forest cover results in a high level of mineral species in the PM composition [28]. Another primary aerosol species, organic carbon, was found at high levels in rural northwest China, urban South Asia, and the High Asian Area (higher than 1680 m a.s.l.), where biomass burning is a determining factor. In contrast, the organic carbon levels in the USA and Europe are much lower [15]. The third largest primary aerosol species, elemental carbon, is emitted at the highest levels in Asia and Africa [36] due to the incomplete combustion of carbonaceous materials [37]; a similar trend was also observed in the Midwestern USA [28]. The relatively high concentration of SO4²⁻ in urban areas is due to industrial and residential heating [38].
NO3⁻ and NH4⁺ also mainly result from fossil fuel combustion [39], while the use of fertilizers and biomass burning contribute to NH4⁺ emission as well [40]. SO4²⁻ was found to have the highest emission in China, owing to economic growth and a dense population, which require huge coal consumption [15,41-43]. Unlike in China, the elevated PM2.5 concentration at the monitored sites in Iowa was mostly attributed to vehicular emission [28]. In addition to crustal composition and anthropogenic activities, terrain and meteorological conditions also play important roles in the discrepancy of PM composition among locations. Air stagnation is a meteorological phenomenon with features such as anti-cyclonic conditions, high temperature, no precipitation and weak wind [44]. These features result in the accumulation of PM pollution: high temperature accelerates the volatilization of ammonium nitrate, and the lack of precipitation removes the scavenging sink that would reduce PM2.5. Air stagnation is strongly associated with terrain conditions. For example, the low elevation of eastern China, the west coast of the USA and the Mediterranean basin gives rise to more frequent air stagnation, and thus PM pollution in these areas is more severe than in other surrounding regions. High altitude can also lead to air stagnation in peripheral areas where surface wind is blocked [19]. In addition to spatial variation, PM pollution exhibits a seasonal variation that can be ascribed to both anthropogenic activities and meteorological conditions [44,45]. In winter and spring, the air quality in China is always much worse, due to heating, which increases secondary aerosol emission, and more stagnant air conditions with a lower planetary boundary layer [17].
Along with the advent of nanotechnology, a myriad of nanoparticles have been produced in industry. These intentionally engineered nanoparticles have become another source of PM pollution. Specifically, the application of inhaled targeted drug delivery in vaccines, therapeutics and diagnostics has provoked new concerns about potential adverse effects on human health [46,47].
Composition and function of pulmonary surfactant
Pulmonary surfactant is a mixture of surfactant proteins (SP-A, SP-B, SP-C, and SP-D) and lipids at the alveolar air/liquid interface [48,49], and it is a crucial part of physiological respiration [50]. It generally has three key functions: (1) preventing alveoli from collapsing during respiratory activity, as it lowers the energy required for the alveoli to inflate by varying the surface tension at the air/liquid interface; (2) defending against pathogens by killing them or preventing their dissemination; and (3) modulating immune responses [51]. The first function is achieved through the film lipids enriched in dipalmitoyl phosphatidylcholine (DPPC). The viral defense and the modulation of immune responses are mainly ascribed to the surfactant proteins (SP-A and SP-D).
Pulmonary surfactant contains four major surfactant proteins with critical functions, though their weight percentage (10%) is much lower than that of the surfactant lipids (90%). The hydrophilic surfactant proteins SP-A and SP-D are Ca²⁺-dependent lectins. Both SP-A and SP-D are involved in host-defense mechanisms and protection against viral infections [52,53] due to structural similarities, e.g., C-terminal lectin domains which bind non-host oligosaccharides on viruses and bacteria [51]. Other structural characteristics include the NH2-terminal domain, a collagen-like region, and the COOH-containing carbohydrate recognition domain [54,55]. In contrast, the hydrophobic surfactant proteins SP-B and SP-C are involved in the stabilization and formation of the surfactant film. The detailed structural features and functions of the surfactant proteins are presented in Table 1.
Lipids comprise 90% of the mass of pulmonary surfactant. Phospholipids contribute 80-90% of the mass of the lipids, while cholesterol, triglycerides, and fatty acids make up the remainder. Phosphatidylcholine (PC) is the most abundant phospholipid (80-85% of phospholipids) in pulmonary surfactant, with the disaturated compound dipalmitoyl phosphatidylcholine (DPPC) as the major component of PC. Other phospholipids include phosphatidylglycerol (PG), phosphatidylinositol (PI), phosphatidylethanolamine (PE), and phosphatidylserine (PS). PG is the second most abundant phospholipid (7-15% of phospholipids), while PI, PE, and PS each account for less than 5% [51,68]. The structures and functions of some representative lipids are illustrated in Table 2.
Surfactant proteins play important roles in film stabilization and viral defense, while surfactant lipids are ultimately responsible for the surface tension change during respiration [69]. DPPC, the major component that occupies 30-45% of pulmonary surfactant [86], is a zwitterionic molecule that contains two hydrocarbon tails and a polar head. The gel-to-liquid-crystal melting transition of DPPC occurs at 41°C, which keeps DPPC in a condensed phase at physiological temperature [69]. The condensed phase of DPPC endows it with the ability to reduce surface tension to near zero at the end of exhalation. This property results from the highly packed alignment of DPPC molecules at the air/liquid interface [48,87]. When an alveolus is inflated or deflated, the pressure difference across the interface (ΔP) is proportional to the surface tension (γ) divided by the radius of the alveolus (R), as described by the Laplace equation:

ΔP = 2γ/R (1)
If the surface tension of the alveolus remained high during exhalation, the pressure difference would increase with decreasing radius, making the pressure in a smaller alveolus higher than that in a larger one. Because of the interconnected structure of the alveoli, the smaller ones would collapse into the larger ones [88,89]. The presence of DPPC in lung surfactant effectively solves this problem by reducing the surface tension.
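A quick numerical check of Eq. (1) makes this imbalance tangible; the radii and surface tension values below are illustrative assumptions (a typical untreated air/water-like tension versus a surfactant-lowered one), not measurements from the cited studies.

```python
def laplace_dp(gamma, radius):
    """Pressure difference across a spherical interface, dP = 2*gamma/R (Pa)."""
    return 2 * gamma / radius

# Illustrative values (assumptions): radii in metres, tensions in N/m.
for gamma in (25e-3, 1e-3):           # untreated vs surfactant-lowered tension
    for r in (50e-6, 100e-6):         # a small and a large alveolus
        dp = laplace_dp(gamma, r)
        print(f"gamma = {gamma*1e3:.0f} mN/m, R = {r*1e6:.0f} um -> dP = {dp:.0f} Pa")
```

At 25 mN/m the small alveolus carries twice the pressure of the large one (1000 Pa versus 500 Pa), whereas lowering the tension to 1 mN/m shrinks both values, and hence the destabilizing difference, by a factor of 25.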
The pulmonary surfactant can stabilize the alveoli during respiration via two processes. First, after the secretion of bilayer vesicles into the alveolar subphase, lipids, proteins, and other components are adsorbed quickly onto the air/water interface to form an interfacial film. During the adsorption process, SP-B and SP-C lower the energy barrier for this energetically unfavorable transfer and stabilize the intermediates involved, ensuring that this process is fast enough on the scale of seconds [90]. Second, when the surfactant film is compressed by the shrinking of the alveolus and the surface pressure is elevated, the film stays at the interface to reduce the surface tension and minimize the volume change of the alveoli. Once the maximum surface pressure (Πe) is reached, the film transits from a two-dimensional structure to a three-dimensional structure and the collapse of the film occurs [69]. In the lung, alveolar collapse is avoided thanks to the surfactant. When a film containing disaturated phospholipids (e.g., DPPC) and unsaturated lipids (e.g., POPC and POPG) is transferred to the air/liquid interface and compressed, liquid condensed (LC) domains with ordered packing and quasi-crystalline organization are formed, floating in a much more disordered phase called the liquid expanded (LE) phase. If cholesterol is present in the film, it modulates the organization and dynamics of the phases, converting the LC/LE coexistence into liquid-ordered/liquid-disordered coexistence [90]. A classical model indicated that the interfacial film contains only a rigid condensed phase during compression, which implies the retention of DPPC in the film and the "squeeze-out" of all other components, especially fluid lipids, to the subphase to promote low surface tension [91]. The "squeezed out" part remains underneath as a reservoir for the expansion. The reservoirs underneath are multilayered and multilamellar structures interconnected with the interfacial monolayer film by surfactant proteins, which facilitates the fast diffusion of surface-active species to the interface upon expansion [92]. However, many pieces of evidence have proved that the film under compression is not that condensed, suggesting a coexistence of LC, LE and collapsed phases [93,94]. An observation further revealed that the concentration of DPPC in the interfacial film was reduced during compression, which implies that, in addition to fluidizing lipids, some DPPC may also be squeezed out [95]. The synergistic effect of surfactant proteins and lipids keeps the alveolar lining in a metastable status, preventing lung collapse during respiration.

Table 1 (recovered fragment). Surfactant proteins and their functions ("i" denotes hydrophilic, "o" hydrophobic):
- SP-B (o): enhances adsorption of phospholipids from the subphase to the interface [60]; increases the collapse pressure of fatty acids to avoid "squeeze-out".
- SP-C (o): nonpolar α-helical protein containing 35 amino acids, 4.2 kDa [61]; stabilizes phospholipids [62,63] and increases the viscosity of the interfacial film [64].
- SP-D (i): glycoprotein, a dodecamer of four trimers, 43 kDa [65]; regulates surfactant metabolism [66] and promotes the uptake of pathogenic bacteria by epithelial cells [67].

Table 2 (recovered fragment). Representative surfactant lipids and their functions:
- DPPC: remains in a condensed phase at physiological temperature [69]; generates a near-zero surface tension [70].
- Phosphatidylserine (PS): determines the cellular and subcellular distribution of quinidine [80]; regulates the activities of several enzymes in cell signaling [81].
- Phosphatidylethanolamine (PE): causes lateral pressure and introduces curvature stress to stabilize membrane proteins [82,83].
- Phosphatidylinositol (PI): increases the rate of alveolar fluid clearance [84]; involved in the stabilization of the surfactant monolayer [81].
Interactions of particulate matter and pulmonary surfactant
A variety of nanoparticles, such as simple metal oxides [96-100], non-metal oxides [101,102], polymer-coated and polymer nanoparticles [103-105], metal nanoparticles [106-108], carbon nanomaterials [100,109], and other compounds containing carcinogenic matter [110-113], have been investigated in studies of particle-surfactant interactions. A few studies on environmental PM dust exposure have also been conducted [110,114,115], but they are still at a very early stage compared with those using engineered nanoparticles. One challenge is the variety of composition, size, and other biological and physical properties of PM2.5 across geographic locations and seasons. This variety also contributes to the high heterogeneity and complexity of the particles, which creates difficulties in identifying and characterizing the effect of each component. Many engineered nanoparticles such as silica and aluminum oxide are also present in PM dust; therefore, in many studies, engineered nanoparticles are used as substitutes for environmental PM to provide implications for air pollution.
DPPC is a commonly used surfactant model because of its dominant proportion in natural pulmonary surfactant and the convenience of preparation and characterization. Some natural lung surfactants derived from mammals are also used in light of their physiological relevance. These natural surfactants comprise not only DPPC but also surfactant proteins and other lipids that may exist in human lungs. In most studies, surfactant molecules are dispersed as monolayers or bilayers to explore the interaction with particles. The Langmuir monolayer has been widely utilized because it is an essential surfactant model of biological relevance [116,117]. Studies of bilayer and multilayer structures, which are more physiologically relevant, have also been conducted using newer characterization methods.
PM adheres to pulmonary surfactant through various types of interactions, such as electrostatic forces, hydrophobic interactions, van der Waals forces, etc. Among them, electrostatic forces and hydrophobic interactions depend primarily on the surface charge and surface hydrophobicity of PM, respectively, while the van der Waals force is ubiquitous and distance-dependent, arising from the inherent movement of electrons. The interaction potential (energy), w(r), and force, F(r), of the van der Waals interaction between two atoms or small molecules can be obtained as follows:

w(r) = −C_vdW/r⁶ (2)

F(r) = −dw(r)/dr = −6C_vdW/r⁷ (3)

Here, r is the interatomic distance and C_vdW is the interaction constant, which is given by

C_vdW = [μ₁²μ₂²/(3k_BT) + μ₁²α₂ + μ₂²α₁ + 3α₁α₂hν₁ν₂/(2(ν₁ + ν₂))] / (4πε₀ε)² (4)

where μ₁ and μ₂ are the dipole moments, α₁ and α₂ are the electronic polarizabilities, ε₀ is the vacuum permittivity, ε is the relative dielectric constant of the surrounding medium, k_B is the Boltzmann constant, T is the temperature, ν₁ and ν₂ are ionization frequencies, and h is the Planck constant [118]. Equations (2) and (3) can be extended to macroscopic geometries:

w(d) = −πCρ/(6d³) (5)

F(d) = −πCρ/(2d⁴) (6)

w(d) = −AR/(6d) (7)

F(d) = −AR/(6d²) (8)

where d is the distance between the interacting objects, C is the interaction constant decided by the molecular properties, ρ is the number density of molecules of the surface material, A is the Hamaker constant [119], and R is the radius of the sphere. Equations (5) and (6) give the van der Waals interaction energy w(d) and the force F(d) between an atom and a flat surface, while Equations (7) and (8) describe those of sphere-flat surface systems. Equations (2)-(8) apply when the interaction range and separation are much smaller than the radii of the interacting objects. It is obvious that the van der Waals force depends strongly on the distance between the surfactant and PM. Equations (2)-(8) can be used to estimate the interactions in neutral PM-surfactant systems, or to evaluate the van der Waals contribution to the net interfacial interaction in charged PM-surfactant systems. PM impacts pulmonary surfactant in many aspects, including phase behavior, stability, composition, morphology, etc. Particles may be retained in the surfactant, forming aggregate domains, or surfactant may be adsorbed onto the surface of the particles; thus the concentration, elasticity, lateral diffusional property, compressibility, and adsorption behavior of the surfactant are changed, which directly affects the biophysiological properties of the surfactant. Particles may also penetrate through the surfactant lining to induce inflammation, oxidative stress, or other cytotoxicity and genotoxicity in surfactant and epithelial cells, which is considered an indirect effect on the alveolar lining. The transport of particles in monolayers is determined by Brownian motion and the drag force exerted on the particles by the film. The mean-squared displacement of Brownian motion is

⟨r²⟩ = 4Dt (9)

where the 4Dt form applies to particles attached to the film undergoing two-dimensional lateral movement, D is the diffusion coefficient, and t is the lag time [120]. The viscosity of the surfactant film influences the diffusion according to the Stokes-Einstein equation

D = k_BT/(6πηR) (10)

where k_B is the Boltzmann constant, T is the temperature, η is the viscosity of the film, and R is the radius of the particle. Because of the high viscosity of the surfactant, the additional drag force causes a decrease in the diffusion coefficient. The drag coefficient ζ is expressed as

ζ = k_BT/D (11)

where k_B is the Boltzmann constant, T is the temperature, and D is the diffusion coefficient.
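The following sketch evaluates two of the relations above — the sphere-surface van der Waals force of Eq. (8) and the Stokes-Einstein diffusion coefficient of Eq. (10) — for a 100 nm particle; the Hamaker constant, separation, and film viscosity are order-of-magnitude assumptions, not values from the cited references.

```python
import math

kB = 1.380649e-23          # Boltzmann constant (J/K)
T = 310.0                  # body temperature (K)
R = 50e-9                  # particle radius (m), i.e. a 100 nm particle

# Eq. (8): van der Waals force between a sphere and a flat surface, F = A*R/(6*d^2).
A_H = 1e-20                # Hamaker constant (J), order-of-magnitude assumption
d = 1e-9                   # sphere-surface separation (m), assumption
F_vdw = A_H * R / (6 * d**2)

# Eq. (10): Stokes-Einstein diffusion coefficient, D = kB*T/(6*pi*eta*R).
eta = 1e-3                 # viscosity (Pa*s); water-like value assumed for the film
D = kB * T / (6 * math.pi * eta * R)

# Eq. (11): drag coefficient from the fluctuation-dissipation relation.
zeta = kB * T / D

print(f"F_vdW ~ {F_vdw:.2e} N, D ~ {D:.2e} m^2/s, zeta ~ {zeta:.2e} kg/s")
```

With these assumed values the adhesion force is on the order of 10⁻¹⁰ N and D is a few μm²/s, illustrating why nanometre-scale separations dominate adhesion while the film viscosity controls lateral transport.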
For larger particles (i.e., particles with diameters larger than about 200 nm), the drag exerted by the highly viscous surfactant is added to the Stokes-Einstein model, leading to the Danov-Aust-Durst-Lange (DADL) model [121,122], in which the diffusion coefficients (Eqs. (12) and (13)) are expressed in terms of the two-dimensional viscosity η_m. Equations (9)-(13) show that the translocation of PM within the surfactant is affected by the viscosity of the surfactant and the temperature. The instrument most prevalently utilized in research on the interactions between pulmonary surfactant monolayers and nanoparticles is the Langmuir-Blodgett trough (LB trough) (Fig. 3A). The trough can be used to prepare thin molecular films at the air/liquid interface. The back-and-forth motion of two opposing barriers changes the surface area of the film through compression and expansion, simulating the state of pulmonary surfactant during respiration [123]. During expansion and compression, the surface tension of the film is continuously measured with a Wilhelmy plate (balance), and a plot of molecular area versus surface pressure is then generated. This surface pressure-area (Π-A) isotherm clearly depicts the phase behavior of the interfacial film. Technically, the LB trough is applicable to amphiphilic molecules that can form stable Langmuir films at the air/liquid interface. Besides being a tool for interfacial property studies, the LB trough can also be used to fabricate highly organized multilayer structures. Langmuir films can be transferred to a substrate by either the Langmuir-Blodgett (LB) technique, which uses vertical lifting (Fig. 3B), or the Langmuir-Schaefer (LS) technique, with horizontal lifting (Fig. 3C). Multiple layers can be obtained with the LB technique by repeating dipping and lifting cycles [124]. The LB technique often leads to low surface coverage for non-amphiphilic molecules because of aggregation. The LS technique gives rise to a uniform film of high quality on the substrate [125], though a multilayer film cannot be obtained.
There are several different methods of exposing lung surfactant to PM. One is to mix the surfactant solution with particles and then inject the mixture onto the interface with a microsyringe [101,103,108,112,126]. This conventional method enables easier control over the feed ratio, and sufficient exposure of surfactant to particles can also be achieved. Nevertheless, observation of the adsorption behavior of particles onto the surfactant is ruled out, since the mixing results in some adsorption before measurement, and the manual mixing disqualifies this exposure method from simulating the natural inhalation process. Another method is to disperse particles in the subphase, with a lipid solution spread at the interface either after [96,98,109,127-129] or before particle dispersion [99]; in the latter situation the particle solution is injected through the lipid film into the subphase. This method is adequate for studying adsorption behavior but requires a large number of particles, and the diffusion of colloidal particles in the subphase greatly affects the adsorption process. Besides the above-mentioned strategies, it is also possible to deposit surfactant at the air/water interface and then spread colloidal particles onto the interface from the air side [110,130]. This process is more similar to the natural exposure of the lung lining to PM than the previous ones. A dry powder insufflator has been used to generate and apply particle aerosols to surfactant, which mimics how lung surfactant is actually exposed to PM. In the study conducted by Farnoud et al. [131], aerosols were applied to the surfactant during the whole compression process instead of solely at 0 mN/m. This operation implies the importance of particle deposition on compressed films in the study of particle-surfactant interactions.
The phase behavior of surfactant films at the air/liquid interface can be evaluated from isotherms of molecular surface area versus surface pressure. An isotherm example of pure deuterated DPPC-d62 shows an obvious LE-LC transition and coexistence (Fig. 4A). The gas-to-LE transition plateau indicates that DPPC starts to be compressed into a two-dimensional liquid. With further compression from LE to LC, DPPC transits from a liquid to a two-dimensional packed semicrystalline phase. In the LC phase, the tails of the DPPC molecules are aligned and point out into the air, while water molecules are squeezed out, leading to a dehydrated, poorly re-spreading DPPC film with poor adsorption ability [132]. Finally, DPPC collapses at the maximum surface pressure (Πe). In contrast, the LE-LC coexistence plateau is not observed for natural surfactant because of its multicomponent nature. For example, in Fig. 4B, the transition from LE to LC-LE coexistence appears as a change in the slope around 15 mN/m. The LB trough can also be used to monitor the surface pressure change of lung surfactant during dynamic compression-expansion cycles, giving a better understanding of its surface activity under breathing conditions. Fig. 4B shows the typical surface pressure change of Survanta during compression-expansion cycles, which mimic the expiration-inspiration process. Following the collapse during compression, the film was immediately returned to the fully expanded area, with an elastic stretch occurring at the drastic decrease of surface pressure. This stretch was due to the recovery of the film from folds and collapse. During compression, the flat region around 42 mN/m is a "squeeze-out" plateau, where the unsaturated lipids that cannot withstand high pressure are squeezed out into the subphase as a reservoir. Saturated lipids and fatty acids are left to form a condensed phase. Upon expansion, part of the reservoir is re-adsorbed into the interfacial film while the remaining material is permanently lost. This phenomenon is exhibited as a hysteresis loop between the compression and expansion curves (Fig. 4B) [133,134]. The hysteresis area can be used to indicate the stability and respreadability of the surfactant film, where a larger loop represents a higher degree of inhibition of the surfactant. Because part of the surfactant is expelled at the end of the first collapse, the second compression curve shifts significantly toward smaller surface areas compared with the initial one. Similar shifts also occur for the following consecutive cycles.
(Fig. 4 caption, adapted with permission from Kodama et al. [130]; Copyright © 2014 Biophysical Society, published by Elsevier Inc.: only "squeeze-out" plateaus around 42 mN/m, representing the monolayer-to-multilayer transition, are observed during compression; the change in slope indicates the phase transition; the expansion curves exhibit elastic stretching around 37 mN/m.)

The surface pressure of a monolayer is affected by temperature, molecular area, the number of ions, etc. Because the LE and LC phases contribute differently to the surface pressure, two separate equations of state, Eq. (14) for the LE state and Eq. (15) for the LE-LC coexistence state, have been derived and agree well with experimental Π-A curves [135]. In these two equations, m is the number of kinetically independent units per monolayer molecule, k is the Boltzmann constant, T is the temperature, A₀ is the actual area required for each lipid molecule, ω is the particle molecular area, Π_coh is the cohesion pressure that accounts for the intermolecular interaction, and θ is a coefficient related to the area per mole of monomer in a cluster. The parameters α and β (Eqs. (16) and (17)) are expressed in terms of A_c, the molecular area at the onset of the transition, and Π_c, the surface pressure at the commencement of the transition [135,136]. It is noted that the predictions of Eqs. (14) and (15) are fully reliable only within the low-pressure range. Besides phase transition behavior, the compressibility, elasticity, viscosity, etc. can also be derived from Π-A isotherms. The compressibility C_s is expressed as

C_s = −(1/A)(∂A/∂Π) (18)
where A is the molecular area and Π is the surface pressure. Elasticity is the reciprocal of compressibility. A higher compressibility (lower elasticity) represents a more dilute monolayer with weak intermolecular interactions. Low compressibility is beneficial for surfactant function because it enables surfactant films to reach zero surface tension rapidly with a small change of surface area. In addition to the LB trough, the pulsating bubble surfactometer [137] and the captive bubble tensiometer [138] have also been used to measure the surface activity of lung surfactant.
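Given a measured Π-A isotherm, Eq. (18) can be evaluated numerically by finite differences; the sketch below uses synthetic isotherm data, which are assumptions for illustration only and not values from any cited measurement.

```python
import numpy as np

# Synthetic isotherm data (assumptions): molecular area (A^2/molecule) vs Pi (mN/m).
area = np.array([100.0, 90.0, 80.0, 70.0, 60.0, 50.0, 45.0])
pi = np.array([0.5, 2.0, 5.0, 10.0, 20.0, 35.0, 50.0])

# Eq. (18): Cs = -(1/A) * dA/dPi, with the derivative taken along the isotherm.
dA_dpi = np.gradient(area, pi)     # handles the non-uniform Pi spacing
Cs = -dA_dpi / area                # units: (mN/m)^-1, i.e. m/mN

for a, p, c in zip(area, pi, Cs):
    print(f"A = {a:5.1f} A^2, Pi = {p:5.1f} mN/m, Cs = {c:.4f} m/mN")
```

The elasticity (compressional modulus) is simply 1/Cs, so the same two arrays also yield the quantity usually reported for condensed-phase films.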
Atomic force microscopy and domain images
The morphological properties of pulmonary surfactant are studied by atomic force microscopy (AFM) [139] after a Langmuir film is transferred onto a substrate, or by Brewster angle microscopy (BAM) directly at the air/liquid interface. In AFM images, pure DPPC exhibits regular patterns in which the dark LE phase is distinguished from the bright LC phase (Fig. 5). The shapes and sizes of the domains change with the extent of compression. The shapes are determined by two competing forces: the long-range dipole-dipole repulsion, which favors elongated domains, and the boundary tension (γ), which forces a domain toward a circular shape to minimize the tension [140]. The optimal diameter of an LC domain is determined by

D_eq = (e³δ/4) exp(4πε₀εγ/Δm²) (19)

where e is the natural constant, δ is the molecular dipole distance, ε is the dielectric constant of water, ε₀ is the dielectric constant of vacuum, and Δm is the dipole density difference between the phases [141-144]. The disconnected LC domains move by Brownian motion and are repelled by each other and by the LE domains, avoiding coalescence. Previous studies showed that the introduction of particles into the surfactant leads to changes in domain shape and size. For example, alkylated Au nanoparticles (NPs) disrupt the network of microdomains and cause pinhole defects in both LE and LC domains (Fig. 5).
The fractions of the LC and LE domains determine the viscosity of the lung surfactant, and thus the translocation of particles within the surfactant is affected. The overall viscosity of the film (η_s) is given by

η_s = η_s0 (1 − A/A_c)⁻² (20)

where η_s0 is the shear viscosity of the continuous LE phase, A is the area fraction of LC, and A_c is the critical LC fraction at which the LC domains start to merge and the viscosity diverges [145]. As a result, the translocation of particles within lung surfactant is greatly influenced by the compression extent of the film [146]. In addition to AFM, other instruments such as fluorescence microscopy [147] can also be used to study the morphological changes of lung surfactant when it interacts with PM.
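Taking the divergence law in Eq. (20) at face value, the sketch below shows how rapidly the film viscosity grows as the LC coverage approaches the critical fraction; both parameter values are illustrative assumptions.

```python
eta_s0 = 1e-9     # surface shear viscosity of the continuous LE phase (N*s/m), assumed
A_c = 0.68        # critical LC area fraction at which domains merge, assumed

def film_viscosity(lc_fraction):
    """Eq. (20): eta_s = eta_s0 * (1 - A/A_c)**(-2), diverging as A -> A_c."""
    return eta_s0 * (1 - lc_fraction / A_c) ** -2

for lc_fraction in (0.1, 0.3, 0.5, 0.65):
    print(f"LC fraction {lc_fraction:.2f}: eta_s = {film_viscosity(lc_fraction):.2e} N*s/m")
```

The steep rise near A_c is why particle mobility (through the drag relations of Eqs. (10) and (11)) drops sharply in highly compressed films.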
Surface Forces Apparatus (SFA)
The surface forces apparatus (SFA) is in some ways similar to AFM: whereas AFM generally measures the force between a sharp tip and a surface, SFA detects the force between two curved surfaces. In both techniques, the tip and the surfaces can be functionalized with desired molecules [148]. SFA has the unique feature of measuring interaction forces as a function of the absolute separation between two surfaces, which is especially useful for soft materials [149,150]. The interfacial force is measured based on Hooke's law by detecting the deflection of a double-cantilever spring [151], while the surface separation distance is monitored in real time using an optical technique called multiple beam interferometry [152]. Lung surfactant lipids can be deposited onto substrate surfaces (e.g., mica) as monolayers, bilayers or multilayers with an LB trough and then used for force measurements in aqueous media using SFA. The surface force measurements can provide useful information regarding particle-lipid systems. Moreover, the measurement of interfacial forces within surfactant lipid monolayer and bilayer systems can offer important information and interactive monitoring of membrane fusion during PM deposition.
SFA has been widely applied to quantify a variety of forces in both biological [148] and non-biological systems [153], such as van der Waals forces, electrical double-layer forces, hydrophobic interactions [154] and solvation forces (e.g., hydration interactions). The interfacial forces between different surfactant components have been directly quantified using an SFA [155]. In this study, the stability of the bilayers during surface interaction was evaluated by detecting the occurrence of hemifusion using SFA. Lee et al. [156] obtained real-time images of domain reorganization and force-distance profiles of lipid bilayers during membrane hemifusion using a fluorescence SFA. They demonstrated that the domains tend to rearrange in order to decrease the energy barrier and increase the fusion rate in the membranes. This technique correlates fluorescence imaging with force measurements, which makes it applicable to monitoring dynamic rearrangement and adsorption processes in lung surfactant monolayers and bilayers.
X-ray scattering and other techniques
The structural information of lung surfactant at the air/water interface can be obtained with several techniques, such as x-ray scattering, neutron reflectivity and sum frequency spectroscopy. X-ray scattering is a family of sensitive and nondestructive analytical techniques for characterizing the chemical composition, crystal structures and physical properties of thin films and other materials. These techniques can be used to investigate biological membranes directly and in situ at the air/water interface under near-physiological conditions [157]. X-ray scattering techniques include x-ray reflectivity (XR), grazing incidence x-ray diffraction (GIXD), x-ray diffraction, etc. In XR, x-rays are reflected from a flat surface and measured. XR is useful for measuring the layer thickness of thin films and multilayers, surface density gradients and layer density, and surface roughness. This technique can easily distinguish monolayers from bilayers or multilayers [158] owing to the reflectometry pattern produced by the interference of the beams reflected from each interface. Usually, a surface-normal electron density profile is acquired in XR. The in-depth structural change of a surfactant monolayer can be revealed from the visible interference fringes and distinct features in XR curves [159]. In addition to the outer part of monolayer structures, XR is also very sensitive to layered lung surfactant due to the presence of ionic lipids with a high density of electrons in the headgroups [158]. GIXD provides information about the atomic order, crystallinity and molecular orientation of surfaces and layers. GIXD is generally used in combination with XR, which gives a more comprehensive picture of the three-dimensional distribution of lung surfactant [127,160].
Neutron reflectivity is similar to X-ray reflectivity: a beam of neutrons is shone onto a flat surface, and the reflected intensity is measured. The neutron scattering contrast between isotopes such as hydrogen and deuterium can be very large, so selective deuteration leads to high contrast in the measurement. Neutron reflectometry has been widely used to observe the structures of bilayers attached to monolayers or to additional bilayers at the air/water interface [161]. The multilayers of bovine- and porcine-derived pulmonary surfactant at the air/water interface were investigated by neutron reflection, showing a disordered lateral surface with lipid/protein bilayers alternating with aqueous layers [162]. The repetition period and correlation depths were also measured to be 70 Å and 3 to > 25 bilayers, respectively.
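For intuition, a repeat period of d = 70 Å should produce Bragg peaks in the reflectivity at q_n = 2πn/d; the short check below uses only the period quoted above.

```python
import math

d = 70.0  # multilayer repeat period in Angstrom, as reported in [162]
for n in (1, 2):
    print(f"Order {n}: Bragg peak expected at q = {2 * math.pi * n / d:.3f} 1/Angstrom")
# First order: ~0.090 1/Angstrom; second order: ~0.180 1/Angstrom.
```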
Sum frequency generation (SFG) vibrational spectroscopy is a second-order nonlinear optical technique. This coherent vibrational spectroscopy combines the selectivity of infrared and Raman spectroscopies with surface sensitivity. It is a potent tool for studying the structure and orientation of biomolecules such as lipids, since the detected vibrational spectra are determined by the fundamental molecular structure. For example, in an investigation of the interaction between DPPG bilayers and melittin, the C-H and C-D stretching signals were measured from isotopically symmetric and asymmetric DPPG bilayers, providing real-time information on the structural perturbation of the bilayers, such as water alignment and adsorption kinetics [163].
Studies of PM with different surfactant systems
DPPC is the most common lipid in biological membranes (e.g., the pulmonary alveolar lining), and the DPPC monolayer has already been well characterized by many researchers [164]. Therefore, the DPPC monolayer is widely used as a simple model of lung surfactant in particle-surfactant studies. In addition, investigations on DPPC monolayers can lay the foundation for studies on other, more complex biological membrane systems. Hao et al. [99] measured the adsorption behavior of Fe3O4 on DPPC monolayers. With the introduction of Fe3O4, the isotherm of DPPC shifted to larger surface areas, and the extent of the shift became larger with increasing nanoparticle concentration. This observation implies that the nanoparticles were adsorbed into the monolayer. Generally, the isotherm shifts observed in most studies are similar because the nanoparticles investigated are always much larger than DPPC or other lipid molecules; the isotherms shift to larger molecular areas to account for the areas occupied by the introduced particles [165]. AFM images of DPPC with Fe3O4 showed that the nanoparticles formed granular domains on the monolayer, reducing the elasticity of the monolayer. This interaction was attributed to the electrostatic attraction between the negatively charged phosphate groups and the positively charged Fe3O4 particles.
Clinical lung surfactants have been used for comparison with pure DPPC monolayers. Infasurf is a clinical lung surfactant extracted from calf lung fluid; its composition is more complex and closer to human lung surfactant than pure DPPC. In the research conducted by Farnoud et al. [131], the Π-A isotherms of DPPC and Infasurf monolayers showed only slight changes upon the deposition of carboxyl-modified polystyrene nanoparticles. Nevertheless, the maximum surface pressure of the DPPC monolayer kept decreasing with every compression and expansion cycle, owing to the loss of DPPC from the air/water interface at the end of each cycle. Upon collapse, nucleation occurs at the phase boundary because of the curvature there, since the film cannot be compressed further without destabilization. A decrease in the energetic barrier for nucleation leads to the formation of bilayer folds and other two- to three-dimensional transformations, especially when there are plenty of nuclei [166,167]. The presence of the carboxyl-modified polystyrene could provide nucleation sites and reduce the pressure required for collapse, facilitating the collapse of the film during compression [168]. This inhibitory effect was diminished on Infasurf, and the phase behavior was restored after five cycles. This result suggests that in vivo, the particles investigated may not inhibit the function of lung surfactant. Moreover, the reduction of the maximum surface pressure of Infasurf was less than that of DPPC, indicating that less Infasurf was squeezed out, or that the replenishment of Infasurf was faster than that of pure DPPC [131]. This phenomenon is consistent with the functions of the SP-B and SP-C present in Infasurf. Other commercial pulmonary surfactants, such as Survanta [108,130,169] and Curosurf [98,101,104], have also been used to test how exogenous PM affects their phase behavior and morphology. Survanta is a natural bovine lung extract that contains a large fraction of fatty acids and triglycerides (10-20% with respect to DPPC by weight) but does not contain SP-A [133]. Curosurf is extracted from porcine lung surfactant, and 99% of its contents are lipids, with 1% SP-B and SP-C. An investigation of pure DPPC and Survanta conducted with alkylated Au NPs indicates that domain formation in both systems was affected by the presence of NPs. In DPPC, LE phase formation was promoted while LC phase formation was hindered [108]. In Survanta, SP-B and SP-C induced the formation of many small condensed domains [170]. The alkylated Au NPs accumulated in the LE phase of Survanta and merged into the hydrophobic proteins [108]. The presence of surfactant proteins also results in a higher foaming ability, which is an essential property for the surfactant to maintain its interfacial activity [111]. This foaming ability can be attenuated by NPs [109].
Besides pure DPPC and clinical pulmonary surfactants, mixtures of DPPC with other phospholipids or fatty acids have also been utilized to explore the interaction with PM. These observations can not only shed light on physiological processes but also validate the functions of each component of natural surfactant. In an investigation of surfactants comprised of different combinations of lipids (DPPC, DPPC:POPG, DPPC:DLPC, Infasurf), the POPG-containing lipids showed a significant decrease of the alkyl tilt angle in grazing incidence X-ray diffraction (GIXD) measurements when interacting with nanoparticles, meaning that anionic POPG could change the ratio of LE and LC phases owing to its fluidizing property [127]. POPG has been shown to be less ductile than DPPC at certain surface areas, owing to its smaller headgroup/chain mismatch [171]. Stachowicz-Kusnierz et al. [112] compared DPPC and POPC monolayers and a monolayer of a mixture of DPPC and POPC. The Π-A isotherms of both the binary monolayer and the pure POPC monolayer showed larger mean molecular areas than that of DPPC because of the double bond in POPC. Upon the addition of benzo[a]pyrene, the condensing effect became more obvious on the binary monolayer than on the one-component monolayers. Molecular dynamics simulations validated this difference by demonstrating that unsaturated lipids such as POPC and POPG can make surfactant more fluid and, meanwhile, might sustain a larger pressure increase at low initial pressure [79,112]. Zhao et al. [109] found that the unsaturated lipid DOPC displayed a synergistic solubilization effect with DPPC, increasing the total solubilizing ability of natural pulmonary surfactant toward PAH particles.
Monolayers have been extensively investigated, while bilayers and multilayers, which exist in a variety of locations in the lung, have been observed and studied more recently. The bilayer vesicles in the subphase of alveoli and attached to the interface are responsible for the transfer and preformation of the film at the air/water interface. Moreover, growing evidence reveals that a structure consisting of a surface monolayer with one or more bilayers underneath exists at the alveolar interface. This multilayered structure, which may derive from the multilaminations formed during the production and migration of lamellar bodies, also contributes to the mechanical properties of pulmonary surfactant [172]. It has been reported that invading particles also interact with bilayer vesicles during deposition on the alveoli [97,173]. A molecular dynamics simulation showed that the mean square displacement of dibenz[a,h]anthracene particles on a DPPC/DPPG/cholesterol bilayer (64:64:2) was larger than that on a pure DPPC bilayer, suggesting the important roles of cholesterol and DPPG in modulating the flexibility of bilayers [111]. In bronchoalveolar lavage fluid (BALF), oxide particles interacted with liposome-like structures to form large agglomerates. By contrast, in a 2:1 mixture of DPPC and dipalmitoyl phosphatidic acid (DPPA), the oxide particles interacted with bilayer vesicle structures and formed much smaller aggregates [97]. In solutions of unilamellar lipid vesicles composed of DPPC/POPG/palmitic acid (PA), Ruge et al. [174] discovered that the lipids greatly modulated the effects exerted by SP-A and SP-D, which enhanced the alveolar macrophage uptake of magnetite nanoparticles.
When a cationic surfactant such as cetyltrimethylammonium bromide (CTAB) was added to a DPPC monolayer, the buckling of the monolayer was diminished at high surface pressure. This result is due to the cohesive effect of CTAB, which enables the monolayer to maintain a flat geometry and even achieve negative surface tension at high surface pressure [175].
Studies of surfactant interacting with different PM
The effect of particle size

It is known that particle size impacts the cellular uptake of particles. In vivo studies indicated that larger NPs caused a greater load on macrophages [176] and induced more cytotoxicity and inflammation in mice [177]. Particle size also plays a crucial role in the interactions between PM and lung surfactant. In a study of poly(organosiloxane) NPs with diameters of 12 nm and 136 nm [165], the 12 nm particles did not cause significant changes in the isotherm curves of either the DPPC monolayer or the DPPC/DPPG/SP-C (80:20:0.4 mol%) lipid mixture except at very high concentrations, while the 136 nm particles diminished the coexistence of LE and LC phases in the DPPC monolayer but enhanced the transition plateau of the DPPC/DPPG/SP-C film even at low concentrations. The isotherm of the DPPC monolayer, in this case, did not shift to larger molecular areas with nanoparticles, suggesting the transfer of some lipid molecules to the particle surface. The compressibility of both the DPPC and the mixture films was increased in the presence of the 136 nm particles, resulting from the perturbed lipid packing order and decreased intermolecular interactions. These effects were stronger at higher particle concentrations. The disturbance of the interfacial packing is controlled by an interplay between steric hindrance, excluded-area effects and other interactions [178]. The large particles also attenuated the phase separation of the DPPC monolayer by decreasing the line tension, and they inhibited the vesicle insertion process, which would impair the preformation of the surfactant interfacial film [165].
Nevertheless, the strength of the particle-surfactant interaction is not linearly associated with particle size. Instead, parabolic curves are always derived, with critical diameters that cause a significant change in the biophysical properties of pulmonary surfactant, and these values vary with the type of NPs. Ku et al. [179] reported that the isotherm of a DPPC monolayer shifted to larger molecular areas when interacting with gelatin-based particles (136 nm, 197 nm, 221 nm, 236 nm, 287 nm). The extent of the shift was maximal with the 236 nm particles. The surface potential of the DPPC film was also greatly influenced by the presence of the particles; smaller particles resulted in a steeper and less delayed reorientation of the headgroup dipoles, while larger particles led to a stronger delay. Kodama et al. [130] discovered that among five different sizes, only the 20 nm particles led to the disappearance of the "squeeze-out" plateau of Survanta, which was also verified in fluorescence microscopy images showing that only the 20 nm particles caused changes in the LC domains. A molecular dynamics simulation was conducted by Curtis et al. [173] to explore how particles of different sizes interact with DPPC bilayers. The results indicated that hydrophilic particles with diameters from 2 nm to 25 nm became wrapped in the DPPC bilayer, while smaller particles with a 1 nm diameter were embedded in the bilayer surface (Table 3). Equation (21) denotes how the particle radius influences the wrapping energy of a particle in a lipid membrane: ΔE = (2ϵ/R² + σ - u_ad)A_c, where ϵ is the membrane bending rigidity, R is the radius of the particle, σ is the membrane tension, A_c is the contact area, and u_ad is the adhesion energy per unit area [180]. Kodama et al. [130] pointed out that the effect of particle size on the particle-surfactant interaction might be ambiguous, since the observations would also partly result from differences in the associated physical properties (e.g., total surface area and specific surface area) and the chemical nature of the particle-molecule interaction. The total-surface-area effect has already been evaluated and was excluded from the factors impacting the phase transition of the surfactant. However, research on the effects of specific surface area and the nature of the particle-molecule interaction is limited. In general, particle size can only be assessed as a reference factor, rather than an independent factor that determines the particle-surfactant interaction.
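To see how this energy balance produces a critical radius below which wrapping is unfavorable, the short sketch below evaluates the wrapping energy per unit contact area, e(R) = 2ϵ/R² + σ - u_ad, for several particle radii. The bending rigidity, tension, and adhesion values are assumptions chosen only for illustration, not parameters taken from the cited simulation.

```python
# Wrapping energy per unit contact area: e(R) = 2*eps/R**2 + sigma - u_ad.
# Wrapping is favorable where e(R) < 0; for small R the bending term
# 2*eps/R**2 dominates and wrapping is suppressed, consistent with 1 nm
# particles embedding in the bilayer rather than being wrapped.

KT = 4.1e-21          # thermal energy at room temperature, J

eps = 20 * KT         # membrane bending rigidity, J (assumed)
sigma = 1.0e-5        # membrane tension, J/m^2 (assumed)
u_ad = 5.0e-2         # adhesion energy per unit area, J/m^2 (assumed)

for radius_nm in (1, 2, 5, 10, 25):
    R = radius_nm * 1e-9
    e = 2 * eps / R**2 + sigma - u_ad
    state = "wrapped" if e < 0 else "not wrapped"
    print(f"R = {radius_nm:>2} nm: e = {e:+.2e} J/m^2 -> {state}")
```

With these assumed values the crossover falls between 1 nm and 2 nm, mirroring the qualitative trend reported by Curtis et al. [173].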
The effect of surface charge
Surface charge is a key factor determining the toxicity of NPs. A study of functionalized (e.g., guanidinium-, acetylated-, zwitterionic-, hydroxylated-, PEGylated-, carboxylated- and sulfated-) polystyrene particles proved that lung inflammation is significantly influenced by surface charge [181]. Regarding the interactions between particles and pulmonary surfactant, electrostatic force has been considered one of the main contributors [96,99,182]. Electrostatic force is a much stronger and longer-range force than other forces [183]. According to Coulomb's law, the electrostatic force K_E between two charged particles is K_E = Qq/(4πε_0ε d²) d̂, where Q and q are the charges of the particles, d̂ is the unit vector from one particle to the other, d is the distance between the particles, ε_0 is the vacuum permittivity with an approximate value of 8.85 × 10⁻¹² C² N⁻¹ m⁻², and ε is the dielectric constant of the medium. The electrostatic force is attractive if Q and q are of opposite signs, and otherwise repulsive. When the aqueous environment (e.g., the subphase) is taken into consideration, the effect of surface charge becomes more complex. According to the Derjaguin-Landau-Verwey-Overbeek (DLVO) theory, the interaction between charged spherical PM and a surfactant surface is influenced by both the van der Waals force and the double-layer force, which derives from the electric double layer formed in aqueous solutions [184-186]. An equation describing the double-layer force between a charged sphere and a flat surface is F(D) = (4πRσ_Tσ_s/ε_lε_0κ) e^(-κD), where R is the radius of the sphere, ε_l and ε_0 are the dielectric constants of the liquid and vacuum, respectively, κ is the inverse of the Debye length, and σ_T and σ_s are the surface charge densities of the sphere and the surface, respectively [187]. The double-layer force acts as a relatively long-range repulsion. The net DLVO interaction has a high peak, known as the energy barrier, at high charge density and low electrolyte concentration. In concentrated electrolyte solutions, a secondary minimum appears at some critical separation, while the primary minimum is present when the interacting surfaces are in contact. When the surface charge densities are high in solutions with dilute electrolytes, the surfaces repel each other as the double-layer force dominates. When the charge densities are below a certain value or the electrolyte concentration is higher than the critical coagulation concentration, the energy barrier falls below zero, giving rise to rapid coagulation [188].

Fig. 6 illustrates a DLVO model for particle-particle and particle-surface systems with like charge signs as an example; the double-layer force is repulsive in this case. Besides the van der Waals and double-layer force curves, a net DLVO interaction curve is also plotted. For a PM-lung surfactant surface system, the double-layer force may be repulsive or attractive, depending on the charge sign and charge density of the PM and the composition of the lung surfactant. The surface charge densities and electrolyte concentrations determine the magnitude of the energy barrier, reflecting the stability of the particle-particle dispersion and the particle-surfactant film system in solution. Negatively charged silica particles form lipid-particle complexes with the positively charged ammonium groups of DPPC, changing the dipole moment of the DPPC molecules and thereby affecting the molecular packing; thus, the nucleation of LC domains is disrupted and the LE-LC plateau on the isotherm becomes flat [189]. Another type of negatively charged particle, carboxyl-modified polystyrene, caused a partial collapse of the DPPC monolayer and changed the proportion of ordered domains, yielding a more compact DPPC monolayer. The polystyrene particles also increased the hysteresis area during compression-expansion cycles. These phenomena demonstrated the adsorption of the polystyrene particles to DPPC during expansion and the ejection of the particles into the subphase during compression [134]. The electrostatic interaction also greatly reduces the translocation capability of particles, and this suppression increases with surface charge density. In a molecular dynamics simulation, the charged particles were partially wrapped in DPPC instead of penetrating the film, facilitating the structural change and inhibiting the phase change of the surfactant film (Fig. 7C, D, E) [190].
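To make the competition between these two contributions concrete, the minimal sketch below combines the sphere-flat van der Waals attraction W_vdW(D) = -A·R/(6D) with the double-layer energy obtained by integrating the force expression above over separation, W_DL(D) = 4πRσ_Tσ_s e^(-κD)/(ε_lε_0κ²), and locates the resulting energy barrier numerically. All parameter values are illustrative assumptions, not data from any cited study.

```python
import math

EPS0 = 8.85e-12       # vacuum permittivity, C^2 N^-1 m^-2
EPS_L = 78.5          # relative dielectric constant of water

A_H = 1.0e-20         # Hamaker constant, J (assumed)
R = 50e-9             # particle radius, m (assumed)
KAPPA = 1.0 / 10e-9   # inverse Debye length, 1/m (10 nm Debye length; assumed)
SIG_T = -2e-3         # sphere surface charge density, C/m^2 (assumed)
SIG_S = -2e-3         # flat-surface charge density, C/m^2 (assumed)

def w_vdw(d):
    """Sphere-flat van der Waals energy, J."""
    return -A_H * R / (6.0 * d)

def w_dl(d):
    """Sphere-flat double-layer energy (weak-overlap form), J."""
    return (4.0 * math.pi * R * SIG_T * SIG_S
            * math.exp(-KAPPA * d) / (EPS_L * EPS0 * KAPPA**2))

# Scan separations and locate the energy barrier of the net DLVO curve.
best_d, best_w = max(
    ((d, w_vdw(d) + w_dl(d)) for d in (i * 1e-10 for i in range(5, 500))),
    key=lambda p: p[1],
)
print(f"Energy barrier ~{best_w / 4.1e-21:.1f} kT at D ~{best_d * 1e9:.1f} nm")
```

With like-signed charges, as here, the double-layer term is repulsive and a barrier of tens of kT appears at nanometer separations; lowering the charge densities or shortening the Debye length collapses the barrier, consistent with the rapid-coagulation regime described above.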
Positively and negatively charged particles do not behave very differently toward pure DPPC because of the zwitterionic character of the molecule. However, because of the presence of other cationic and anionic lipids and proteins in natural pulmonary surfactant, the effect of particle charge becomes complex (Fig. 7A). Negatively charged polylactide nanoparticles were found to be a more potent inhibitor of the surface activity of Curosurf than positively charged nanoparticles [104]. Hu et al. [191] confirmed this trend in a molecular dynamics simulation. The reason is that the positively charged SP-B 1-25 peptide can adsorb onto the anionic particles even when the anionic particles are trapped in the film, leading to protein denaturation and a conformational change of the surfactant. Another positively charged protein, SP-C, also contributes to this effect [92,104]. However, in investigations of aluminum oxide, silicon dioxide, and latex nanoparticles, the negatively charged latex and silica did not interact directly or strongly with either synthetic (mixtures of DPPC, POPG, etc.) or exogenous surfactant (e.g., Curosurf), while the positively charged aluminum oxide, silica and latex formed aggregates with surfactant vesicles, and the interaction strength increased with surface charge density. The particle-vesicle aggregates can last for weeks [101]. This affinity of positive particles toward lung surfactant is ascribed to the net negative charge of the surfactant. Behyan et al. [127] reported that both cationic and anionic silica particles shifted the isotherms of DPPC:POPG to larger molecular areas at low concentrations in the subphase, while only cationic particles had an impact on the Infasurf isotherm at high surface pressure. However, although GIXD and X-ray reflectivity studies revealed that anionic silica nanoparticles would interact with the lipid headgroups and change the organization and orientation of the alkyl chains of the surfactant, the effect was very small. By contrast, cationic nanoparticles caused a large reduction of the chain tilt angle in the condensed phase, which would affect the LE-LC phase ratio and thus change the mechanical properties of the film (Fig. 7B). Behyan's finding was believed to be strongly associated with the presence of anionic POPG. The interactions caused by the introduction of anionic particles observed in some other studies arose from the electrostatic repulsion between the anionic phospholipids and the particles [192].
In general, the surface charge of particles enhances their interaction with lipids. For a pure DPPC monolayer, because of its zwitterionic character, the charge sign of particles does not make much difference in their translocation behavior [190]. The behavior is quite different in natural pulmonary surfactant with the presence of other lipids and surfactant proteins, where positively charged particles consistently cause more pronounced effects on the structural and functional properties of the surfactant. The strength of the interaction is positively related to surface charge density. However, Kodama et al. claimed that the effect of surface charge on the particle-surfactant interaction is not as powerful as that of particle size [130].
The effect of particle hydrophobicity
Besides electrostatic attraction, hydrophobicity also contributes to the interaction between PM and lung surfactant. Hydrophobic interaction is strongly temperature-dependent since it is entropy-driven. The entropy increases (ΔS > 0) when hydrophobic particles associate with each other, while the enthalpy also increases (ΔH > 0) as a result of the breaking of hydrogen bonds. According to the Gibbs free energy relation, ΔG = ΔH - TΔS; if ΔH is smaller than TΔS, ΔG is negative, which indicates spontaneous hydrophobic assembly. There is no widely accepted theoretical formula for the hydrophobic force or energy, because the interactions between macroscopic hydrophobic surfaces and those between hydrophobic nanoparticles or molecules are not quantitatively equal. For example, the hydrophobic interaction between two solid surfaces can be described by Eq. (26) [193], F(D) = -λC_0 e^(-D/D_HB), where λ is the curvature of the interacting surfaces, C_0 is an empirical parameter, D is the equilibrium separation distance, and D_HB is the hydrophobic decay length. An empirical formula for the hydrophobic energy is E = γ(a + a_0²/a), where γ is the interfacial tension, a is the area per molecule, and a_0 is the optimum area per molecule [194]. Additionally, the presence of hydrophilic headgroups, tails and other moieties, as well as the local surface geometry, also directly affects the hydrophobic interactions; therefore, the hydrophobic force is non-additive [195]. As a matter of fact, the long-range force observed between two hydrophobic objects is a combination of several forces [196]. The hydrophobic force predominates over electrostatic and steric repulsion when the distance between the particles decreases to the decay length [194]. The calculated energy curves in Fig. 8 indicate that the hydrophobic interaction is a short-range force.
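As a rough numerical illustration of this crossover, the sketch below compares an exponentially decaying hydrophobic attraction with a longer-ranged, exponentially decaying double-layer repulsion. All prefactors and decay lengths are assumed values, chosen only to reproduce the qualitative behavior of the energy curves in Fig. 8.

```python
import math

D_HB = 1.0e-9       # hydrophobic decay length, m (~1 nm; assumed)
C_HYD = 5.0e-3      # hydrophobic energy prefactor, J/m^2 (assumed)
C_EDL = 2.0e-3      # double-layer energy prefactor, J/m^2 (assumed)
DEBYE = 3.0e-9      # Debye length, m (assumed)

def w_hydrophobic(d):
    """Hydrophobic attraction per unit area (exponential decay)."""
    return -C_HYD * math.exp(-d / D_HB)

def w_double_layer(d):
    """Double-layer repulsion per unit area (exponential decay)."""
    return C_EDL * math.exp(-d / DEBYE)

for d_nm in (0.5, 1.0, 2.0, 4.0, 8.0):
    d = d_nm * 1e-9
    total = w_hydrophobic(d) + w_double_layer(d)
    sign = "attractive" if total < 0 else "repulsive"
    print(f"D = {d_nm:>3} nm: W = {total:+.2e} J/m^2 ({sign})")
# Attraction wins below ~1-2 nm; the longer-ranged repulsion dominates farther out.
```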
Hydrophobic particles can nucleate the collapse of a compressed DPPC monolayer, causing an irreversible decrease of the collapse pressure, as the hydrophobic attraction leads to the formation of particle-DPPC complexes and stabilizes the particle-laden DPPC monolayer against expelling the particles [189,197], which contrasts with the observations for hydrophilic silica particles [178]. This trend is also supported by Zhang et al. [106], who found that the aggregates formed by the introduction of hydrophobic Au NPs, which disturbed the domain size of the DPPC monolayer, could stay on the surface of the pulmonary alveoli for a long time. This is owing to the hydrophobic interaction between the Au particles and the surfactant, which decreased the compressibility and inhibited the phase transition of the surfactant. The retention rate of hydrophobic particles at the Infasurf monolayer is also higher than that of hydrophilic particles (Fig. 9A) [198]. It is the acyl groups [165] as well as the long hydrophobic tails [199] of DPPC molecules that interact with hydrophobic particles; DPPC molecules are therefore adsorbed onto the particles from the air/water interface, changing the structure of the surfactant monolayer. Konduru et al. [97] discovered that the lipid adsorption ability of hydrophobic cerium oxide was much stronger than that of other, hydrophilic particles when incubated in different surfactant systems including DPPC, DPPC/DPPA, and rat bronchoalveolar lavage fluid (BALF) (Fig. 9B).
The Π-A isotherm of DPPC was significantly and horizontally shifted to larger molecular surface areas by hydrophobic montmorillonite and silica particles owing to the formation of hydrophobic complexes upon incorporation of the particles, which led to excluded-area effects [129,197]. Hydrophilic halloysite and bentonite NPs shifted the Π-A isotherm to smaller molecular areas, resulting from the decreased distance between DPPC molecules [129]. In Infasurf, the effect of particles on the isotherm is somewhat different: both hydrophilic and hydrophobic particles shifted the curve to smaller surface areas, increasing the compressibility of the film, decreasing its ability to reduce surface tension upon compression and inhibiting the compression-expansion activity (Fig. 10A, B) [191,198]. It is proposed that hydrophilic particles can bind to the polar ends of DPPC molecules, leaving the hydrocarbon tails pointing outward. This attraction also causes the adsorption of lipid molecules onto hydrophilic particles, and the lipid-particle complexes further aggregate and sink [129], although this adsorption is not as strong as that driven by hydrophobic interactions [97]. When incubated in a mixture of DPPC and DPPA, hydrophobic particles inserted into lipid vesicles and were associated with the formation of multilamellar lipid bilayers, while the relatively hydrophilic ones stayed outside the lipid bilayer vesicles and were related to the formation of unilamellar lipid bilayers (Fig. 9B) [97]. Translocation behaviors studied by molecular dynamics simulations showed that both hydrophilic and hydrophobic particles can transport through the surfactant, but in different manners: hydrophilic particles directly penetrated the monolayer, while hydrophobic particles were wrapped by the surfactant and passed through the DPPC monolayer [200]. For a DPPC bilayer, hydrophobic particles directly penetrated the membrane and were embedded within the inner hydrophobic core of the bilayer, while hydrophilic particles became wrapped by the lipid bilayers [173]. The combined in vitro and in silico research conducted by Hu et al. [191] on Infasurf suggests that hydrophobic polystyrene particles caused the formation of high protrusions as the film was compressed, resulting from the encapsulation of the particles, while hydrophilic hydroxyapatite particles translocated quickly across the film. Hu et al. also revealed that the inhibition induced by the hydrophobic polystyrene particles on Infasurf is faster than that caused by the hydrophilic hydroxyapatite particles. However, particle hydrophobicity does not make much difference in the interaction with the surfactant proteins SP-A and SP-D. These two surfactant proteins are selectively adsorbed onto the particles and interplay with the lipids; hence, the alveolar macrophage uptake of hydrophobic and hydrophilic particles in unilamellar vesicles occurs at comparable levels [174].

Fig. 8. The overall force law of interaction energies and individual contributions vs. separation distance for trans azobenzene trimethylammonium bromide (azoTAB) bilayers. The hydrophobic energy dominates as the bilayer-bilayer distance decreases to about 1 nm, indicating that the force between hydrophobic objects becomes a pure hydrophobic interaction at short range [194]. Republished with permission from Donaldson et al. [194], Proceedings of the National Academy of Sciences, 2011; 108: 15699.

Fig. 9. (A) AFM images of Infasurf monolayers with NPs at different surface pressures (20, 30, 40, and 50 mN/m). P02A: acid-terminated poly(D,L-lactide-co-glycolide) (PLGA); P103E: ester-terminated PLGA; PST: polystyrene. Hydrophobicity increases in the order P02A (least hydrophobic), P103E (intermediate), PST (most hydrophobic). The AFM images at 20, 30, and 40 mN/m are 50 μm × 50 μm with a z range of 5 nm; the images at 50 mN/m are 20 μm × 20 μm, depicted in 3D, with z ranges of 20 nm (Infasurf), 250 nm (Infasurf + P02A), 350 nm (Infasurf + P103E), and 120 nm (Infasurf + PST). NPs are indicated with white arrows; the presence of NPs at the surface is positively related to hydrophobicity. After the monolayer-to-multilayer transition, all three types of NPs are found at the surface [198]. Reprinted with permission from Valle et al. [198]. Copyright (2014) American Chemical Society. (B) Cryo-TEM images of a 2:1 mixture of DPPC:DPPA with CeO2 (a, b) and BaSO4 (c, d) particles [97]. The NPs interacted with lipid vesicles with polyhedral shapes; onion-like multilamellar vesicle structures were present with CeO2, while unilamellar vesicles were observed with BaSO4. Reprinted with permission from Konduru et al. [97]. Copyright (2018) American Chemical Society.
Although previous studies on natural pulmonary surfactants such as Survanta [199] and BALF [97] indicated that some lipids can be adsorbed onto hydrophobic particles via strong hydrophobic interactions, there was no significant change in the domain formation and structure of Survanta after the introduction of particles in Tatur's experiments, which was speculated to be due to the merging of the particles into the hydrophobic surfactant proteins present in the natural surfactant [108]. However, Valle et al. [198] observed that the size of the phospholipid domains of Infasurf was reduced by both hydrophilic and hydrophobic NPs, along with a disturbance of the conformational monolayer-to-multilayer transition. This difference is due to particle size: in the former study, the particles, with a core diameter of 2 nm, were comparable in size to SP-B and SP-C and easily accumulated with these hydrophobic proteins in the LE phase, while the particles in the latter experiment were 260, 350, and 95 nm, which led to the formation of protrusions on the monolayer. Both hydrophobicity and surface charge play important roles in the phase behavior and structure of pulmonary surfactant, whereas the disrupting effect of surface charge is related to the electrostatic interaction, which influences the orientation of DPPC molecules. Thus, surface charge generally causes stronger effects on the biophysical properties and domain structure of DPPC (Fig. 10C) [189]. Nevertheless, the amount and type of lipids adsorbed onto the particles are determined by hydrophobicity rather than surface charge [97].
The effect of particle shape
Kondej et al. [129] noticed that particle shape also leads to differences in the phase behavior of DPPC. Plate-like bentonites and halloysite nanotubes showed different impacts on the phase behavior of DPPC, even though both are hydrophilic and surface-inactive. The behaviors of the bentonite and halloysite particles, compared with those of spherical silica particles, suggest that the squeezing-out of particle-lipid complexes during compression was amplified for the non-spherical particles [129,189]. A systematic molecular dynamics study was conducted to validate the influence of particle shape. The simulation showed that hydrophilic particles penetrated the DPPC monolayer, but rod particles seldom disrupted the packing structure of DPPC, while barrel and disk particles caused an obvious change in the structure. Barrel particles led to the most obvious disturbance because of their largest contact area, while rod particles showed the smallest contact area with DPPC molecules and induced the lowest influence. Shape also affects the penetration ability, as the rod-like particles had the highest penetration ability. This simulation suggests that the length-to-diameter aspect ratio of cylindrical NPs is the key parameter determining the penetration ability of particles and the structural disturbance of DPPC monolayers [201]. Kondej et al. [169] examined the influence of nanotubes and nanohorns; the results showed that nanomaterials with higher specific surface areas induced greater disturbance of the lung surfactant. Capillary force is speculated to be one of the reasons why particle shape influences particle dynamics and the disruption of lung surfactant. The capillary force is higher for particles with sharper edges because of the pinning of the air-water interface at the edges, especially for cylindrical or cubical shapes [202,203].

Fig. 10. (A, B) Surfactant activity data for Infasurf with NPs, obtained with a constrained drop surfactometer at 37°C and cycled at a physiologically relevant rate (3 s/cycle). P02A: acid-terminated PLGA; P103E: ester-terminated PLGA; PST: polystyrene. Hydrophobicity increases in the order P02A (least hydrophobic), P103E (intermediate), PST (most hydrophobic) [198]. Reprinted with permission from Valle et al. [198]. Copyright (2014) American Chemical Society. (C) BAM images (311 μm × 418 μm) of pure DPPC on a pure water subphase, and of DPPC on SiO2 (1 wt%) and carbon black (CB) dispersions at Π = 7.5 mN/m. Compared with CB, the stronger distorting effect of SiO2 on the domain size and shape arises mainly from electrostatic attraction [189]. Reprinted with permission from Guzmán et al. [189]. Copyright (2011) American Chemical Society.
Overall, the strength of the capillary force increases with the length of the air-water-solid contact line [203]. Besides, shape anisotropy and the initial orientation of particles also contribute to the shape effects on the particle-lipid interaction [204].
The effect of adsorbates, ambient dust and polymer composites
PM carries many different kinds of chemicals. Among the most toxic components of PM are polycyclic aromatic hydrocarbons (PAHs), which have been proved to be carcinogenic to organisms. Several recent papers have discussed the interactions of some PAHs with pulmonary surfactant. Zhao et al. [109] revealed that natural surfactant increased the solubilization of anthracene significantly, suppressing the adsorption of anthracene onto nanoparticles; the partitioning of anthracene into pulmonary surfactant was thereby enhanced, which led to increased toxicity of the inhaled particles. Zhao et al. [205] confirmed that the mixed phospholipids in natural surfactant were responsible for the solubilization of PAHs. A molecular dynamics simulation [112] and experimental results [76] indicated that benzo[a]pyrene causes a condensing effect on the surfactant monolayer and reduces the hydration of the monolayer, leading to decreased fluidity. Benzo[a]pyrene also destabilizes the DPPC/DPPG monolayer by decreasing the surface pressure. Nevertheless, during compression, benzo[a]pyrene may be expelled into the water subphase owing to the presence of DPPG. These results suggest that the accumulation of benzo[a]pyrene in surfactant monolayers not only impairs the surfactant function but also attenuates the barrier role of the surfactant, giving easier access to the fluid underneath [76]. Another simulation of dibenz[a,h]anthracene and its metabolite 3,4-diol-1,2-epoxide interacting with lung surfactant bilayers showed that they formed aggregates in the lipids and the subphase, as well as at the interface. The metabolite was more likely to diffuse through the film into the subphase [111].
Besides pure synthetic compounds, complex PM derived from ambient dust, including tobacco smoke, diesel exhaust dust, and biofuel combustion dust, has also been investigated with lung surfactant and its models. Particle emissions from biofuel combustion were shown to deposit in alveoli, increasing the compressibility of lung surfactant and lowering the surface pressure, thus destabilizing the monolayer. Meanwhile, the particles can also predispose the alveoli to collapse, leading to respiratory distress [114]. Sosnowski et al. [110] revealed that benzo[a]pyrene released from soot particles interacted with the hydrophobic part of lung phospholipids and inhibited the dynamic and mechanical properties of lung surfactant; the pulmonary clearance rate was also reduced. Kendall [115] reported that large amounts of DPPC and amino acids in bronchoalveolar lavage were adsorbed onto urban PM2.5 particles, which would sequester the lung surfactant. Electronic cigarettes are used as an alternative to cigarettes, but high concentrations of the major carriers glycerol and propylene glycol may induce a decrease in the surface tension and hysteresis of the surfactant during compression; this effect would be more obvious in people with pre-existing lung disease [206]. The adverse effects caused by ambient PM are not easily overcome by lung surfactant, as the inhibition gets stronger with time, leading to alveolar atelectasis [114]. Moreover, the adsorption of surfactant components onto PM would interfere with innate immunity [115].
Polymeric materials are used in industry in huge amounts, and as a result, ambient PM contains increasing amounts of polymeric particles. Moreover, as polymers are a popular platform in nanomedicine, the potential toxicity of polymeric materials has become a concern. Chitin [poly(β-(1-4)-N-acetyl-D-glucosamine)] is a biopolymer that has been utilized in the food, cosmetics and pharmaceutical industries owing to its biocompatibility. Recent papers showed that chitin can affect the structure and function of lung surfactant through electrostatic attraction. Chitin can adsorb the phospholipids, hindering the formation of the LC phase. Under higher pressure, the chitin particles were squeezed out along with some adsorbed lipids, thus disturbing the order of the monolayer [207]. Poly(styrene) and poly(lactide) are two common polymers that have been applied in nanomedicines. However, these polymers often provoke a decrease in lung surfactant activity through aggregation and protein depletion [131,208]. These adverse effects can be mitigated with coatings. For example, poloxamer and bioinspired poly(2-methacryloyloxyethyl phosphorylcholine) coatings can prevent particle aggregation and the adsorption of surfactant onto the particles by steric shielding, so the impairment of the biophysical functions can be significantly attenuated [105,208,209]. Exploring the interactions between various polymers and pulmonary surfactant could inspire the development of lung drug delivery platforms with high biocompatibility and low toxicity.
Health effects associated with PM invasion
When PM readily interacts with pulmonary surfactant and deposits on it, the physiological properties of the surfactant are influenced in addition to its physical properties. PM penetrates through the surfactant into lung epithelial cells and causes further physiological disturbance to the lung and other organs [210]. It is speculated that bone marrow can be stimulated by PM deposition, triggering a systemic inflammatory response and leading to chronic diseases such as cardiovascular disease, because cytokines have been detected in the circulation in response to PM [211]. It is also hypothesized that nanoparticles are able to penetrate through the epithelium into the circulation directly and cause disease, since inhaled particles have been found at the sites of vascular disease [212].
In addition to PM and the toxic chemicals adsorbed onto it, another type of adsorbate on PM, namely microorganisms, can also strongly affect human health. A two-day exposure to high levels of PM2.5/PM10 significantly changed the composition of the pharyngeal microbiota [213]. Compared with pre-smog swabs, the relative abundance of 38 phyla, including Firmicutes, Fusobacteria, and Actinobacteria, increased in post-smog swabs. Among them, Leptotrichia, Corynebacterium and Veillonella were the top three genera, with more than 20,000 reads each. In addition, 11 new phyla were detected in post-smog swabs. This change may pose unexpected risks to humans, especially respiratory infections. Notably, the relative abundance of two respiratory pathogens, H. influenzae and M. catarrhalis, increased by 240% and 150%, respectively, which can increase the morbidity and mortality of diseases such as pneumonia. Influenza A virus significantly changed the surfactant lipid metabolism of alveolar type II cells, leading to surfactant dysfunction [214]. The coronavirus SARS-CoV-2, which triggered the current pandemic, can damage both type I and type II alveolar cells, reducing the production and secretion of pulmonary surfactant to the alveolar interface and inhibiting gas exchange between the blood and the alveoli. Middle East respiratory syndrome (MERS)-CoV can also infect type I cells [215].
Physiological effects on pulmonary surfactant
Schürch et al. [216] reported in 1990 that latex particles deposited in hamster lungs were submerged in the subphase and coated with an osmiophilic film. A direct physiological change of pulmonary surfactant upon deposition of PM is an elevation in the amount of surfactant, which can lead to surfactant dysfunction. Murphy et al. [217] discovered that crystalline quartz led to an increase of extracellular surfactant, resulting from an increased number or secretion of type II cells. An increase in individual surfactant components was also observed in rat lungs after inhalation of fly ash [218]. In that study, total phospholipids, phosphatidylcholine (PC) and phosphatidylethanolamine (PE) significantly increased in the lungs. Meanwhile, PC, especially DPPC in lung surfactant, and microsomes were much higher upon PM exposure, owing to the CDP-choline pathway and the N-methylation of PE in lung cells. Another in vivo study on rats exposed repeatedly to diesel exhaust also confirmed the overproduction of phospholipids in pulmonary surfactant, indicating a risk of chronic lung injury [219]. The increased production of certain surfactant components is a defense mechanism to protect the lung from further injury and avoid the alveolar collapse caused by PM [218]. When the pulmonary macrophages fail to clear the increased surfactant, alveolar lipoproteinosis occurs [220], leading to shortness of breath.
Microbes adsorbed onto PM also induce changes in the composition of lung surfactant. For example, the Pseudomonas aeruginosa flagellum caused the production of exoproteases that degrade SP-A [221], making the surfactant and the alveoli susceptible to infection. Protease IV secreted by Pseudomonas aeruginosa was proved to degrade SP-A, SP-B and SP-D, changing the surface tension-reducing function of the surfactant in addition to reducing its host defense function [222]. The decrease in SP-B may also change the permeability of the surfactant [223], making the membrane more susceptible to PM and its adsorbates. A mucoid strain of Pseudomonas aeruginosa decreased the content of DPPC by reducing the mRNA synthesis of CTP:phosphocholine cytidylyltransferase, a key enzyme for DPPC synthesis [224], resulting in reduced surfactant function.
Inflammation
It is known that specific PM can deposit and translocate within pulmonary surfactant. Many studies show that both PM and the adsorbates (e.g., microorganisms, metals) on PM can lead to an inflammatory response. When PM penetrates through the surfactant, alveolar macrophages can trigger phagocytosis to clear the PM, recruiting inflammatory cells such as leukocytes and neutrophils and releasing inflammatory mediators that induce inflammation. Furthermore, the disturbance of the film structures may cause mechanical damage to the surfactant and epithelial cells, which may lead to inflammation as well. The acute effect of PM on living animals and humans mostly manifests first as lung inflammation [225]. As the immune system's response to harmful stimuli, inflammation activates cellular and molecular events to remove the stimuli and initiate healing [226]. Inflammation is exhibited as an increase of free cells and a high proportion of neutrophils in lavage. An uncontrolled inflammatory response may give rise to chronic diseases.
Crystalline quartz can induce surface inflammation, increase lung permeability and cause type II cells to release their plasma membrane components, resulting in progressive damage, while the damage triggered by amorphous ultrafine silica regressed [217]. In that study, ultrafine/fine carbon black did not change lung permeability or induce inflammation. Pan et al. [227] also proved recently that pure carbon black did not induce inflammation in human bronchial cells. Nevertheless, when carbon nanoparticles formed adducts with Pb2+ and were incubated with human lung cells, the expression of a novel long noncoding RNA responsible for the regulation of inflammation was suppressed. Pb2+ does not induce inflammation individually, just like Cr(VI), but the co-existence of these two species in PM2.5 has been revealed to cause cytotoxicity in lung cells [228]. Traffic-related PM is also reported to induce inflammation in both lymphocytes and lung cells [229], and the induction effect is stronger for PM with higher PAH levels [230]. Additionally, in vitro low-dose exposure of rat lung to silica particles for 24 h elicited an inflammatory response [231], while long-term, repeated, high-concentration in vivo diesel exhaust exposure of rats produced chronic inflammation [219].
Oxidative Stress
Reactive oxygen species (ROS) are highly reactive and unstable. The accumulation of ROS may lead to the oxidation of cellular components when the level is beyond the elimination ability of antioxidants [232], giving rise to oxidative stress. Oxidative stress may cause many chronic diseases, such as cancer, diabetes, cardiovascular diseases, and other degenerative diseases [233]. Exposure to PM can lead to oxidative damage of lipids, proteins, and DNA. When PM readily interacts with lung surfactant and penetrates through the membrane into the interstitium and epithelial cells, the surfaces of toxic particles (e.g., metallic particles acting as catalysts) and the adsorbates, including transition metals (copper, iron, manganese, etc.) and PAHs, can generate free radicals that cause oxidative stress. Additionally, the mechanical damage brought about by PM within the cells can also trigger oxidative stress [234].
It has been found that the presence of traffic-related PM led to an increase of ROS generation and oxidative DNA damage in human lymphocytes, alveolar epithelial adenocarcinoma cells [229], and type II lung epithelial A549 cells [235]. The extent of oxidative stress, cytotoxicity and epithelial activation in pulmonary cells induced by diesel exhaust particles increased with the content of PAHs [230]. Soltani et al. [236] investigated the influence of TiO2 and Fe2O3 micro- and nanoparticles on lung and marrow tissues, finding that the particles increased the baseline level of lipid oxidation and antioxidant enzyme activity. The toxicity caused by the TiO2 nanoparticles was more serious than that caused by the microparticles and the Fe2O3 nanoparticles. Acute induction of oxidative damage has also been observed in vivo: superoxide dismutase (SOD) activity in rats was reduced after exposure to ambient PM, indicating the oxidative stress caused by the particles [237].
Besides lung cells, it has also been reported that PM may aggravate oxidative stress in the kidney. In vitro exposure of human kidney cells to traffic-related particles indicated that the particles reduced the viability of the cells, increased mitochondrial ROS and decreased the mitochondrial membrane potential, which can lead to kidney disease [238].
Other Adverse Health Effects of PM
In addition to inflammation and oxidative damage responses, PM is also associated with many other toxicities and diseases. Silver nanoparticles were discovered to induce autophagy and apoptosis in mouse embryonic fibroblast cells [239] and cytotoxicity in human lung cells [240]. A one-month in vivo study on mice indicated that PM can initiate pulmonary fibrosis, since lung inflammation and incipient fibrosis symptoms were discovered after the exposure [241]. For susceptible rats with hypertension, the heart rate and heart rate variability were found to be linked to industrial exhaust [242]. Short-term in vivo studies of humans revealed that ambient PM2.5 was related to low resting cerebrovascular flow velocity and high resting cerebrovascular resistance, suggesting that endothelial function in the cerebral vasculature can be harmed by PM2.5 [243].
The pathological signs exhibited by patients with chronic obstructive pulmonary disease, such as chronic inflammation and increased mucus and phospholipid production, would be aggravated with time and PM concentration. However, in a chronic study on young, normal rats, the pathological signs remained stable after 12-18 months of exposure to PM at medium and high concentrations, implying that young, normal rats are resistant to chronic diesel exhaust exposure [219]. Recent papers discussing long-term in vivo studies have presented new physiological responses to PM exposure. Lepeule et al. [244] reported that traffic particles (mostly carbon black) caused an additional rate of decline in forced vital capacity and forced expiratory volume, indicating a lower baseline lung function; the particles also accelerated the decline of lung function in the elderly. PM can affect the function of other organs as well. Liang et al. [245] exposed rats to PM2.5 once every three days for one month; the rats then developed vascular endothelial injury and inflammation, while fibrin thrombi and bleeding occurred in the lung tissue. All of these responses suggest that PM2.5 can eventually lead to disseminated intravascular coagulation. Long-term gasoline vehicle exhaust exposure has been proved to induce erectile dysfunction in rats [246].
Transgenic mice have been tested with PM samples [247]. After a 4 h exposure, it was found that the PM induced an increase of CYP1A1 (a gene regulated by the aryl hydrocarbon receptor), which indicated a carcinogenic effect. Besides, PM led to an increase of endothelin-1, reflecting endothelial dysfunction, and an increase of metallothionein-II, resulting in reduced scavenging of metal toxicity. The exposure to PM did not cause a widespread change in gene expression, which is consistent with the results of the study on silica particles [231]. PM2.5 was only reported to cause oxidative damage to DNA in humans [237]. However, a recent study exposing zebrafish to carbon nanoparticles showed a disturbance of the DNA methylation of genes in heart tissue, revealing the dysregulated gene expression caused by PM [248].
Summary and Perspectives
It has been revealed that exposure to PM significantly impacts pulmonary surfactant and human health by altering the physiological, biophysical and morphological properties of lung surfactant. Both experimental investigations and molecular dynamics simulations have proved that different types of particles, including synthetic and ambient PM, are not equally toxic to pulmonary surfactant and health. It has been found that the strength of the particle-surfactant interaction increases with the extent of hydrophobicity and the surface charge density, while each type of particle bears its own critical particle size that shows the strongest impact on the surfactant. Particle shape and the chemical nature of PM also influence the phase behavior and morphology of pulmonary surfactant.
Many different characterization techniques and methodologies have been used to investigate the PM-surfactant interaction. The interpretability of interdisciplinary approaches greatly promotes a comprehensive understanding of the effects of PM. Nevertheless, quantitative analyses of, for example, the interaction forces between particles and lung surfactant, and the surfactant domain changes upon contact with particles, are seldom presented. Future studies of the associated interfacial forces can be carried out with nanomechanical tools such as the atomic force microscope and the surface forces apparatus. The effect of PM on morphology is generally investigated via visual characterization (e.g., AFM, BAM). Further investigation utilizing fluorescence microscopy can provide useful information on the morphological changes upon PM deposition. Meanwhile, the continuous monitoring of the changes in various properties of lung surfactant after the deposition of PM is mostly reported with molecular dynamics simulations; direct, real-time experimental visualization before, during and after the deposition of PM on pulmonary surfactant would be more convincing.
The different results obtained for naturally derived surfactant and DPPC after PM deposition suggest that the functions of the other lipids and the surfactant proteins cannot be ignored. Though DPPC is the major component of lung surfactant and is responsible for maintaining the phase behavior and stability of alveoli during respiration, the results could be more conclusive when naturally derived surfactant is considered. It is reported that surfactant proteins, especially SP-B, affect the domain changes significantly by increasing the line tension and the dipole density difference, etc. [147]. Future studies of the interactions between PM and lung surfactant in the presence of surfactant proteins will provide useful insights toward a more complete understanding of the physicochemical interaction mechanisms.
Synthetic particles are utilized to validate the interaction mechanisms because of their known chemical composition, which furthermore provides a guideline for nanotechnology safety. However, the influence of actual ambient PM cannot be fully represented by synthetic particles, since the other components present in ambient PM interfere with the interaction. Studies on natural PM would give rise to more practical and environmentally relevant information. Meanwhile, the composition of PM varies with time and location, but research on how different environmentally derived PM affects the phase behavior of lung surfactant is still lacking. In some literature, the concentrations of PM used are much higher than the actual exposure dose. Though this approach is convenient for obtaining acute responses, caution is needed when inferring the actual health effects.
Studies of the PM-surfactant interaction have so far been performed on planar interfaces with the use of the LB trough and other imaging instruments. However, the curvature of spherical alveoli has been proved to impact the morphology and dynamics of lung surfactant. Sachan et al. [144] found that when the radius of Survanta monolayer-covered bubbles decreased to 100 μm, which is comparable to the size of alveoli, the LC domains changed dramatically from dispersed circles to meshing stripes, separating the originally continuous LE matrix into a discontinuous phase. This change comes from the anisotropic bending energy [141,146]. This interfacial curvature effect on the monolayer also leads to changes in surfactant adsorption and the dilatational modulus [144]. The observations at alveoli-sized curvature imply that future investigations of the PM-surfactant interaction on curved interfaces could provide more practical information at alveolar dimensions. Instruments such as the captive bubble surfactometer, the pulsating bubble surfactometer and the constrained drop surfactometer can be used for studies on curved surfaces. Besides the curvature effect, other factors such as temperature and the rate of film oscillation also limit the physiological relevance of in vitro studies on surfactant films. The constrained drop surfactometer can be used to investigate surfactant activity and inhibition under physiologically relevant conditions [198,249]: the surfactant films are constrained in a sessile drop, and the compression and expansion of the droplet are controlled at physiologically relevant rates. This technique has great potential for studying PM-surfactant interactions at the molecular level to provide insight into the physicochemical effects of PM.
The in vitro studies of PM-surfactant interactions provide physical evidence of how PM affects lung surfactant, implying impairment of the physiological properties of lung surfactant. However, the direct mechanisms by which PM induces pulmonary dysfunction physiologically are still unclear, since the in vivo measurement of changes in surfactant function upon PM deposition is challenging. Riva et al. [250] conducted an in vivo experiment to investigate how lung mechanics change upon low-dose instillation of ambient PM. The elastic and viscoelastic components of lung mechanics were increased, indicating impaired lung function. The authors attributed this mechanical alteration to the inflammation and oxidative stress caused by the penetration of PM into the alveolar regions. Further experimental evidence is needed to illustrate the correlation between physiological responses and biophysical changes.
Although changes in the biophysical properties of model or replica lung surfactant under the influence of engineered NPs and some environmental PM2.5 have been investigated, correlating in vitro studies with in vivo systems remains a challenge. Biologically relevant surfactant models integrating natural surfactant components with monolayer, bilayer and multilayer structures are required to study PM-surfactant interactions. A number of studies on PM and naturally derived surfactant have been conducted, as have investigations of PM effects on bilayer and multilayer structures. Future studies of PM and complex surfactant structures using techniques that mimic physiological conditions are still needed. Moreover, using environmental PM2.5 in addition to engineered nanoparticles is beneficial for evaluating the actual effects of air pollutants. Although biophysical in vitro studies give a good indication of PM-induced health effects, physiological studies directly related to these biophysical changes are still lacking. It should also be noted that a nonequilibrium state may exist owing to the relative humidity gradient in the alveolar space, giving rise to disparities in the composition and layered structure of lung surfactant at the interface. This thermodynamic condition should be considered when studying the structure of multilayered films [251].
A major concern is that even when PM pollution remains within the range of Environmental Protection Agency annual air quality standards, long-term exposure still causes adverse health effects [244,252]. This observation implies that governments should implement appropriately strict regulations that account for the differences among types of PM and their chronic effects.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. | 2020-08-19T13:05:08.602Z | 2020-08-19T00:00:00.000 | {
"year": 2020,
"sha1": "7895cdf10cd01d52e7baca1e42747d4ae6b99b56",
"oa_license": null,
"oa_url": "https://doi.org/10.1016/j.cis.2020.102244",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "2af4b8505d36ba329944d4d1dde4f274a4991b0e",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
270588727 | pes2o/s2orc | v3-fos-license | Factors Associated With Reporting Attitudes of Work-Related Musculoskeletal Disorders Among Direct Care Workers in South Korea
Background: Workers’ reporting of work-related injuries or illnesses is important for treatment and prevention, yet research often focuses on reporting barriers. This study aimed to identify factors related to work-related musculoskeletal disorder (WRMSD) reporting attitudes and their connection to reporting intention and behavior. Methods: We analyzed data from 377 direct care workers employed in 19 long-term care facilities in South Korea. A self-administered questionnaire collected demographics, job characteristics, physical and psychosocial factors, musculoskeletal symptoms, reporting attitudes, and WRMSD reporting intentions and behavior between May and August 2022. We used a generalized linear mixed model with a random intercept by employers to identify factors influencing reporting attitudes. To explore the relationship between reporting attitude and reporting intention and behavior, simple logistic regression was also conducted. Results: We achieved an 86% response rate. The majority of the study participants were female (87.2%), married (95.9%), and non-immigrant (72.8%). Of the study participants, 48.9% had no intention to report WRMSDs, and 44.3% held negative reporting attitudes. Among 200 workers with WRMSDs, 86.5% did not report them. Attitudes were associated with work duration, safety training, management safety priority, WRMSD experience, and symptom severity and frequency. Management safety priority did not moderate this relationship. Significant links existed between attitudes and reporting intention and behavior. Conclusions/Applications to Practice: This study highlights the vital influence of workers' attitudes on reporting work-related injuries and illnesses. Occupational health providers should employ strategies, such as tailored safety training and management commitment, with a focus on addressing the unique needs of long-tenured and musculoskeletal-exposed workers. Fostering a safety culture that promotes open and timely reporting is crucial, and implementing these strategies can significantly enhance workplace safety and health.
Introduction
The International Labour Organization estimates that approximately 350 million occupational accidents occur annually, resulting in high mortality rates (International Labour Organization, 2015). However, researchers have pointed out that the true extent of occupational health issues remains concealed due to the significant underreporting of occupational injuries and illnesses (Kyung et al., 2023). According to the Occupational Safety and Health Administration (OSHA) in the United States, it is mandatory for workers to promptly report all workplace incidents, hazardous conditions, and near misses to their management (Occupational Safety and Health Administration [OSHA], 2022). In an effort to encourage worker injury reporting, OSHA also safeguards workers' rights to report injuries without fear of retaliation, and it prohibits employers from taking any adverse actions against workers who report incidents (OSHA, 2016). Despite these measures, many workers still encounter obstacles when attempting to report work-related problems to their management (Kyung et al., 2023). A recent review study revealed that between 20% and 74% of U.S. workers, depending on job type, chose not to report their work-related injuries or illnesses to their management (Kyung et al., 2023).
According to behavioral theories such as the Theory of Planned Behavior, personal attitudes play a pivotal role in shaping behavioral intentions and behaviors (Ajzen & Fishbein, 1977; Gavaza et al., 2011; Jiang et al., 2018; Pfeiffer et al., 2010). In a study involving U.S. transportation workers, Jiang et al. (2018) observed that instances of underreporting workplace aggression and near-miss events were more prevalent among workers with unfavorable attitudes toward safety-related reporting (Jiang et al., 2018). Attitudes are believed to be malleable and subject to change based on individual experiences and work environments (Petty & Cacioppo, 1986). However, there is a dearth of research investigating factors that influence attitudes toward reporting work-related injuries or illnesses.
Direct care workers face an elevated risk of work-related musculoskeletal disorders (WRMSDs), but a significant number of these injuries go unreported (Caponecchia et al., 2020). Siddharthan et al. (2006) noted that nursing personnel often tolerated WRMSDs, regarding them as a natural part of their job, unless these issues interfered with their work activities, ultimately leading to the underreporting of WRMSDs (Siddharthan et al., 2006). Recognizing the crucial role of reporting attitudes in injury reporting, gaining a comprehensive understanding of these attitudes can offer valuable insight for further development of effective interventions aimed at motivating workers to report such incidents. However, existing research has primarily focused on identifying barriers to reporting occupational injuries or illnesses. This study aimed to (1) describe WRMSD reporting attitudes among direct care workers in long-term care facilities, (2) identify factors associated with WRMSD reporting attitudes, and (3) investigate the relationship between WRMSD reporting attitudes and reporting intentions and reporting behavior.
Methods
This study employed a cross-sectional design, utilizing a convenience sample of 377 direct care workers from 19 long-term care facilities in South Korea. These facilities represented 5.4% of all long-term care facilities located in Gyeonggi-do, the most populous province in Korea (Ministry of Health and Welfare, 2022). In South Korea, long-term care facilities are categorized as either long-term care hospitals or nursing homes. Long-term care hospitals provide in-patient services and extended care for individuals requiring longer rehabilitation stays, and they are required to employ healthcare professionals such as medical providers and nurses (H. Kim et al., 2015). On the other hand, nursing homes primarily offer social services to individuals aged 65 or older who cannot live independently but do not generally need the level of medical care provided in long-term care hospitals (H. Kim et al., 2015).
For the purposes of this study, direct care workers were defined as trained staff responsible for delivering direct patient care, such as feeding, bathing, dressing, and toileting (J.-Y. Kim & Tak, 2018). Eligibility criteria included that direct care workers had been employed in their current positions for a minimum of 3 months and were able to read, write, and understand Korean. The initial 3 months of employment were considered a probationary period during which workers acclimated to their new roles and assessed whether the job was a suitable fit (Borofsky et al., 1995).
Recruitment and Data Collection
A flyer containing contact information was posted on the bulletin boards of the respective department in all 19 long-term care facilities after obtaining the necessary permissions. Data collection occurred between May and August 2022, utilizing a self-administered questionnaire that had been pilot-tested with 20 direct care workers in a long-term care hospital. The study's questionnaire was distributed and collected during each institution's monthly staff meetings or training programs, which were provided by the National Health Insurance Service. Informed consent was obtained from all participants, and a token of appreciation in the form of $10 (12,000 won) was provided to each participant upon completion of the survey. A total of 403 direct care workers participated in the survey, resulting in a response rate of 86% (ranging from 70% to 81% in three long-term care hospitals and from 86% to 95% in 16 nursing homes). After excluding 11 direct care workers who had been employed for less than 3 months and 13 direct care workers who had not responded to 5% or more of the questionnaire items, the final sample included 377 direct care workers. Ethical approval for the study was granted by the Committee on Human Research of the University of California, San Francisco, and the Public Institutional Review Board in South Korea.
Demographic and Job Characteristics
Demographic characteristics encompassed age, sex (male or female), immigration status (immigrant or non-immigrant), marital status (married or single), and education (elementary school, middle school graduate, high school graduate, and college 1 year or more). Job characteristics included the type of long-term care facility (long-term care hospital or nursing home), duration of employment as a direct care worker, and work arrangement (permanent, temporary, or independent).
Applying Research to Practice
The tasks of direct care workers are often challenging while offering few extrinsic rewards. Despite the high risk of work-related injury or illness, many direct care workers did not report it to their management and tended to normalize it. This study identified that injury reporting attitudes were associated with duration of work, safety training for injury reporting, management safety priority, work-related injury/illness experience, and severity and frequency of symptoms. Organizational commitment to the priority of worker safety, and safety training focusing on injury reporting, are needed, especially for workers frequently exposed to musculoskeletal problems and those with longer duration of employment, to improve workers' attitudes toward injury reporting and facilitate actual reporting.
Physical Work Factors
Physical work factors comprised physical exertion and the number of assigned patients. For physical exertion, respondents were asked to rate the physical demands of their current job on a scale from one ("not strenuous") to five ("extremely strenuous") (Neupane et al., 2020).
Psychosocial and Organizational Factors
Psychosocial and organizational factors were assessed using job stress and management safety priority. Job stress was evaluated using the Korean version of the Effort-Reward Imbalance (ERI) Questionnaire, which included effort (six items), reward (10 items), and overcommitment (six items) (Eum et al., 2007; Siegrist, 1996). Effort reflects the job demands or obligations placed on workers, and reward refers to something that workers can acquire from their work, such as monetary compensation, esteem, career opportunities, and job security (Van Vegchel et al., 2005). Overcommitment defines a set of attitudes, behaviors, and emotions reflecting excessive striving for approval and appreciation (Hasselhorn et al., 2004). In the ERI model, a lack of reciprocity between efforts spent and rewards received at work arouses emotional distress and subsequent adverse health outcomes (Siegrist, 1996). All items in these scales used a four-point Likert-type scale ranging from one ("strongly disagree") to four ("strongly agree"), with higher values indicating higher effort, reward, or overcommitment. The effort, reward, and overcommitment scores were calculated as the sum of item responses; the ERI ratio was obtained by dividing effort by reward, with a correction factor of 3/5 applied to adjust for the unequal number of items in the effort and reward scales (Siegrist et al., 2004). For management safety priority, respondents were asked to indicate whether the health and safety of workers were considered a high priority by the management in their workplace, with response options of "yes" or "no" (Kines et al., 2011). In regard to safety training for injury reporting, respondents were also asked to indicate if they had ever received training regarding the reporting of workplace injuries or illnesses from their organization.
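As a minimal illustration of the ERI scoring just described (the function name, signature, and example values below are ours, not taken from the questionnaire):

```python
def eri_ratio(effort_sum: float, reward_sum: float,
              n_effort: int = 6, n_reward: int = 10) -> float:
    """Effort-reward imbalance ratio with item-count correction.

    The correction factor adjusts for the unequal number of items in
    the effort (6) and reward (10) scales, i.e. 6/10 = 3/5.
    """
    correction = n_effort / n_reward  # the 3/5 factor from the text
    return effort_sum / (reward_sum * correction)

# Example: a ratio above 1 indicates that perceived efforts outweigh
# rewards (the distress condition in the ERI model).
print(eri_ratio(effort_sum=18, reward_sum=25))  # -> 1.2
```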
Musculoskeletal Symptoms
The assessment of musculoskeletal symptoms encompassed various aspects, including the experience of WRMSDs, the frequency of musculoskeletal symptoms, and the severity of pain. These aspects were evaluated using a modified questionnaire originally employed and validated in the Nurses' Work Life and Health Study (J. A. Lipscomb et al., 2002). This modified questionnaire, featuring a single item for each question, adhered to the definition of musculoskeletal symptoms as outlined in the Nordic Musculoskeletal Questionnaire, which includes pain, aching, stiffness, burning, numbness, or tingling in various body regions. To collect relevant data, respondents were queried about their encounters with musculoskeletal pain or discomfort in the neck, shoulder, back, upper extremities, or lower extremities within the past 12 months. They were also asked to indicate whether this pain or discomfort was either aggravated or caused by their work. Subsequently, participants who reported experiencing symptoms within the prior 12 months were presented additional inquiries regarding the frequency and severity of these symptoms. To assess the frequency of musculoskeletal symptoms, a six-point Likert-type scale ranging from one ("never") to six ("daily") was employed. The severity of pain was evaluated using a five-point Likert-type scale, ranging from one ("none") to five ("extreme").
Reporting Attitudes
Reporting attitudes were gauged using a modified version of a four-item questionnaire developed and validated by Probst and Graso (2013) with a Cronbach's alpha of 0.76 (Probst & Graso, 2013). The original questionnaire was adapted for direct care workers in long-term care facilities by changing "accidents and injuries" to "work-related injuries or illnesses." The modified English version of the questionnaire was translated and back-translated into the Korean language by two independent bilingual people, and the Korean version of the questionnaire was finalized through consultation with a third bilingual person.
Respondents were asked to indicate their injury reporting attitudes as follows: "Work-related injury or illness investigations are mainly used to assign blame," "Nothing gets fixed, so why bother reporting an injury or illness," "Reporting a work-related injury or illness hurts my chances for job-related rewards," and "Injury or illness is a normal part of my job. They can't all be prevented." Participants used a 7-point Likert-type scale, ranging from one ("strongly disagree") to seven ("strongly agree"), with higher scores indicating more positive attitudes toward reporting. Reporting attitude scores were calculated as the mean of item responses. Reporting attitudes were also dichotomized into two groups using a cutoff at the median score of four. The Korean version of the questionnaire used in this study had a Cronbach's alpha of 0.80, indicating its reliability.
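A brief sketch of this scoring, including the internal-consistency statistic reported above; the code and variable names are illustrative, and the handling of responses exactly at the cutoff is our assumption since the text does not specify it:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) array."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def attitude_score(item_responses) -> tuple[float, str]:
    """Mean of the four 7-point items, dichotomized at the median of 4."""
    score = float(np.mean(item_responses))
    group = "positive" if score > 4 else "negative"  # tie handling assumed
    return score, group
```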
WRMSD Reporting Intention and Reporting Experience
Reporting intention was assessed using a single question: "If you experience work-related injuries or illnesses, would you be willing to report the disorders to your management?" Responses to this question were recorded as "yes" or "no" (Conner & Heywood-Everett, 1998). Individuals who had experienced WRMSDs within the past 12 months were asked whether they had reported the WRMSDs to their management. Respondents were also queried about their experience witnessing the injury reporting behaviors of their colleagues or co-workers.
Data Analysis
Data analyses were conducted using STATA version 16.0 (Stata Corporation, College Station, TX). Descriptive statistics were employed, including frequency and percentage for categorical variables and means with standard deviations for continuous variables. To handle missing data, responses missing 5% or more of the questionnaire items were initially excluded from the study. For multi-item measures, multiple imputation was used to address missing data effectively. A generalized linear mixed model was utilized, with a random intercept by employer, to identify significant factors influencing reporting attitudes. This model incorporated demographic and job characteristics, physical work factors, psychosocial work environments, musculoskeletal symptoms, and the experience of witnessing injury reporting, as guided by the literature on injury reporting behavior. Subsequently, the interaction effect of management safety priority was introduced into the model to assess whether the influence of management safety priority on injury reporting attitudes remained consistent across different long-term care facilities. The results were reported as beta coefficients. Simple logistic regression analysis was employed to explore the relationship between reporting attitude and intention to report, as well as the behavior of reporting WRMSDs. Odds ratios (ORs) and 95% confidence intervals (CIs) were calculated to measure these relationships. Statistical significance was defined as a p-value less than .05.
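The analysis pipeline described here could be sketched as follows; the data file and column names are hypothetical, and statsmodels' mixedlm fits a linear mixed model, which serves only as an approximation of the generalized linear mixed model the authors fit in STATA:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# One row per worker; 'facility' identifies the employer (names illustrative).
df = pd.read_csv("direct_care_workers.csv")

# Mixed model for the continuous attitude score with a random
# intercept per facility.
mixed = smf.mixedlm(
    "attitude ~ tenure_years + safety_training + mgmt_priority"
    " + wrmsd_experience + symptom_severity + symptom_frequency",
    data=df,
    groups=df["facility"],
).fit()
print(mixed.summary())

# Simple logistic regression of reporting intention on the dichotomized
# attitude; exponentiated coefficients give odds ratios and 95% CIs.
logit = smf.logit("report_intention ~ positive_attitude", data=df).fit()
print(np.exp(logit.params))      # odds ratios
print(np.exp(logit.conf_int()))  # 95% confidence intervals
```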
Characteristics of the Study Participants
The study included a total of 377 direct care workers, with 139 individuals in long-term care hospitals and 238 in nursing homes (Table 1). Approximately 87% of the participants were female, and 27% were immigrants. The majority of the respondents were married (95.9%), had a high school education (68.6%), and held temporary or independent work arrangements (73.3%). On average, the participants had a mean age of 60.7 years and had worked as direct care workers for an average of 5.9 years. Two-thirds of participants (68.5%) perceived that worker safety was a priority in their organization, and 91.2% had received safety training related to injury reporting. More than half of the respondents (59.1%) had witnessed the injury reporting of others, and within the past 12 months, 54.6% had experienced WRMSDs. Nearly half of the participants (48.9%) expressed no intention to report their WRMSDs, and the majority of those who had experienced WRMSDs did not report them to their management (85.5%). The mean score for WRMSD reporting attitudes was 3.8.
Factors Associated With Reporting Attitudes
Table 2 presents the results of a generalized linear mixed analysis examining the factors associated with direct care workers' attitudes toward reporting WRMSDs. Direct care workers with a longer duration of employment (coefficient = −0.01, p = .01) and a higher frequency of musculoskeletal disorders (coefficient = −0.18, p < .01) were less likely to hold positive reporting attitudes. Reporting attitudes were positively associated with safety training for injury reporting (coefficient = 0.65, p = .03), experience of WRMSDs (coefficient = 0.71, p = .01), higher severity of musculoskeletal disorders (coefficient = 0.20, p = .03), and management prioritizing worker safety (coefficient = 0.5, p = .001). A moderating effect of management safety priority was added to the model, but no significant effect was observed (data not shown).
The Relationship Between Reporting Attitudes and Reporting Intention and Behavior
Table 3 illustrates the association between WRMSD reporting attitudes and both WRMSD reporting intention and actual reporting. Direct care workers who held positive reporting attitudes were significantly more likely to express an intention to report WRMSDs to their management (OR = 13.17, 95% CI = 7.76-22.36). Direct care workers with positive attitudes toward WRMSD reporting also had 2.88 times greater odds of actually reporting WRMSDs when compared to those with negative attitudes (OR = 2.88, 95% CI = 1.26-6.61).
Discussion
This study investigated the factors associated with WRMSD reporting attitudes among direct care workers in long-term care settings in South Korea. Several key findings emerged from this study. A significant proportion of direct care workers expressed a lack of intention to report WRMSDs (51.1%), and only a small fraction of those who experienced WRMSDs actually reported them to their management (13.5%). This highlights the presence of barriers to reporting within this population. Furthermore, WRMSD reporting attitudes were found to be significantly associated with both reporting intention and actual reporting behavior, underscoring their critical role in shaping reporting practices.
This study found that direct care workers with a longer length of tenure tended to have less positive reporting attitudes. These findings are in line with previous research among pharmacists, suggesting that individuals with more years of experience may have less favorable attitudes toward incident reporting (Gavaza et al., 2011). Gavaza et al. (2011) observed a negative correlation between incident-reporting attitudes and years of experience in pharmacy practice (r = −0.136, p = .008). One possible explanation for this trend is that workers with longer job tenure may have witnessed or experienced punitive disciplinary actions in response to injury reporting in the past. In earlier years, there was a prevailing perception that work-related injuries or illnesses were often attributed to individual negligence (Frederick & Lessin, 2000; Gavaza et al., 2011). This historical context may have led to a sense of frustration among direct care workers with longer experience in the field. They may recall instances where injury reporting led to negative consequences, such as reprimands or disciplinary actions, and this could contribute to their less favorable attitudes toward reporting. Overcoming the legacy of punitive measures and ensuring that reporting is met with support and solutions rather than blame is essential for promoting positive reporting attitudes among all workers, regardless of their tenure. This study also sheds light on the relationship between WRMSD experience and reporting attitudes among direct care workers. It is evident that workers who had experienced a WRMSD within the past 12 months were more likely to hold positive reporting attitudes. This connection can be understood by considering the criteria for reporting injuries and complaints under OSHA. In Korea, workers have the right to report work-related injuries or illnesses (Oh, 2014). However, for reporting to be valid, the injuries must be proven to be work-related (Oh, 2014). In such cases, workers who have suffered from work-related injuries or illnesses are responsible for providing evidence of the work-relatedness themselves (Oh, 2014). Consequently, those who meet these criteria may perceive reporting as beneficial, leading to more positive attitudes.
The severity and frequency of musculoskeletal symptoms also played a role in shaping injury reporting attitudes, but their influence differed. Symptom severity had a positive relationship with reporting attitudes in this study, consistent with findings that highlight symptom severity as a significant factor contributing to actual injury reporting (Kyung et al., 2023). In the context of South Korea's Serious Accidents Punishment Act, severe symptoms may prompt workers to recognize the necessity of taking action (Korea Legislation Research Institute, 2021). They may feel compelled to report their condition to management to explore potential solutions, such as requesting sick leave or seeking job modification or intervention to mitigate the risk of further injury. Conversely, symptom frequency was inversely associated with injury-reporting attitudes in this study. This finding aligns with prior research that identified symptom frequency as a barrier to actual reporting (Siddharthan et al., 2006). Siddharthan et al. (2006) revealed that workers who had already reported more than three injuries were nearly twice as likely to avoid reporting additional work-related injuries compared to those who had reported three or fewer injuries. This may be linked to a fear of negative repercussions. Workers who experience recurrent injuries or illnesses may worry about being stigmatized as negligent workers and believe that injury reporting is primarily used to assign blame. Frequent injuries may also contribute to a sense of normalcy within the workplace, leading workers to perceive such incidents as routine and not worthy of reporting.
Safety training was another significant factor affecting reporting attitudes. The study's findings regarding the positive impact of safety training on workers' reporting attitudes are consistent with previous research conducted by Green et al. (2019), which demonstrated the effectiveness of educational interventions in improving attitudes toward injury reporting. The study by Green et al. (2019) involving janitors showed that the intervention group experienced a significant reduction in barriers related to injury reporting, such as perceiving injuries as a part of the job (reduced from 8% to 2%) and fearing negative consequences (reduced from 8% to 2%). This suggests that educational programs can effectively address misconceptions and fears that may hinder reporting. Similarly, Jansma et al. found that workers who received patient safety education showed significant improvement in incident reporting attitudes and intentions. This improvement persisted even 16 days after the education was provided, indicating that the positive effects of training can endure over time. In light of these findings, it becomes clear that safety and health training should be considered an essential component of any organization's effort to create a safe workplace. Providing workers with knowledge about their rights and the importance of injury reporting can contribute to a culture of safety where employees feel empowered to report incidents without fear of negative consequences (Green et al., 2019). This study's findings also highlight the critical role that organizational safety culture and management priorities play in shaping workers' attitudes toward injury reporting, which aligns with earlier research (H. J. Lipscomb et al., 2015; Probst & Graso, 2013). The study by Probst and Graso (2013) among copper mining workers stressed the importance of safety climate: workers who perceived that their organization prioritized the safety of workers tended to have more positive attitudes toward injury reporting. This suggests that an organizational culture that emphasizes workers' safety can contribute to fostering a more favorable reporting environment. The research of H. J. Lipscomb et al. (2015) provided further evidence of the impact of safety climate on injury reporting. Their study showed that when management did not prioritize workers' safety, both the prevalence of non-reporting and the inability to report without fear increased significantly (H. J. Lipscomb et al., 2015). Specifically, workers who perceived a low priority on worker safety were 1.7 times more likely to underreport injuries and 1.4 times more likely to feel unable to report without fear, compared to workers in settings with a high emphasis on workers' safety (H. J. Lipscomb et al., 2015). The safety climate within an organization sends clear signals to workers about whether reporting incidents will be encouraged or met with punitive measures. When management actively promotes a safety-oriented culture and provides the necessary resources and support, workers are more likely to see the value in reporting injuries and feel comfortable doing so without fear of reprisal.
The significant relationship found in this study between injury reporting attitudes and injury reporting intention reinforces the notion that attitudes play a pivotal role in shaping reporting behavior. This consistency with earlier research underscores the importance of attitudes in influencing the intention to report incidents (Gavaza et al., 2011; Pfeiffer et al., 2010). Given these findings, Pfeiffer et al. (2010) integrated attitudes into a psychological framework that explains factors influencing the intention to report incidents. According to the theory of planned behavior, individual behavioral intention is assumed to be affected by attitudes and is considered a primary contributor to actual behavior (Ajzen, 1991). This finding highlights the notion that influencing reporting behavior begins with shaping attitudes.
This study found a significant relationship between injury reporting attitudes and actual reporting behavior, aligning with previous research that demonstrated a positive correlation between favorable reporting attitudes and increased rates of reporting occupational accidents (Probst & Graso, 2013). Many studies underscore the role of negative attitudes as obstacles to injury reporting (Evans, 2006; Pompeii et al., 2016). While researchers have been examining the complex interplay between attitudes and behavior for many years, the evidence regarding the attitudes-behavior association has exhibited variability and mixed results (Glasman & Albarracín, 2006). To improve the prediction of behavior, it is advisable to focus on attitudes closely aligned with the specific behavior of interest. To the best of our knowledge, this study is the first to explore WRMSD reporting attitudes in a sample of direct care workers in South Korea. Nonetheless, several limitations need to be acknowledged. First, the data were collected from a nonprobability sample of direct care workers, primarily in nursing homes, within a single province in Korea. This limited scope may affect the generalizability of our findings to other settings. However, it is worth noting that our sample comprised participants from 19 different long-term care facilities, and we achieved a high response rate (86%), which may enhance the generalizability of our results. Second, the small sample size, particularly for reporting behavior, could have limited the statistical power of our analysis. Third, as the data relied on self-reported questionnaires, responses may have been influenced by recall or reporting bias, potentially leading to underestimation or overestimation of results. Finally, due to the cross-sectional design of the study, we cannot establish causal relationships between variables.
Implications for Occupational Health Practice
Timely identification of work-related injuries or illnesses is crucial for promoting workplace safety and health, and workers' willingness to report incidents to management represents the initial step in this process. This study highlights the significant role of workers' attitudes toward injury reporting in shaping their reporting intentions and actual behavior, and these attitudes may be moderated or mediated by the safety culture within the organization. Various factors, including the duration of employment, safety training, management's safety priorities, experience with WRMSDs, and the severity and frequency of musculoskeletal symptoms, were identified as influencing reporting attitudes. To improve workers' attitudes toward injury reporting and facilitate actual reporting, organizations should demonstrate a strong commitment to worker safety. This includes providing safety training that emphasizes injury reporting, particularly for workers with extended tenures and those frequently exposed to musculoskeletal issues. Future research employing a longitudinal study design is recommended to validate and expand upon these findings.
Table 1.
Demographic, Job, Psychosocial, and Health Characteristics Among Direct Care Workers in Long-Term Care Facilities in Korea (N = 377).
Table 2.
Linear Mixed Model Analysis of Factors Associated With Attitude Toward Work-Related Musculoskeletal Disorder (WRMSD) Reporting (N = 288)
Table 3.
The Relationship Between Reporting Attitudes and Reporting Intention and Actual Reporting of Work-Related Musculoskeletal Disorders (WRMSDs): Using Bivariate Logistic Analysis | 2024-06-20T06:16:15.036Z | 2024-06-18T00:00:00.000 | {
"year": 2024,
"sha1": "bb4d1e34cca9257af3bbab338b02152d910e1b98",
"oa_license": "CCBYNC",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/21650799241247078",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "8602d667af3b00b03a1ed59194c49c04c969b80c",
"s2fieldsofstudy": [
"Medicine",
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
9626461 | pes2o/s2orc | v3-fos-license | The role of specimen banking in risk assessment.
The risk assessment process is described with a focus on the hazard identification and dose-response components. Many of the scientific questions and uncertainties associated with these components are discussed, and the role for biomarkers and specimen banking in supporting these activities is assessed. Under hazard identification, the use of biomarkers in defining and predicting a) biologically adverse events; b) the progression of those events towards disease; and c) the potential for reversibility is explored. Biomarker applications to address high-to-low dose extrapolation and interindividual variability are covered under dose-response assessment. Several potential applications for specimen banking are proposed.
Background
As citizens, we are all concerned that our health, and the health of our children, could be compromised or endangered by exposure to toxic chemicals and other potential health hazards in the air we breathe, and in our food and drinking water. Public concern over this potential for harm resulting from exposure to environmental pollutants has led to a demand for protection from environmental risk, either real or imagined. This demand has prompted public health officials, environmental scientists, and regulatory agencies to pursue processes to define, explain, and mediate environmentally related health risks (1,2). The U.S. Environmental Protection Agency (U.S. EPA) has responded to public demand by adopting a paradigm that was proposed initially by the National Academy of Sciences (NAS) (3). This approach to risk assessment provides a format and data for estimating the potential adverse health effects of human exposures to environmental hazards that, in turn, provides the cornerstone to risk management decisions (Figure 1). In this article, selective components of this risk assessment process are described, as well as some of the scientific questions and uncertainties that accompany these components. The role for biomarkers and specimen banking in this process will be assessed. The discussion focuses on issues related to human exposure and health assessment rather than the broader role of biomarkers in toxicology (e.g., mechanistic studies in animals). Although much has been written on the contribution of biomarkers to risk assessment, the challenge to participants in this symposium is to define the potential contributions to be gained from specimen banking activities. (The views expressed in this paper are those of the authors and do not necessarily reflect the views or policies of the U.S. Environmental Protection Agency. The U.S. government has the right to retain a nonexclusive, royalty-free license in any, and to any, copyright covering this paper.)
Although this paper is focused on risk assessment, it should be recognized that the information required for this process is also critical for numerous other collateral and interrelated activities and actions that support risk management decisions. Biomarkers and specimen banking may also contribute to these activities, and, in fact, their role may be even more apparent in these other contexts. For example, information derived from activities that may be referred to as monitoring (exposures) or surveillance (health status and trends) is essential to establishing baseline (reference) values, directing pollution prevention options, assessing the efficacy of corrective actions, or anticipating/detecting emerging environmental problems. Such data, when combined with risk assessment activities, may also help define and prioritize the legitimate environmental risks for the public. This approach can, in turn, ensure that public and private attention, expertise, and resources are directed appropriately. An appreciation of the interplay between these collateral activities and risk assessment should be incorporated into defining the role and criteria for biomarkers and specimen banking activities purported to support these processes. For this paper, a biomarker will be defined "as any measurable biochemical, physiological, cytological, morphological, or other biological parameter obtainable from human tissues, fluid, or expired gases, that is associated (directly or indirectly) with exposure to an environmental pollutant" (4).
The Risk Assessment Process
Simply defined, risk assessment is the attempt to understand the relationship between human exposures and potential health effects. This understanding requires identifying the factors that result in human exposure and then defining the cascade of events that must occur to create a health risk. This analysis entails delineating the pharmacokinetic and pharmacodynamic processes that govern this cascade. As presented in Figure 2, biomarker research has historically separated this continuum into exposure and health compartments. More contemporary efforts have acknowledged that biomarkers of dose can serve as the common denominator for linking these events. Future efforts should approach understanding these events from a continuum perspective.
The NAS risk assessment paradigm defines four components that can be overlaid on this continuum (Figure 3). Conventionally (and conveniently), biomarkers can be defined by three interrelated categories, namely, biomarkers of exposure, effect, and susceptibility, which can be related to the components of the risk assessment process. These relationships are discussed in the following sections. Since much of this symposium is devoted to human exposure, the focus of this paper will be primarily on biomarkers as they relate to hazard identification and dose-response assessment.
Biomarkers and Hazard Identification
Hazard identification defines whether an agent can cause an adverse effect and its relevance to human health and disease. This evaluation examines all available data, including human, test species, and in vitro data, with close scrutiny of dose-response and dose-effect relationships. Conclusions as to the hazard potential for a given agent are based upon a weight-of-evidence summation.
There are several issues that must be addressed in hazard identification to which human biomarker data could contribute (Table 1). Perhaps most critical is understanding the biologic significance of biomarker(s) of effect whose occurrence can be measured at very low exposure levels. Because of increasing evolution and sophistication in measurement methods and instrumentation, changes in baseline levels can be detected more readily. Given an appropriate study design, these changes may be found to be statistically significant. Less certain is whether these biomarkers represent an adverse, or potentially adverse, event, i.e., their value in predicting human disease or dysfunction. (For the remainder of this paper, "disease" will be used to imply either a dysfunctional state or actual disease.) An example of a success story is blood lead levels in children and the demonstrated relationship to neurotoxicity that has provided key information for the existing lead standard. On the other hand, the biologic significance of moderate changes in plasma and red blood cell levels of acetylcholinesterase (AchE) remains uncertain. Although widely accepted as a biomarker of exposure to certain classes of pesticides, the role of peripheral levels of AchE in predicting toxicity to the central nervous system (CNS) is less clear.
Ideally, biomarker(s) of effect should provide insight into current health status and, if present, the stage of the disease. An understanding of the potential for reversibility associated with a decrease or discontinuation of exposure is equally important. Biomarkers of reversibility, however, must distinguish between true recovery (absence of pollutant-induced effects) and the failure to detect adverse effects as a result of adaptation or biologic compensation, which may mask existing impairment. There is also a need to understand the relationship between the current biomarker and silent processes that may underlie the [eventual] appearance of disease. The common denominator for a biomarker of effect that provides information on the latency, stage and progression, and reversibility is an understanding of the putative mechanisms of the disease under study.
Potentially, biomarkers may also play a role in determining if a threshold exists, and, if so, what level of exposure is necessary to exceed that threshold and pose a health risk. Equally important is whether this threshold varies for different populations (e.g., young vs. adult, rural vs. urban). Biomarkers that can identify/distinguish these populations may dramatically impact health risk assessment and risk management decisions.
Biomarkers and Dose-Response Assessment
Critical to any risk assessment is an understanding of the relationship between exposure (dose) and the occurrence/magnitude of adverse effects. Ideally, this relationship would be approximately linear (i.e., increasing risk with increasing exposure). However, depending on the target and its inherent properties to respond to toxicity (e.g., repair), a matrix of exposure and effects scenarios is more likely (Table 2). The situation becomes more complicated when an individual operates concurrently under more than one exposure situation (e.g., chronic, low-level exposure with periodic high excursions) and experiences multi-chemical exposures. The use of biomarkers of dose and pharmacokinetic modeling offers great promise for better defining exposure (dose)-response relationships.
Henderson et al. (5) have proposed that a suite of biomarkers be employed to reflect recent as well as past and, potentially, cumulative exposures. Such a suite would accommodate varying rates of disposition (e.g., different half-lives) of the parent compound, its metabolites, and any other surrogate markers that reflect an interaction between the agent and a biologic target (Figure 4). Yet, as seen in Figure 5, even an accurate estimate of dose may not predict effect status. Again, an understanding of the pharmacokinetic behavior of an agent must be synthesized with hypotheses/insights into the processes and mechanisms of the disease in question to provide biologically plausible dose-response assessments.
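The rationale for such a suite rests on the standard first-order disposition relationship, which is not written out in the text but underlies the argument: markers with short half-lives reflect recent exposure, while long-half-life markers integrate past or cumulative exposure.

```latex
% First-order disposition of a marker with elimination rate constant k:
C(t) = C_0\, e^{-kt}, \qquad t_{1/2} = \frac{\ln 2}{k}
```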
Although such biologically based models are desirable, such approaches, to date, have had limited application, primarily focused on cancer and favoring a no threshold hypothesis (i.e., the interaction of a single molecule in a single cell will result in an adverse effect). The prospect of nongenotoxic, carcinogenic mechanisms has suggested that thresholds may, in fact, exist for certain environmental carcinogens.
A threshold is assumed to exist for most noncancer health effects. That is, there is a range of exposures from zero to some finite level that can be tolerated with essentially no adverse effect. These assessments most often rely on defining a no observable adverse effect level (NOAEL) or a lowest observable adverse effect level (LOAEL) from the available data. This estimate is then adjusted downward by the application of a series of uncertainty factors (Table 3) to produce what is now widely termed a reference dose (RfD) or reference concentration (RfC), depending on whether the exposure route is oral or inhalation, respectively. For purposes of this paper, discussion will focus on the uncertainties associated with high-to-low dose extrapolation and the across-human (interindividual) variability. Irrespective of whether biologic modeling or an RfD/RfC approach is employed, uncertainties associated with these two factors will be present. Biomarkers may have a substantial role in determining the necessity and/or magnitude of these uncertainty factors.
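In conventional form (the equation itself is standard U.S. EPA practice rather than something given explicitly in this paper):

```latex
% Reference dose derived from the experimental point of departure:
\mathrm{RfD} = \frac{\mathrm{NOAEL}\ (\text{or LOAEL})}{UF_1 \times UF_2 \times \cdots \times UF_n}
% Each uncertainty factor UF_i (typically 10) addresses one source of
% uncertainty, e.g., interspecies extrapolation or interindividual variability.
```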
High-to-Low Dose Extrapolation
Although human health data for the exposure situation of concern are what is desired, the majority of data on which human risk assessments are based are derived from test species or from humans in elevated exposure settings (e.g., occupational). The risk assessor is then required to determine risk for individuals operating in environments characterized by much lower exposures. The shape and slope of the dose-response curve below the range of observed data must therefore be assumed. The tendency historically has been to accept these assumptions, with research then focused on identifying biomarkers of mechanism and dose present at low exposure levels. This approach assumes implicitly a linear relationship between dose and risk. Perhaps a more systematic approach would be to identify biomarkers nearer the dose range of the experimental data and then progress in a descending, steplike fashion toward the human exposure range of concern. Thus, initial efforts would compare biomarkers from humans with exposures closest to the experimental data (Figure 6). Such data would most often come from the occupational setting.
High-to-low dose extrapolations may assume initially that the highest exposed individuals are biologically representative of the general population and differ only in terms of exposure. Clearly, other factors pose limitations to this overall generalization. For example, the healthy worker effect in occupational settings may produce exposure-response data that underestimate health risk for the general population even at lower exposures. Conversely, if the highest exposed also represents groups with compromised health status (e.g., the poor, the elderly), extrapolation may overestimate the effect(s) for the general population.
Interindividual Variability
The examples presented above may be considered to be a subset of the many factors associated with interindividual variability in response (i.e., biomarker) in a given environmental setting. Other terms often used to account for interindividual variability are differences in sensitivity or susceptibility of the individual or subpopulation to a specific environmental insult. Whether these phenomena, in fact, reflect the same underlying biologic processes is debatable and certainly has ramifications for the interpretation of biomarker data. However, this question could serve as the basis for the entire symposium and will not be addressed in this paper.
Figure 6. Overlay of a possible human exposure distribution on the extrapolated dose-risk curve (dashed line). The band demarcated by vertical lines represents a human subgroup with the highest exposures. The question mark reflects uncertainty as to the shape of the actual curve below the observed data.
The major premise is that, although individuals may experience similar environmental exposures, individual differences in pharmacokinetic or pharmacodynamic processes may greatly influence the dose that reaches the target site and/or the degree of response. A number of factors including age, diet, and health status will obviously influence these processes. Increased or decreased responsiveness (susceptibility) may also be acquired, wherein previous exposures sensitize the individual to subsequent exposures. An immunologic basis is likely for this phenomenon.
However, genetic predisposition seems to be the major determinant. For example, inherited differences in metabolic capabilities (e.g., polymorphism for activating/deactivating enzymes) can greatly influence the concentration and maintenance of the biologically effective dose at the target site. Similarly, genetic differences in repair or compensatory mechanisms, reserve capacity, and other biologic processes may influence the magnitude of the toxic response.
The existence of interindividual variability in response implies that the individuals at greatest health risk may not be synonymous with those that experience the greatest exposures. The interplay between these two distributions is not well understood (Figure 7). Biomarkers that provide such insights will greatly assist efforts to quantify human risk estimations.
The Role of Tissue Banking
Based upon the preceding discussion, biomarkers of exposure, effect, and susceptibility would appear to have major roles in improving risk assessments. How the retention and preservation of these samples (specimen banking) may further enhance these estimations is less clear.
Figure 7. Interplay between exposure and responsiveness population distributions.
Moreover, the application will usually be retrospective, i.e., banking specimens today that may improve, refine, or reaffirm a risk assessment addressed in the future. Such an application places a tremendous burden on the population sampling design for a specimen bank, since it is critical that individuals/groups sampled today be representative of the exposed population in which disease is observed in the future. Some potential, interrelated applications can be offered that have implications for hazard identification and dose-response assessment.
a) Reaffirm biologic significance/predictive validity: This application requires retrospective comparisons, namely, determining the relationship between previously obtained biomarker(s) and current exposure/health status. The ability of specific biomarkers to predict disease progression and reversibility may also be ascertained. Such evaluations would allow greater confidence to be placed on preclinical, low-dose biomarkers as the basis for a risk assessment in the absence of frank disease.
b) Provide historical baseline (reference) values: The ability to ascertain whether an agent has elevated health risk can be strengthened by comparison to concurrent control values which have been placed in the context of historical values. This comparison of control cohorts may allow for the discrimination and quantification of temporal versus pollutant-induced changes in a given health measure.
c) Reassess mechanistic hypotheses: As noted previously in this paper, identifying and understanding pharmacokinetic and pharmacodynamic mechanisms are key to ensuring more biologically sound risk assessments. Specimen banking may allow the retrospective testing of hypotheses regarding putative mechanisms for diseases, especially those with long latencies. Again, the current and future cohorts must be similar enough to allow such linkages to be valid. This application is facilitated if the specimens were obtained from the actual target tissues (e.g., lung, liver, etc.), or if concentrations in biologic fluids have been demonstrated to truly reflect target dose.
d) Confirm exposure-dose-effects linkages under changing exposure scenarios: As the exposure conditions of a population or subgroup change over time, a corresponding change in health status (i.e., biomarker values) should occur if previously hypothesized associations and attendant risk estimations are valid.
e) Identify new high-risk groups: Factors that may elevate the risk of an environmental pollutant for certain individuals or subgroups (e.g., increased exposures; increased susceptibility) will impact the risk assessment for that agent. Banked specimens may provide the referents to aid such identification.
Conclusions
The concept of tissue banking is to provide for the long-term storage of biologic specimens. The premise is that a bank of tissue samples, collected and archived appropriately, provides scientifically preserved and documented samples for retrospective and prospective cohort studies. This resource, by providing human material, would also seem to hold great promise for reducing many of the uncertainties associated with assessing the health risks associated with exposure to environmental pollutants.
The design and implementation of a specimen bank should benefit from the participation of diverse areas within the scientific community (e.g., epidemiologists, toxicologists, industrial hygienists, statisticians, risk assessors, etc.). This broad input is critical to determine whether a design for specimen banking can be developed that will accommodate divergent interests/needs within the public health community. To that extent, the compatibility of risk assessment needs relative to other applications will require further exploration. | 2014-10-01T00:00:00.000Z | 1995-04-01T00:00:00.000 | {
"year": 1995,
"sha1": "a10dabe6fd643324f0f4b0e2cc9656b0aa790ce0",
"oa_license": "pd",
"oa_url": "https://ehp.niehs.nih.gov/doi/pdf/10.1289/ehp.95103s39",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a10dabe6fd643324f0f4b0e2cc9656b0aa790ce0",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Business",
"Medicine"
]
} |
147458438 | pes2o/s2orc | v3-fos-license | Fortschritt und Verantwortung! Education as a rallying cry in Luxembourg’s general elections of 1974
Abstract Education became a rallying cry in Luxembourg’s general elections of 1974. For the first time in the country’s post-war history, the Socialists and Democrats entered the government, with new plans for education. The unbroken rule of almost 30 years by the Christian Democrats was over. New ‘global’ educational concepts were employed to introduce changes in the national curriculum, the aim being a transformation from an elite to a mass system of participation. One of these changes was the idea of the comprehensive school, which divided the electorate, parties and press alike. Yet, this fundamental change has received little attention in the academic literature. What were the differences between the parties when it came to education policy? By intersecting politics, globalisation and education, this paper examines the impact of the election events of 1974 on Luxembourg’s political discourse. The conclusion points to the central role the parties played in the proliferation of new educational norms.
Introduction
In her thought-provoking The Politics and Economics of Comparison, Steiner-Khamsi invites researchers of international education to explore how the language of globalisation has been used as a valuable rhetorical tool to introduce new policies into national education systems, especially since the early 1960s. 1 'I would like to suggest that, whenever there is a change in policy, we should assume that a reference to "globalization" or "international standards" was made' , she asserts. 'Somewhere along the way to a decision, policy actors most likely resorted to either one or both of these references to accelerate change. ' 2 In other words, the significance of globalisation to national education agendas should not be underrated, especially in relation to reform, and it is the complex relationship between the 'national' and 'global' that should be given greater attention when new educational practices are discussed. In this paper, I argue that Steiner-Khamsi's statement also raises a number of important questions concerning the new role of education in West European political realms of the early 1970s. Whereas I have previously explored Finnish and West German educational developments during this era, 3 also more recently in reference to globalisation, 4 my purpose is now to extend this research to one of the smallest nation-states in Europe, namely the Grand Duchy of Luxembourg.
Politically, the justification for Luxembourg as a case study is twofold. First, education became a powerful rallying cry in Luxembourg's general elections of 1974. 5 Just like in the majority of European countries during the Cold War, different 'global' educational concepts such as socialisation and differentiation, equality of opportunity, self-determination and participation, responsibility and cooperation, lifelong learning, democratisation and integration, and rationalisation and individualisation, 6 were buzzwords used to mobilise the divided electorates. They were employed to introduce changes in the national curriculum. Second, whereas the Christian Social People's Party (CSV), which had ruled the country uninterruptedly since 1945, was ousted from office, the Luxembourg Socialist Workers' Party (LSAP) entered the government with completely new plans for education. 7 As their senior partner, the Democratic Party (DP) formed a coalition with the LSAP, and attempted to gain a firmer foothold in Luxembourg's political system by placing education, particularly the controversial idea of the comprehensive school (polyvalente integrierte Gesamtschule), high on its agenda. 8 By challenging the former hegemony of the CSV, the new coalition thus signalled fundamental changes in Luxembourg's post-war history.
Methodologically, this study analyses the election manifestos of the three parties in 1974, and identifies their specific electoral and ideological strategies with regard to education and curriculum developments. Here, the term 'globalisation' is broadly defined as 'the intensification of worldwide social relations which link distant localities in such a way that local happenings are shaped by events occurring many miles away and vice versa', 9 or 'those processes by which the peoples of the world are incorporated into a single world society, global society'. 10 Against this background, the following research question will be answered: As carriers of different socio-political ideas incarnated in dynamic politicians, what were the similarities and differences between the LSAP, DP and CSV when it came to education policy? The election manifestos as sources are complemented by a selection of press articles from around the year 1974: a number of writings from the daily papers Luxemburger Wort (closely affiliated to the CSV) and Tageblatt (closely affiliated to the LSAP), and the weekly paper D'Lëtzebuerger Land, 11 are examined. Here, attention is paid to the role of the mass media in interpreting specific education policies to the public, and to how the different newspapers reacted to and accelerated the political divisions. Did the new international environment, or 'globalization optique' to use Carney's vocabulary, have an effect on the way the parties and press were to contribute towards educational change? By intersecting partisan politics, global considerations and education, this paper ends with an assessment of the impact of the 1974 general election campaigns on Luxembourg's public and political discourse.
These questions are also interesting for international audiences, since they deal with similar educational issues, whether failed or successful reforms, that arose in a number of European countries during the 1960s and 1970s. In this sense, although the implementation of comprehensive schools was unsuccessful in Luxembourg, the country's reform proposals surrounding more equal opportunities form part of the wider European movement, which demanded democratisation and social integration in the sphere of education. 12 As authors such as Susanne Wiborg have shown, the success or failure of implementation depended largely on domestic power struggles: social and political factors which either undermined or accelerated change. 13 What were these 'endogenous factors' in Luxembourg?
The parliamentary elections of 1974 coincided with the beginning of the global economic crisis, which led to the restructuring of the traditional iron and steel industries in the south of the country, rising levels of immigration and unemployment, a demographic crisis and the reorganisation of labour. 14 Whereas European integration had taken a decisive step with the Merger Treaty of 1965, new forms of education were equally to play a crucial role in the new international order, which required urgent increases in educational investment. 15 It was in this changed structural context of society, and the resulting heated political climate, that the comprehensive school idea became controversial in Luxembourg when it was placed on the political agenda in 1974. 16 Looking superficially at the three election manifestos against this background, the parties come across as remarkably similar. The use of vocabulary and phrasing, repetitive, unapologetic and flamboyant throughout, appears alike, yet a closer investigation reveals radically different connotations attached to the same concepts. In this respect, different moral views, values and ideologies relating to education and social justice, 'the good society', 17 are of crucial importance for analysis, for 'They describe the practical arsenal of politics … in which not only specific policy objectives and paths, but also the self-image of the political system, the political class and social conflicts, are very clearly expressed and contested'. 18

Notes: 11 The political orientation of D'Lëtzebuerger Land is less obvious, for the paper claimed it did not belong to either the Right or the Left camp. However, it took an active part in educational debates by allowing both views in its coverage, and is thus included in this study as a more independent source. 12 See, above all, Susanne Wiborg.
Revolutionary vs. evolutionary education
The first apparent difference between the parties can be drawn between revolutionary and evolutionary education. The LSAP and DP shared a belief in the productive forces of education. In other words, they endeavoured to change society through schooling, and thereby aimed at solidarity, emancipation and social integration, via the introduction of the comprehensive school for instance, 19 while the CSV believed in education's potential to mitigate social cleavages, and by that, had its goal in social stability. 20 Gradual reform of the existing system became an important electoral strategy for the CSV, in line with the general ideology of Christian Democracy, as Van Kersbergen suggests: 'Christian democracy's model was in no sense an attempt to create universal solidarity. Rather it was a procedure for moderating societal cleavages while reinforcing social groups and group identities … to gain as broad support as it could possibly obtain.' 21 Nevertheless, it is worth pointing out that, at least before 1974, the CSV was not against all those 'revolutionary' education policies pushed forward by the DP and LSAP. Albeit cautious, the party at times sympathised with some of the ideas of the comprehensive school and the creation of uniform technical secondary education. To take an example, Pierre Frieden's views on this were published in the Luxemburger Wort on 19 January 1974: 'One should also not obstruct rash alternative solutions. The comprehensive school is for sure a school model worthy of discussion, even if there are increasing, serious objections against it. But whoever says today that nothing but the integrated comprehensive school brings the solution to our educational problems must be as wrong as the one who wants nothing but a rigid continuation of the existing school forms.' 22 Put differently, for the CSV progress was to take place at a slower speed, based on a step-by-step approach, cooperation and gradual evolution, not on experimentation or a 'revolutionary overthrow' of the existing system, as suggested by the future governing parties. 23 In principle, the CSV believed in equal opportunities as a democratic right, but it believed even more firmly in educational pluralism (Pluralismus), freedom of choice and individual (or family) liability; again, as a means towards social stability and, as its end-product, national progress. 24 For the CSV, the restructuring plans posed by the DP and LSAP were overly hasty, lacked clearly defined outcomes, and needed more time for maturation. In the eyes of the CSV, more research was needed to evaluate whether or not the reforms would work in Luxembourg. 25 In this sense, the CSV acted as the moral guardian of Luxembourg's educational traditions: 'Is it not that a bird in the hand is worth two in the bush?' 26

Notes: 18 Thomas Mergel, Propaganda nach Hitler: eine Kulturgeschichte des Wahlkampfs in der Bundesrepublik 1949-1990 (Göttingen: Wallstein Verlag, 2010), 12, 14; all translations from French and German in this paper are my own. 19 See Tageblatt, 'Die Gesamtschule im Gespräch', October 28, 1972, 11, or Robert Krieps, 'Discours lors du congrès pédagogiques de l'Association Européenne des Enseignants' and 'Je ne regrette rien', in Robert Krieps …
By contrast, the LSAP claimed that in implementing reformative and progressive policy, and a radical transformation of schooling conditions, the state and economy should also act responsibly and democratically, to safeguard genuine equality of opportunity. That is to say, by guaranteeing good educational levels for all pupils, the state could act as a lever to reduce the inequalities produced by the market. There were new societal demands attached to education, as trade unionist René Gregorius summarised the situation: 'One goes to the doctor for his health. So he should go to teachers and educators for his education.' 27 The DP supported the views of the LSAP but added that a closer link between education and economics should be warranted, which followed from the talent reserve discussions of the early 1960s. 28 By ensuring the use of those talents still lying idle, i.e., utility maximisation, the state could maintain a more productive workforce.
In its election manifesto (Fortschritt und Verantwortung: Aktionsprogramm der Luxemburger Sozialisten), formulated by Robert Goebbels and Lydie Schmit, the LSAP also criticised the authoritarian structures of Luxembourg's education system. 29 The party saw the current system as contaminated by the influence of the CSV, which stemmed from larger socio-economic power relations. The CSV was cast as the party of conservative capitalism and supporter of large foreign conglomerates (such as banks). 30 Thus, the major goal of the LSAP was 'to uncover the actual power relations in our society, and thereby to show that the socialist social order can only be achieved by changing the balance of power', 31 for 'Our society is dominated by capitalism'. 32 Therefore, the party also believed in radically changing society through education. The CSV, under the presidency of Nicolas Mosar, in turn prioritised current labour market demands, scientific knowledge, technical progress and economic growth. 33 Although the party manifesto (Grundsatz- und Aktionsprogramm der CSV) could be seen as fairly progressive at the time, at least in the sense that there were constant references to democratisation, equality of opportunities, a tighter interplay between social policy and education, and the new role of women, 34 there was no mention of the comprehensive school as a genuine alternative to the current four-tier system of secondary education. 35 With regard to the curriculum, Francis Hierzig, Secretary General of the General Federation of Luxembourgish Teachers (Fédération Générale des Instituteurs Luxembourgeois, FGIL), pointed out that 'A meaningful comparison [of different school subjects] requires profound curricular research, as it has never been done before in our country'. 36 The LSAP stressed the importance of political education in civic studies (Gesellschaftslehre), which would consist of history, social studies and geography, and be taught four times per week between the fifth and tenth grades. 37 The goal was to teach pupils social competences, cooperation and tolerance, to improve creative and practical working, and to encourage independent thinking; that is, learning should be not just about gaining pure 'knowledge', but also about changing attitudes and behaviour through participation in society (Mitbestimmung), cooperation (Zusammenarbeit) and personal independence (Selbständigkeit): 'Thus, the employee citizens must not be limited to the periodic exercise of the right to vote. Rather, it must be ensured that the population permanently participates in the political decision making and objective political information of the state and communities.' 38 Social emancipation in education was to be understood in this context as 'the ability to recognise dependencies, to understand heteronomy, in order to be free to determine oneself, and thus to be able to realise oneself … to get to know the phenomena behind social and individual reality and their dependence on needs, interests, authority and economy'. 39 The above, in stark contrast to the CSV, which claimed that educational reforms must stem from (and reinforce) 'actual' societal conditions, 40 was linked to a fierce separation between church and society: 'The Socialists [LSAP], however, actively disapprove of any abuse of religion for power political purposes.' 41
For them, in democratic socialism (demokratischer Sozialismus), it was naive to assume that capitalism would correct itself when it came to education policy; rather, optimal conditions for the realisation of equality, democracy and solidarity would have to be created by policy edicts: 'But it would be unrealistic to assume that the capitalist society would reform itself voluntarily, against one's better judgement perhaps, and grow into socialism.' 42 In the upper secondary school (Lycée), previously unknown art and music education (Kunst- und Musiksektion) should be incorporated in the curriculum. 43 In their manifesto (Programmpunkte und Optionen 1974), headed by incoming Prime Minister Gaston Thorn, the DP also insisted that current citizenship education (Bürgerkunde) was too limited and should be substantially expanded to include consumer education (Konsumentenbildung), in order for pupils to learn about new economic models and employment relations, meet the demands of modern productive forces, and recognise their future potential and shortcomings. 44 In addition, the reform proposals surrounding the comprehensive school implied a complete revision of the existing curriculum: demands for more flexibility and broader lesson plans, interaction between different subjects, course-based programmes, full-time schooling (Ganztagsschule, journée continue), a common core syllabus (Kernfächer, tronc commun), employment studies (Arbeitslehre, matières travaux manuels), and a large number of optional and specialisation modules tailored to individual needs. 45 At a European level, meanwhile, more comprehensive curricula were being rendered mandatory for future labour market mobility: 'Schools should be concerned about the motivation and aspirations of young people, who for the most part are under-motivated and unsure about their educational goals. While taking contemporary realities into account, they must discover what gives the pupil a sense of participation, and then provide guidance in an educational sense.... Labour mobility requires a type of education that has no specific orientation and avoids narrow specialisation.' 46
Democracy, progress and responsibility
The second distinction between the parties can be made between the concepts of 'democracy', 'progress' and 'responsibility'. By 'democracy', the LSAP argued for the better educational representation of the masses; that is, the current system was seen as serving only the minority interest (Goebbels and Schmit called this 'half-democracy', halbe Demokratie). 47 For the purpose of democratisation, it was important to them that the education system could better represent the large majority (a process they termed 'real democracy', echte Demokratie). 48 Writing that Luxembourg's developments should not be studied in isolation from other nations because of the small size of the country, they saw in the comprehensive school an essential tool, a long-awaited chance, to promote social democracy in the Grand Duchy, and by that, to contribute towards the democratisation of education. 49 Being also against the educational pluralism promoted by the CSV, the DP in turn stressed the importance of collective planning (Gesamtplanung), a comprehensive approach (Gesamtkonzeption), and the speedy abolition of all financial hurdles for the benefit of greater educational equality at all school levels. 50 The comprehensive school was seen as the party's long-term objective (Fernziel) to realise these goals. 51 Meanwhile, the CSV favoured a slow and cautious speed of change, freedom of choice and reforms based on well-established models rather than on quick fixes and experimentation. 52 Yet, the party had already recognised the acute need for reform under the former Frieden, Werner-Cravatte and Werner-Schaus governments: 'Nobody is arguing that there are no deficiencies in education, like in other societal branches and institutions, and that changes and improvements are possible and necessary.' 53 But, typical of the Grand Duchy, it was important first to follow the implementation of the reforms abroad, and then assess their potential applicability to Luxembourg. In this 'global' frame, German-speaking literature was particularly and frequently cited to catch up with the newest developments in educational research. 54 From Sweden, the Alva-Myrdal-Report of 1971 was acknowledged. 55 Austrian examples were mentioned in reference to special classes and possible solutions to their problems. 56 For more cross-national perspectives, the Annual Report of 1973 by the Council of Europe was discussed. Given the huge German influence on Luxembourg's primary education system, it is no surprise that French-speaking literature was often only mentioned in passing. 57 In short, globalisation was used as a prism through which domestic reforms were justified, to accelerate change and put pressure on the current system, viewed as outdated or out of touch with reality. 58 By 'progress', the LSAP referred to the urgent need to modernise Luxembourg's education so as to bring the system into line with 'international standards'. The party was especially worried about the country's alleged backwardness and lack of permeability (Durchlässigkeit) when compared with other countries, which in its view had been caused by the long rule of the CSV. 59 They were against the notion that talent was somehow innate or unchangeable, holding instead that it was conditioned by learning and socialisation processes, 'the social milieu' (soziales Milieu), which in the current system was characterised by an 'unjustified typification' (ungerechtfertigte Typisierung). 60
The DP added a more 'global' dimension to this by claiming that 'Fifty years of conservative education policy has brought only partial adjustments. Global reforms can no longer be avoided.' 61 Indeed, foreign developments and keeping up with international advances seemed of crucial importance for the DP, such as when the comprehensive school was under discussion: 'Globally, all these adjustments prepare the way towards the comprehensive school.' 62 The CSV put less emphasis on global trends, and focused more on specific Luxembourgish issues, such as the unusual language situation, by laying stress on educational pluralism, albeit that the party also recognised the need for cooperation at (and even partial harmonisation of) secondary levels. 63 Yet, at the core of the CSV's programme stood the staunch preservation of free choice, the need for private schools and their state aid: 'The free choice of school should not only be allowed but also made materially possible. Therefore, in the context of a private school law, state aid should be granted to private schools.' 64 This followed the initiative 'Equal chances' (Chances égales), led by Christian Democratic Member of Parliament Georges Margue. 65 Simultaneously the LSAP, being anti-religious and campaigning against tax-financed private schools (which were also supported by the Catholic Church), highlighted the importance of public schools: the '[b]asic requirement for a democratic education system is first of all the public school, which is the best protection against forced opinion and the most appropriate preparation for the democratic activities of our institutions'. 66 The party saw the private school as an antithesis of progress and modernisation, social inclusion and mobility, risk management, fair competition and equal life chances. To complement this outlook, the DP campaigned for more complementary classes for migrant children (e.g., in French), smaller class sizes and increased opportunities in further education. 67 By 'responsibility' the CSV referred to increased personal responsibilities and family duties: 'It [the party] is committed to ensuring that citizens are permitted to act through objective information and political education as critical, active and self-responsible people.' 68 The LSAP saw it quite differently. By 'responsibility' it meant, above all, a responsible state and economy. While the party mentioned the need to rationalise state bureaucracy and economy, it also added that this should be done with a social conscience, given the current global economic crisis. In this sense, the LSAP spoke of 'reasonable rationalisation' (vernünftige Rationalisierung). 69 At a more general level, this was coupled with the rejection of Luxembourg as a tax haven favoured by foreign business (again, seen as being sustained and even reinforced by the CSV), which now endangered the country under the influence of 'authority-impaired production forces' (autoritätshörige Produktionskräfte). 70 In education policy, 'The Socialists [LSAP] want the progressive realisation of a polyvalent, integrated comprehensive school, which will incorporate the so-called "second cycle", with differentiated learning and support groups, today's complementary and pre-vocational school classes, and also the lower levels of middle and secondary education'. 71
The DP, in turn, favoured closer links between education and economic life and, consequently, pushed both for the harmonisation of technical secondary education and for the coming of the comprehensive school. It was maintained that a better and broadly educated workforce, and cooperation between secondary institutions, formed an essential precondition of and contributed to macro-economic efficiency. 72 As part of this, the party warned against too early tracking (Auslese), and the concomitant social, political and economic risks linked to inequity: 'Equality is not just a financial problem, but presupposes a democratically conceived school…. In this integrated comprehensive school, the variety of previous school forms with their vertical separation walls disappear. The new education path is applied horizontally and this prevents the premature selection of pupils.' 73 On 3 March 1975, the Tageblatt also reported how the 'global' catchphrase was 'integration through differentiation': 'It [the comprehensive school] is just a starting point, creating the conditions without which a democratic school is not feasible, namely, integration of all students of the age cohort and differentiation as a means of integration.' 74 Being sceptical about this 'idealism', because of a lack of concrete evidence for the new school's superiority, the CSV announced: 'The question is: Can we sustain the quality of our school leaving certificates, if we suddenly have all school forms united organisationally under one roof? Because here we have a problem which cannot be swung away with empty pathos.' 75 Or, as an anonymous Lycée teacher complained in the Luxemburger Wort on 6 February 1975: 'So why destroy the current system, without knowing what you will replace it with, and without improving the other school types (middle and vocational education)?' 76 For many, perhaps unsurprisingly, the new school would lead to the lowering of standards (and thus cause problems at university entry levels), greater social inequality (since low-scoring pupils would not get the appropriate help they needed), and an explosive expansion of private schools (following from the decreased standards of public schools). 77 The answer to this critique from the Left was to emphasise 'less controversial' global developments: differentiation and individuality in teaching would mean that pupils should have an option to deepen their knowledge in optional subjects, ensuring also that more advanced learners would benefit from the new system and its curriculum. 78 Later differentiation (spätere Differenzierung), individual support (individuelle Förderung), and a decrease in social distance (soziale Distanz) between different social groups would be the methods used to ensure an increase in the equality of opportunity (Chancengerechtigkeit). 79

Notes: 71 Ibid., 30. The 'second cycle' here refers to the first three years of secondary education.
On the part of the CSV, particular attention was also drawn to the so-called 'complementary classes' (Komplementarklassen), a kind of continuation of the primary school track at the post-primary level alongside the middle school (Mittelschule), vocational institutions (Berufsschulen) and the Lycée, which were mostly composed of working-class children from migrant backgrounds who were thought to have an 'insecure future' in the proposed new order. For example, in 1974, the Luxemburger Wort asked: 'What should happen to the pupils of these classes in the planned merger of post-primary education? Will they also be integrated? Or will there perhaps be, necessarily, a kind of sidetrack set up inside the comprehensive school for each pupil who does not meet the set minimum requirements?' 80 On a different note, the LSAP recognised that, for many children attending these classes, failures in life and education were just the 'continuation of the fate of their parents' (Fortsetzung des Schicksals ihrer Eltern), 81 a situation which needed attention, since these pupils often left school after reaching the age of 16. In 1973, for example, 4000 young Luxembourgers attended these classes. 82 It would, however, also be too simplistic to conclude that the DP and LSAP formed a united front against the CSV. In effect, as the Tageblatt wrote in 1972, the new school also faced criticism from the Left of the political spectrum: 'Will the children be as well prepared for university as before? Will the famous trilingualism of our middle school students be maintained? Can we cope with this expensive school?' 83 This was coupled with scepticism on the part of the working classes, industry and trade unions: 'Does industry require difficult examinations, when its needs are met by specialists? Are there not already many working-class children in our post-primary schools?' 84 These comments were partially directed towards the 'Circle Connecting Critical Teachers' (Cercle de Liaison des Enseignants Critiques, CLEC), which complained that, in accordance with the DP and LSAP, 'The school is not an island of neutrality in a class society characterised by economic, political and cultural domination of the minority'. 85 Formed by public school teachers who petitioned for the rapid realisation of the comprehensive school, and who were supported by the FGIL, the CLEC further expounded that with failure rates of over 30% in primary schools (and rejection rates of over 60% in the Lycée), the current system had become economically and pedagogically unsustainable, posed unreasonable demands for pupils and teachers and thus needed radical reform: 'The basic evil of our school system, the equality of opportunity, is in that case not alleviated.' 86 Breaking down all institutional barriers in the lower levels of secondary education (i.e., grades seven to nine, or the 'second cycle'), and therefore also class, gender, ethnic, regional and other divisions, was seen as an answer to this dilemma, an issue that had reached its zenith in the public discourse of the early 1970s: 'This intolerable situation persists as we cling onto the separation that characterises our school system and has its consequence in premature selection.' 87 Not all teachers, however, shared the same concerns. To safeguard their position, the 'Association of Secondary School Teachers' (Association des Professeurs de l'Enseignement secondaire et supérieur, APESS) joined forces with the CSV and opposed comprehensive school reform. 88
To maintain the selective nature of the Lycée, many secondary school teachers, such as François Thill, Pierre Lech and Jean-Pierre Kraemer, campaigned against the alternatives of the LSAP and DP. However, after 1974 it was also claimed that the left-wing paper, Tageblatt, had intentionally discredited and politicised the APESS, which in principle had always been an apolitical and professional organisation. 89 Here, it must also be underlined that the conservative wing of the CSV, surprisingly silent in the first half of the 1970s, had by 1979 grown substantially stronger and become more radical in character. For instance, on 24 March 1978, it was argued in D'Lëtzebuerger Land that the comprehensive school, now a 'socialist doctrine' (doctrine socialiste) based on a 'utopian reform' (réforme utopique) or 'collectivist equalisation' (égalisation collectiviste), would lead to the slow ruin of Luxembourg's education system. 90 It would set unreasonable pedagogical requirements for teachers and their training, disadvantage more talented pupils given its heterogeneous clientele, and be impossible to manage because of its large size and complicated organisational structure. 91 This would cause further problems for students at foreign universities (due to their having gained lower skills compared with their counterparts in other countries), and ignore the contribution and value of national elites: 'It has become a national reality that our graduates permanently lose contact with other universities and the elite schools of neighbouring countries…. The trouble is that we are not at all in a hurry to put the cart before the horse, to tolerate the sabotage of our youth and the future of our country. And that is why we say no, and we will say it again and always.' 92
Conclusion
For Levin, most of the problems facing education reforms have to do with their structural aspects, which are fairly easy to change through policy edicts. Nevertheless, he sees little ground for optimism about any groundbreaking alterations, since 'the changes have been deeply influenced by dominant ideas rooted in the economic systems such as managerialism, choice, markets, and incentives'. 93 By intersecting politics, globalisation and education, this paper has assessed the impact of the general elections of 1974 on Luxembourg's public and political discourse. When we look at education policy there in 1974, it is perhaps no surprise that the election manifestos of the LSAP, DP and CSV reflect the larger socio-economic power relations Levin points to: education policy at this time became connected to wider conflicts and trends in society. As bearers of different ideologies, the parties showed very different approaches to education. Then, as carriers of these different socio-political ideas incarnated in dynamic politicians, what were the similarities and differences between the LSAP, DP and CSV when it came to education policy?
In Luxembourg's political discourse of the early 1970s, a strong belief that education could and should change society dominated the political landscape from the viewpoint of the LSAP and DP, their major aim being a transformation from an elite to a mass system of participation. The mind-set of the CSV was different: the aim of education was not to change society but to reconcile those societal conditions seen as already given. This is not to say that the CSV was somehow less ideological in its views, but simply that its relationship to change was circumscribed by a very different stance on policy altogether. This distinction was also purposefully translated into new curriculum proposals, as the parties applied their specific moral values to education: the importance of political education in civic studies (LSAP), consumer education (DP), and technical progress and scientific proficiency (CSV). In other words, the LSAP and DP aimed at changing attitudes. They saw the comprehensive school as a miniature of society, which would teach tolerance, cooperation and mutual respect, while the CSV stressed 'knowledge' in a more traditional sense, and saw the new school as a threat to individual freedom, parental choice and pluralism.
In the parliamentary elections of 1974, electoral strength also depended on the appeal of the parties' narratives over new forms of education, such as the comprehensive school, while the new role of the mass media, especially the emerging influence of the left-wing paper, Tageblatt, helped to polarise the heated political environment in favour of the LSAP and DP, and to challenge the dominance of the right-wing paper, Luxemburger Wort, affiliated to the CSV. 94 It thus follows that the parties and press played a central role in the proliferation of new educational norms, values and moral concepts. The different newspapers reacted to and accelerated deepening political divisions, while for the parties, the new international environment provided an additional opportunity to introduce and accelerate change. The support of the DP in 1974, in turn, enabled Robert Krieps (Minister for National Education, LSAP) and Guy Linster (Secretary of State for National Education, LSAP) to push through controversial education legislation towards the end of the 1970s: technical secondary education, the Lycée technique (by the law of 21 May 1979), 95 and the first integrated cycle of the common core syllabus, tronc commun (by the law of 23 April 1979), 96 albeit that the failure of the implementation of the latter after 1979 could be explained as part of a general refusal to move far too quickly, especially on the part of the CSV under Education Minister Fernand Boden. 97 Although the comprehensive school never materialised in Luxembourg, the political discourse of 1974-1979 opened up the former education system by paving the way towards the democratisation of education. For example, one of the milestones of 1979 could be seen in the later abolition of the complementary classes, which were absorbed by technical secondary education.
To conclude, whereas I have recently argued that globalisation put strong pressure on national education systems during the Cold War of the 1960s and 1970s, 98 the case of Luxembourg in 1974 seems to suggest almost the complete opposite. In Luxembourg, it was globalisation that was employed to justify domestic reforms, not the other way around, as has been proposed by previous literature. 99 In this respect, this paper has challenged the notion that global reforms were made legitimate by using domestic rhetoric, or 'to paint these innovations with a specifically national brush', as Rohstock and Lenz have it. 100 Rather, globalisation was used as a matrix through which national reforms were justified, in order to put pressure on Luxembourg's own system, often viewed as backward or in need of thorough reform. This was particularly true regarding the policies of the DP and LSAP, while the CSV more often resorted to nationalistic arguments to play down the calls for change. Education therefore came to exist in the critical conjuncture between 'domestic' and 'international'. Ultimately, the endless and emotional rhetoric of 1974-1979 surrounding the new school form, its global dimensions, curricular changes, broad press coverage and partial institutionalisation in 1979, radicalised the conservative wing of the CSV, which aided the party back into office in 1979. In a word, opposition had a strategic role to play. It helped to swing the LSAP and DP to the political Left, even if this was not necessarily what the CSV had originally intended, at least in the sense that it forced the party to re-evaluate some of its education policies for the 1980s.
All this was largely in line with similar developments in many other European countries. 101 The topic therefore also has contemporary relevance for larger audiences, since it deals with issues surrounding education's relationship to democracy and equal life chances, and how these concepts were understood in fundamentally different ways by different political forces. This study thus agrees with Wiborg in that 'there is a need to return social inequality to the top of the agenda as societies have become increasingly unequal. Globalization has engendered forces that have dislocated traditional bonds, fragmented societies, and reinforced conflict and division.' 102 However, it is worth mentioning that, especially in comparison with Germany and the Scandinavian countries, legislation took place quite late in Luxembourg, perhaps given the pervasive role of Christian Democracy in the country. In Germany, where partial implementation occurred, the first comprehensive schools were built in the mid-1960s. In Scandinavia, where total transformation took place, this was already in place in the early 1960s. In relation to contemporary proposals for comprehensive school reform in other European countries, such as the Dutch polarisation of liberals and social democrats in this regard, 103 the example of Luxembourg also shows how educational change can indeed be introduced and then reversed politically in a very short time period.
Ultimately, the 1974 battles over the direction of education are perhaps best understood as a historical sequence of events that were marked by both change and persistence. It did make a difference which party took office and which was eventually thrown out. Politics mattered. Or, to be more precise, political parties differed in their education practices. The more the Left pushed for reform, it seems, the more the Right defended the existing system, which accelerated the radicalisation of party politics on both sides of the political spectrum: the ideological gap between the parties grew wider. This contributed to a polarisation of party policies, intra-party volatilities and an unoccupied political centre, untypical of Luxembourg's post-war politics. To understand these fragmentations helps us to comprehend why the search for national integration through educational equality was, and continues to be, such a contested endeavour. | 2019-05-08T13:28:30.915Z | 2016-09-01T00:00:00.000 | {
"year": 2016,
"sha1": "833bf761a5f8441bdb31ea6d7971a1cc3eb6e596",
"oa_license": "CCBYNCSA",
"oa_url": "http://orbilu.uni.lu/handle/10993/22124",
"oa_status": "GREEN",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "b5e995fe81061f3f50e5638d34261a5f1f152be5",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Sociology"
]
} |
12360 | pes2o/s2orc | v3-fos-license | Applicability of the ReproQ client experiences questionnaire for quality improvement in maternity care
Background. The ReproQuestionnaire (ReproQ) measures the client's experience with maternity care, following the WHO responsiveness model. In 2015, the ReproQ was appointed as the national client experience questionnaire and will be added to the national list of indicators in maternity care. For the ReproQ to be used in quality improvement, the questionnaire should be able to identify best and worst practices. To achieve this, the ReproQ should be reliable and able to identify relevant differences. Methods and Findings. We sent questionnaires to 17,867 women six weeks after labor (response 32%). Additionally, we invited 915 women for the retest (response 29%). Next we determined the test-retest reliability, the Minimally Important Difference (MID) and six known-group comparisons, using two scoring methods: the percentage of women with at least one negative experience and the mean score. Reliability was 'good' for both the percentage negative experience and the mean score (absolute agreement = 79%; intraclass correlation coefficient = 0.78). The MID was 11% for the percentage negative and 0.15 for the mean score. Application of the MIDs revealed relevant differences in women's experience with regard to professional continuity, setting continuity and travel time. Conclusions. The measurement characteristics of the ReproQ support its use in the quality improvement cycle. Test-retest reliability was good, and the observed minimal important difference allows for discrimination of good and poor performers, also at the level of specific features of performance.
INTRODUCTION
Client experiences are considered to be important independent indicators of health care performance (Valentine, Bonsel & Murray, 2007; Valentine et al., 2003). Being relevant in their own right, client experiences also affect clinical outcome through several pathways (Campbell, Roland & Buetow, 2000; Sitzia & Wood, 1997; Wensing et al., 1998; Williams, 1994). For example, clients who truly understand the explanation of their caregiver are more likely to comply with treatment or to change lifestyle, and arguably patient-unfriendly clinical staff or an intimidating hospital setting will not support recovery.
The routine measurement and use of client experiences play an indispensable role in systematic quality improvement (Haugum et al., 2014; Weinick et al., 2014). For that purpose, the client information can be used in a two-stage quality cycle. In the first stage, care providers that perform above or below average are identified. This process is also called benchmarking (Department of Health, 2010; Ellis, 2006; Ettorchi-Tardy, Levif & Michel, 2012; Kay, 2007). In the second stage, assumed underperformers are invited to improve their results through an internal quality cycle, in which above-average performers ('best practices') may give guidance. Translated technically, the quality cycle starts with the quantification of individual client experiences and clinical outcomes (case-mix adjusted), followed by ranking across providers. Next, after defining thresholds, under- and best-performing units are identified. Finally, client experiences and other outcomes are analyzed in more detail. Preferably this breakdown of data is combined with face-to-face interactions among professionals. This more refined analysis offers tangible targets for improvement, unlike the global outcomes used in benchmarking.
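To make the first, benchmarking stage concrete, the following minimal sketch (in Python) ranks hypothetical case-mix-adjusted unit scores and flags assumed under- and best-performing units. The unit names, simulated scores and the plus/minus one SD threshold are illustrative assumptions only, not this study's actual procedure:

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical case-mix-adjusted mean experience scores (range 1-4) per unit.
unit_scores = {f"unit_{i:02d}": s for i, s in enumerate(rng.normal(3.5, 0.15, 20))}

scores = np.array(list(unit_scores.values()))
mean, sd = scores.mean(), scores.std(ddof=1)

# Stage 1: rank units and flag them with a simple +/- 1 SD margin; a real
# cycle would use an agreed threshold (e.g., MID-based) before stage 2 follow-up.
for unit, s in sorted(unit_scores.items(), key=lambda kv: kv[1]):
    flag = "under" if s < mean - sd else ("best" if s > mean + sd else "average")
    print(f"{unit}: {s:.2f} ({flag})")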
To include clients' experiences with maternity care in routine care quality evaluation and quality improvement, and in view of professionals, clinical organizations and health insurance companies who pressed for measuring quality from the perspective of the client, we developed the Repro Questionnaire (ReproQ) (Scheerhagen et al., 2015b). This integral measure covers the period from the first antenatal intake up to the postpartum period. The ReproQ consists of eight domains (33 experience items), following the so-called WHO responsiveness model (Valentine, Bonsel & Murray, 2007; Valentine et al., 2003). All items strictly focus on service delivery from the client's perspective.
Previously we demonstrated the feasibility, internal consistency and construct validity of the ReproQ (Scheerhagen et al., 2015b). In the current paper we focus on the psychometric properties needed to assess the suitability of the ReproQ for the two-stage quality improvement process. This suitability rests on two pillars: (1) are the judgments of pregnant women reliable, or stated otherwise, if the client survey is repeated do we get the same average and the same ranking of units; and (2) if we observe a quantitative difference between the average judgments of two care units, say 0.2 ReproQ points in our case, is this difference a relevant one? Epidemiologists have developed a robust method to decide which differences are relevant in the case of difficult-to-grasp clinical outcomes: the so-called minimal important difference (MID) approach. We tested these properties of the ReproQ to establish whether the ReproQ is suitable both for global benchmarking with a summary score (hence the assignment of below- and above-average performance), and for detailed profiling of providers, units or client groups once underperforming units or client groups have been identified using the MID.
The data presented were collected during the provisional implementation of ReproQ measurement in about one third of all perinatal units (hospitals with nearby midwife practices) in the Netherlands between October 2013 and January 2015. In 2015, the ReproQ was appointed as the national client experience questionnaire and will be added to the national list of indicators in maternity care (CPZ, 2015). Before the ReproQ was added to this national list, the indicators only measured clinical outcomes (e.g., mortality, morbidity or complications) or parameters of professional performance. Adding the ReproQ to this list of indicators meets the WHO's recommendation to measure the performance of health care systems also from the client's perspective. As an indicator of performance, the ReproQ should meet the conditions for a successful quality improvement cycle. This study explores two of these conditions: the reliability of the ReproQ's performance measurements, and the MID as an aid to identify relevant differences between clients or perinatal units. The focus in this paper is on clients' experiences with labor, because this is the key event in maternity care. Antenatal care aims to create the best possible situation or starting point for labor: antenatal risk assessment is performed and, if necessary, preventive measures and treatment of these risks are embedded. Postnatal care is provided in accordance with the outcome of the delivery for mother and child. Additionally, care during delivery is comparable in most Western countries, while antenatal and postnatal care are subject to more variation across countries or health systems.
Repro questionnaire
The questionnaire consists of two analogous versions: version A covers the experiences during pregnancy (antenatal) and version B covers the experiences during birth and the postnatal period. Version A is presented at about eight months gestational age, version B about six weeks postpartum. Each version asks for experiences at two instances; in the case of version B, these are the experiences during labor and those in the subsequent postpartum week. As the questions only differ with respect to the context referred to (say, experienced respect is asked for the first antenatal visits, late in pregnancy, during labor, and during postpartum care), the resulting dataset represents a similar measurement covering four time intervals. In this article we focus on data from version B on the experiences during labor, the third time point.
The 8-domain WHO responsiveness model is the conceptual basis of the ReproQ. Responsiveness is the way a client is treated by the professional and the environment in which the client is treated. Responsiveness is operationalized in four domains that represent interactions with health professionals (dignity, autonomy, confidentiality, and communication), and four domains that reflect experiences with the organizational setting (prompt attention, access to family and community support, quality of basic amenities, and choice and continuity of care) (see Table 1) (Valentine, Bonsel & Murray, 2007; Valentine et al., 2003). The response mode of the experience items uniformly used four categories: ''never,'' ''sometimes,'' ''often,'' and ''always,'' with a numerical range of 1 (worst) to 4 (best). An additional question, asking which two domains are considered the most important, allows for a personalized scoring. Additional questions provide information on: (1) the rating of the global experience; (2) the care process, the location of care (e.g., home or hospital) and the primary health professional responsible (e.g., midwife or obstetrician); (3) the clinical outcome of both mother and child, as perceived by the mother; (4) information about previous pregnancies; and (5) the client's socio-demographic characteristics.

Table 1. The eight domains of the WHO responsiveness model.
Dignity: receiving care in a respectful, caring, non-discriminatory setting.
Autonomy: the need to involve individuals in the decision-making process to the extent that they wish this to occur; the right of patients of sound mind to refuse treatment for themselves.
Confidentiality: the privacy of the environment in which consultations are conducted by health providers; the confidentiality of medical records and information about individuals.
Communication: the notion that providers explain clearly to the patient and family the nature of the illness, and details of the required treatment and options; it also includes providing time for patients to understand their symptoms and to ask questions.
Prompt attention: care provided readily or as soon as necessary.
Social considerations: the feeling of being cared for and loved, valued, esteemed and able to count on others should the need arise.
Basic amenities: the extent to which the physical infrastructure of a health facility is welcoming and pleasant.
Choice and continuity: the power or opportunity to select, which requires more than one option.

Content validity of the ReproQ-version-0 was determined through structured interviews with pregnant women, women who had recently given birth, and health care professionals. All responsiveness domains were judged relevant. Construct validity of the adapted ReproQ-version-1 was determined through a web-based survey, and based on response patterns, exploratory factor analysis, association of the overall score with a Visual Analogue Scale, and known-group comparisons. The exploratory factor analysis supported the assumed domain structure and suggested several adaptations. Correlation of the VAS rating and the overall ReproQ score supported validity for the antenatal and postnatal versions of the ReproQ. Further details are described elsewhere (Scheerhagen et al., 2015b).
Data collection
In the current study, data were obtained from three sources. The majority of the data were collected by three postnatal care organizations (organizations that deliver postnatal care over a period of seven to 10 days). Additional data were collected by the national Birth Centre Study (a university-based research organization) and from 10 perinatal units (each a hospital with associated midwifery practices). There were no exclusion criteria regarding organization, health care professional or client.
Data collection implied that clients were invited to participate by their care provider on behalf of the research team. With their consent, name and e-mail address were obtained and provided to the organization that distributed the digital survey. Women provided formal informed consent at the beginning of the questionnaire. For the Birth Centre Study and the 10 perinatal units, the research team received clients' names and e-mail information for recruitment after written informed consent had been obtained. The person who included the woman can, theoretically, be the same as the health care professional in charge of the delivery (usually a gynecologist or community midwife), but this is highly unlikely to be the case and not typical of our obstetric care system.
During data collection, an extensive data privacy protocol applied. The Medical Ethical Review Board of the Erasmus Medical Center, Rotterdam, the Netherlands, approved the study protocol (study number MEC-2013-455).
Data were collected in two waves. The first wave ran from October 2013 to January 2015. Six weeks after the expected date of labor, all participating women received an invitation to fill out the postnatal ReproQ questionnaire. Non-responding women received a reminder two weeks later. These data were used to determine the MID and to compare the known groups. The second wave ran from October 2014 to January 2015. All women who had previously filled out the postnatal ReproQ measurement in the first wave were invited to fill out their experiences again for the test-retest comparison. Excluded from invitation were women whose answers in the postnatal ReproQ were largely incomplete. The intended test-retest interval was 14 days. Since women's situation might change during the test-retest interval, we added the following item for verification: ''Have you experienced something important in the last two weeks?''
Participating women
Sample size was not formally calculated since we had no prior data to use as input. Additionally, a formal sample size calculation seems questionable since statistical testing does not play a role in the estimation of the MID. Moreover, we anticipated that the provisional national implementation of this survey would provide sufficient numbers of responses for the study questions. For the MID and known-groups comparison, we included all usable responses. For the test-retest, we aimed at 200 usable questionnaires.
In the first wave, we invited 17,867 women who had recently given birth, of whom 5,760 responded to the survey (32%). We excluded 877 women, because they filled out fewer than two of the following characteristics: ethnicity, educational level, care process, and experienced outcome of the mother and baby. We considered these background data critical to describe the study participants in sufficient detail, and to understand and interpret the ReproQ scores and the associated MIDs. In the second wave, we invited 915 women for the retest, of whom 265 responded (29%). We excluded 57 women from the retest, because their situation had changed negatively or was unknown. We did so because a test-retest analysis requires that context and conditions between the test and retest situations remain unaltered (De Vet et al., 2011). To judge representativeness, we compared the characteristics of the 208 women in the test-retest with the 4,675 women who filled out the test once, using standard chi-square tests.
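As a sketch of this representativeness check, the fragment below runs a standard chi-square test on a 2x2 table of test-only versus retest respondents by ethnic background. The cell counts are illustrative placeholders, not the study's data:

import numpy as np
from scipy.stats import chi2_contingency

# Rows: test-only vs. retest respondents; columns: Western vs. non-Western.
table = np.array([[4277, 398],
                  [200, 8]])  # illustrative counts only

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.3f}")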
ReproQ score model
We used two scoring models to summarize women's experiences: the proportion of women with negative experience(s) (in short: 'percentage negative') and the mean score. Both were calculated for the eight individual domains, the four personal domains, the four setting domains, and as a total score across all domains. Percentage negative was defined as filling out the response category 'never' in at least one of the domains and/or filling out 'sometimes' in a domain that the client identified as most important. The percentage negative method avoids compensation of a negative experience by positive experiences on other items or domains, whereas the mean score allows such compensation. The mean scores were computed as unweighted averages, treating never (1), sometimes (2), often (3) and always (4) numerically.
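A minimal sketch of the two scoring models on simulated item data is given below; the array shapes, the simulated responses and the domains marked as most important are assumptions for illustration only:

import numpy as np

rng = np.random.default_rng(1)
n_women, n_domains, items_per_domain = 6, 8, 4

# responses[w, d, i]: answer (1=never .. 4=always) of woman w, domain d, item i.
responses = rng.integers(1, 5, size=(n_women, n_domains, items_per_domain))
# Two domains per woman marked as most important.
important = np.array([rng.choice(n_domains, size=2, replace=False)
                      for _ in range(n_women)])

# Mean score: unweighted average over all items (allows compensation).
mean_score = responses.reshape(n_women, -1).mean(axis=1)

# Percentage negative: 'never' anywhere, or 'sometimes' in an important domain.
any_never = (responses == 1).any(axis=(1, 2))
sometimes_per_domain = (responses == 2).any(axis=2)   # shape (women, domains)
rows = np.arange(n_women)[:, None]
important_sometimes = sometimes_per_domain[rows, important].any(axis=1)
negative = any_never | important_sometimes

print("mean scores:", np.round(mean_score, 2))
print(f"percentage negative: {100 * negative.mean():.0f}%")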
Minimally important difference
We determined the MID using (1) the anchor-based method (the difference in score between two adjacent levels of an anchor question (Copay et al., 2007)) and (2) the distribution-based method (the difference in distribution of observed scores (Revicki et al., 2008)), each having its merits.
As the anchor question we used the global rating of a client's experience: ''Overall, how would you rate the care received during your labor and care after birth?'' (in short: 'global rating'). This anchor question emerged as the best option in a review study by the Picker Institute (Graham & Maccormick, 2012). Women responded to this question on a 10-point VAS. We determined the mean score and the percentage negative of the individual domains, and the personal, setting and total scores, for the VAS ratings 7, 8 and 9. We used the global rating of '8' as the reference category, this being the mode in our data (Copay et al., 2007). Next, the MID was calculated by subtracting the mean scores of the adjacent categories 7 and 9 from the mean score of the reference category 8, and checking whether the differences 7-8 and 8-9 were equal (Copay et al., 2007). The same procedure was used to calculate the MID of the percentage negative. The distribution-based MID was only calculated for the mean score.
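The following sketch illustrates the anchor-based calculation on simulated scores grouped by the global rating; the group sizes and score distributions are placeholders, not the study's data:

import numpy as np

rng = np.random.default_rng(2)

# Hypothetical mean-score samples for women with global ratings 7, 8 and 9.
scores_by_rating = {7: rng.normal(3.3, 0.3, 200),
                    8: rng.normal(3.5, 0.3, 500),   # reference (modal) category
                    9: rng.normal(3.7, 0.3, 300)}

ref = scores_by_rating[8].mean()
mid_7_8 = ref - scores_by_rating[7].mean()
mid_8_9 = scores_by_rating[9].mean() - ref
print(f"MID 7-8: {mid_7_8:.2f}, MID 8-9: {mid_8_9:.2f}")

# A difference between two units exceeding the MID would count as relevant.
observed_diff = 0.20
print("relevant:", observed_diff >= min(mid_7_8, mid_8_9))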
To determine the MID with distribution-based methods, we calculated the standard error of measurement (SEM) (Wyrwich, Tierney & Wolinsky, 2002) and one half of the standard deviation (½SD) (King, 2011; Norman, Sloan & Wyrwich, 2003). The SEM is estimated as the baseline SD of the measurement multiplied by the square root of 1 minus its reliability coefficient (the ICC from the test-retest assessment) (Rejas et al., 2011; Vernon et al., 2010; Wyrwich, Tierney & Wolinsky, 2002). A difference larger than 1 SEM is thought to indicate a true difference between groups (Copay et al., 2007; Revicki et al., 2008). The ½SD margin is regarded as a relevant difference as well (Copay et al., 2007; Norman, Sloan & Wyrwich, 2003; Revicki et al., 2008).
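In code, the two distribution-based margins reduce to a few lines; the SD and ICC values below are placeholders, not the study estimates.

```python
import math

def sem(baseline_sd: float, icc: float) -> float:
    # SEM = SD * sqrt(1 - reliability), with the ICC as reliability coefficient.
    return baseline_sd * math.sqrt(1.0 - icc)

baseline_sd, icc = 0.30, 0.78   # illustrative values only
print(sem(baseline_sd, icc))    # ~0.141; differences > 1 SEM suggest a true difference
print(baseline_sd / 2.0)        # 0.15; the half-SD margin
```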
Clinical known-group comparison
We used six so-called known-group comparisons (in terms of clinical outcome) to assess the discriminative validity of the ReproQ. Here we determined whether women from different 'known groups' also have different mean experience scores and percentages negative (setting, personal, overall), and whether these differences exceed the anchor-based MIDs for 7-8 and 8-9.
We made the following 'known groups'. First, we compared the scores of women who did and did not meet, before labor, the health care professional who supervised their labor, this being a proxy of professional continuity (Saultz & Lochner, 2005). Second, for setting continuity, we compared the scores of women who were entirely low risk versus women who shifted from low risk to high risk during parturition. These women have the highest mortality and morbidity risk (Evers et al., 2010; Poeran et al., 2015). Third, we compared the scores of women whose labor started within office hours (8:00 am-5:00 pm, Mondays to Fridays) versus outside office hours (Gould, Qin & Chavez, 2005; Gould et al., 2003; Stephansson et al., 2003; Urato et al., 2006). Fourth, we compared the scores of women who had to travel 15 min or more with women who had to travel less than 15 min. In agreement with the literature, we only included in this comparison women who were transferred from home to hospital during parturition and whose birth was unplanned (Poeran et al., 2014; Ravelli et al., 2011). Fifth, we compared the scores of women who had an emergency caesarean section with women who had a planned caesarean section (Elvedi-Gasparovic, Klepac-Pulanic & Peter, 2006). Finally, as a proxy of concentration of care, we compared the scores of women who delivered in small hospitals (<750 labors annually (first quartile)) vs. large hospitals (≥1,500 labors annually (fourth quartile)) (Finnstrom et al., 2006; Moster, Lie & Markestad, 1999; Moster, Lie & Markestad, 2001; Phibbs et al., 1996; Tracy et al., 2006).

RESULTS

Table 2 presents the characteristics of the responding women who filled out the test (n = 4,675) and the women who filled out the retest (n = 208). Mean age was 31 years (SD = 4.3); 398 (8%) women were of non-Western background; and 368 (8%) women reported a low educational level (both percentages slightly below the national average). About half of the women gave birth for the first time (52%; about the national average), and 2,313 (48%) women did not know the health care professional who supervised labor. 527 (11%) women were referred to secondary care during their pregnancy; 1,724 (36%) were referred during parturition (about the national average); and 618 (12%) women had a cesarean section (below the national average of 18%). (The percentage of missing data was below 5% for all characteristics and is therefore not presented.) The characteristics of the women who filled out the retest differed significantly in terms of ethnic background (more Western women), setting continuity (more women were referred to secondary care during pregnancy), and global rating (women gave a higher global rating).

Test-retest reliability

Table 3 shows the test-retest reliability. Across all experience items combined, 47% of the women reported one or more negative experiences when filling out the test; in the retest, 40% of women reported one or more negative experiences. The absolute test-retest agreement of 'having a negative experience' was 78.8% CI [72.6-84.2]. The ICC of the total scores (mean test = 3.79; mean retest = 3.78) was 0.78, showing good reliability. The mean test-retest difference of the total score was 0.01; limits of agreement were +0.31 and −0.31. The reliability of the personal and setting scores was similar to that of the total score. The level of agreement regarding negative experiences within individual domains was excellent, except for the domains Autonomy and Choice and Continuity, which showed lower agreement.
Table 4 shows the MID results for the two scoring models, including the results for the 7-8 and 8-9 differences. Using the percentage negative experience, the MID was 11.0%, based on the difference between the global ratings of 7 and 8. This means that respondents rating their overall experience with a 7 on the global rating scale showed 11% more cases of negative experiences than respondents with a rating of 8. When comparing the rating of 8 with 9, the MID was 9.2%. For the personal score, the MID using the 7-8 difference was 8.5%, which was comparable to the MID of 8.9% using the 8-9 difference. For the setting score, the MID 7-8 was 5.4%, which was smaller than the MID 8-9 (6.2%). The MIDs of the individual domains were all ≤8%. Using the ReproQ overall mean instead of the percentage negative experiences, the anchor-based MID based on the 7-8 distance was 0.15; based on the 8-9 difference it was 0.10. The mean-MIDs of the personal score were slightly larger than those of the setting score, and the domain MIDs showed some heterogeneity; both patterns were also observed for the negative-experience MIDs. The use of the mean score also allowed the computation of a distribution-based MID. The distribution-based mean-MIDs of the 7-8 differences of the personal, setting and total scores were similar to the anchor-based MIDs. For the individual domains, all distribution-based mean-MIDs were somewhat larger than the anchor-based mean-MIDs.

Figure 1A shows the impact of the six known groups with an assumed influence on client experiences, using the percentage of negative experiences as scoring model. Two out of six comparisons showed differences in agreement with expectations. Already knowing the professional who supervised labor (i.e., continuity of professional) had a considerable impact: the differences in total score and personal score between women who knew and did not know their professional were larger than the associated MIDs (7-8 difference). Similarly, referral during labor (i.e., discontinuity of setting) was associated with differences in total, personal and setting scores larger than the MID. Figure 1B shows the same known-groups comparison, now using the mean ReproQ scores and the associated MIDs. The difference in mean overall, setting and personal scores between women who received only primary care and women who were transferred during parturition was larger than the corresponding MIDs (7-8 difference). All three difference scores of personal continuity and setting continuity were larger than the MIDs (8-9 difference). Further details are presented in File S1.
DISCUSSION
To determine the suitability of the ReproQ in the two-stage quality improvement cycle, we assessed its test-retest reliability and determined the MID according to two methods. Test-retest reliability was good for both scoring models. The anchor-based MID of the percentage negative experiences was 11%; the anchor-based MID of the mean score was 0.15 (on a range of 1-4). The distribution-based MIDs (SEM) proved about similar to the anchor-based mean-MIDs of the overall, personal and setting scores. However, for the domain scores the SEM exceeded the anchor-based mean-MIDs. The known-group comparisons showed that knowing the professional who supervised one's labor and not being referred during labor had a considerable impact on the experience scores.

[Figure 1 legend (n = 4,883). Professional continuity: supervisor of the delivery known vs. unknown (52%/48%). Setting continuity: primary care only vs. referred during labor (37%/36%). Onset of delivery: in vs. outside office hours (30%/70%). Travel time: women who had to travel <15 min vs. ≥15 min, when transferred from home to hospital during labor (17%/11%). Cesarean section: planned vs. emergency cesarean section (4%/8%). Hospital size: <750 deliveries per year vs. ≥1,500 deliveries per year (3%/12%).]

As the observed ReproQ scores deviated by more than the MID, we believe this instrument can be used as a benchmark with an interpretation of meaningful differences beyond statistical significance. Thus, the ReproQ can successfully identify areas that need improvement in subgroups of clients. One should be aware that the MID cannot be used to identify changes in (poor) experiences within clients.
Applying the percentage negative to the test-retest reliability showed that the reliability of the domains was higher than that of the summary scores. This is surprising, because the likelihood of reporting a negative experience in both the test and the retest is considerably larger for the summary scores than for the domains.
For the individual domains, fewer women reported a negative experience when filling out the retest than the test. The domains Autonomy and Choice and Continuity showed similar percentages of negative experiences in the test and retest, though the reliability of these domains was low compared to the other domains. This indicates that the women who reported a negative experience in the test were not the same women who reported a negative experience in the retest. Possible explanations for these effects are recall bias and/or response shifts, e.g., women adjusting their opinion after sharing their experiences with family and friends.
The summary scores showed higher reliability than the domain scores when using the mean score method. The explanation is that ICCs invariably increase when summary scores include more items. When calculating a summary score, differences within a domain can be compensated by differences between domains, which increases the stability of the summary scores.
The reliability of the domains Confidentiality and Social considerations was somewhat lower than for the other domains. It is possible that women feel that these concepts are difficult to judge, which increases the fluctuations in domain scores.
Both the negative-MID and the mean-MID varied across adjacent response categories. As the global rating increases, neither the percentage negative decreases nor the mean score increases linearly across all scores. This suggests that a gain in client experience as a result of quality improvement is not equivalent to the loss in client experience as a result of deterioration. One explanation is that clients do not weigh all domains equally in the global rating. Another explanation is that respondents are not inclined to use the extreme response categories.
The distribution-based MIDs (SEM) were similar to the anchor-based mean-MID of the overall, personal and setting scores. However, for the domain scores the SEM exceeded the anchor-based mean-MIDs, because the SDs of the domain scores were larger than the SDs of the summary scores, and because the domain ICC scores were lower.
The known-group comparisons were based on previously reported differences in clinical outcomes. Professional and setting continuity indeed showed large and relevant differences in experience scores. These differences are probably due to deviation from the expected or planned process of care, which may be a stressful event even when the deviation is clinically necessary. Differences in experiences for the other clinical known-group comparisons were not relevant. It is possible that these experiences are not, or only partly, correlated with the clinical outcomes. Another explanation might be that the experiences were reported in retrospect. Perhaps women's experiences were biased afterwards by a good maternal or child outcome, or by better, sufficient or intensive postnatal care when complications occurred during labor. It is also possible that women's experiences of the process of labor were affected by hormones and stress, or that women lacked information on what normal maternity care is. (Note that about half of the women were primiparous.)
Strengths
To our knowledge, this is the first study to clarify the meaning or relevance of score differences obtained with client experience questionnaires. So far, studies of the MID have mainly focused on quality-of-life scores (Brozek, Guyatt & Schunemann, 2006; Copay et al., 2007; Guyatt et al., 2002). Secondly, the use of global ratings is debated because of their unknown validity and reliability (Copay et al., 2007). By using the overarching question of the British National Patient Survey Coordination Centre as anchor question, we addressed this critique: this global rating has been extensively tested and has good content and construct validity (Graham & Maccormick, 2012). Thirdly, we explored the differences between the 7-8 and 8-9 changes in global rating. By doing so, we were able to check the assumption that both differences were similar. Inevitably, preference scales are to some extent non-linear in interpretation, which applies both to the ReproQ and to scales used for anchoring. At the upper or lower ends of the scale the interpretation of gains and losses may differ, and the 'degree of relevance' of one step higher (7-8) or lower (8-9) decreases. Since benchmarking is usually based on the comparison of averages, the impact of non-linearity is probably small.
We previously introduced the percentage negative experiences as an alternative scoring to the frequently used mean score. Three remarks should be made. Firstly, we deliberately focused on the percentage of negative rather than positive experiences. Focusing on the latter might contribute to the validity of the findings; however, from a practical perspective, we chose to emphasize the percentage negative experiences because in quality improvement cycles the most benefit is obtained when poorly performing providers or centers are identified and improvements are implemented. The percentage negative experiences therefore seems more relevant for quality improvement than the percentage positive experiences. We expect that the benefit of quality improvements for centers with a high percentage of positive experiences is smaller than the benefit for poorly performing centers. Secondly, both the percentage negative experiences and the mean score can be used for benchmarking purposes. Despite differences in approach, both may lead to the same identification of relevant differences in subgroups (see Fig. 1). Finally, one could argue that our approach to the MID is conservative, as it actually defines the size of a relevant minimal difference between averages at the group level on the basis of differences in individual global ratings.
Limitations
First, we sent the postnatal questionnaire six weeks after the expected date of labor, but it is unknown if this timing was optimal. An invitation later than six weeks could result in recall bias due to exposure to other influences (e.g., women return to work, assuming their usual habits and patterns), and/or in non-response because sharing one's birth experiences may seem less relevant. An invitation before six weeks is not necessarily a better option. It may result in better recollection of the experiences but the risk of mood swings and hormonal disturbances might affect responses and response rates.
Related to this, the postnatal questionnaire was not sent six weeks after the actual date of labor; we only had the expected date of delivery as anchor. To protect women's privacy, we were not allowed to collect the precise date of childbirth in the ReproQ. Since the expected date may deviate from the true date, women may have been surveyed earlier (but not more than two weeks earlier) when they delivered after the expected date, or about four to five weeks later for most women when they delivered before the expected date. In both cases postnatal care had already ended, and it is unlikely that these differences in the timing of the ReproQ invitation resulted in different ReproQ scores between these groups.
Secondly, women with a low educational level and non-Western women were underrepresented despite considerable efforts to have them participate. Most likely this reflects selective non-response, as non-Western women report more negative experiences than Western women (Scheerhagen et al., 2015a), and non-Western women are more often low educated and/or more health illiterate (Agyemang et al., 2006; Engelhard, 2007; Fransen, Harris & Essink-Bot, 2013). Addition of the non-response group is likely to widen the gap between poor and good experiences. This does not necessarily affect the estimated MID. Our non-Western women reported both a lower ReproQ score and a lower global rating than Western women. Repeating the MID calculations without this subgroup (non-Western women with a low educational level) yielded about similar results: when this subgroup was excluded, the percentage-negative MID changed between −0.1% and +1.2%, and the mean MID between −0.01 and +0.03. Hence, the underrepresentation of these subgroups has limited impact on the estimated MID. Regrettably, we could not find additional evidence on the influence of selection bias on the psychometric properties of other client experience surveys with similar characteristics in terms of study population, length, and mode of administration (e.g., read out loud by clinicians vs. stand-alone self-report) (Rejas et al., 2011; Vernon et al., 2010). The impact of care process, birth outcome and socio-demographics on experience scores, test-retest reliability and the MID requires further study.
Thirdly, the MID is often used to identify changes in a patient's situation over time (Brozek, Guyatt & Schunemann, 2006; Copay et al., 2007; Guyatt et al., 2002). Given the small time window of the labor phase, it is unfeasible to validly assess changes in survey-based experiences within clients. Therefore, our MID estimates are based on cross-sectional comparisons. Our MID cannot be used to identify changes within a client, but only between health care providers, or within health care providers over time. These provider differences are more relevant than changes within clients for improving the quality of maternity care through the two-stage quality cycle.
Finally, we aimed at suitability of the ReproQ survey across countries, by using the universal WHO Responsiveness concept, by following an accepted strategy for survey development, and by avoiding any preferences towards providers, specific professionals or organizational structures. It is unclear whether clients in other countries have the same importance ratings, experiences and MIDs as Dutch clients. Other self-report instruments in maternity care, such as the Women's Experience of Maternity Care Questionnaire of the NHS, overall indicate very good experiences (Peterson et al., 2005; Redshaw & Heikkila, 2010; Smith, 2001; Smith, 2011). Therefore, the MID in other developed countries will probably be of about the same magnitude as our MID estimates.
Future use
The psychometrics of the ReproQ appear adequate for benchmarking, for targeting quality improvement based on the profile of domain scores, and for monitoring domain-specific quality improvements. As part of a routine two-stage quality improvement cycle, as proposed by ICHOM (ICHOM, 2015), we can identify relevant differences between birth care units that perform better or worse. The MID based on the percentage negative discriminates (known) groups better than the mean-MID. Furthermore, we recommend using a multi-item questionnaire for benchmarking, such as the ReproQ, instead of a single-item benchmark: the reliability of a single-item benchmark is much lower and, unlike the ReproQ, single items are less effective in guiding specific improvements.
To increase the response rate, alternative modes of data collection should be explored. One suggestion is to invite women to fill out the questionnaire directly while waiting for their health care professional in the waiting room. Another suggestion, to minimize selection and response bias, is to send all women the questionnaire with informed consent, without involvement of individual health care professionals. A third suggestion is to translate the questionnaire and provide it in several languages for non-Western women.
Additionally, future use should pay attention to ethnicity and socio-economic background, beyond routine case-mix adjustment procedures. Adjustment always bears the risk that worse experiences are unintentionally neutralized, taking away the incentive for improvement.
With many benchmarking activities in place, the second part of the quality cycle urgently needs more attention and explicit implementation. Evidence-based routine quality cycles are still rare. Implementation requires truly information-guided cycles, specified in some detail. The benefit of such an approach has been demonstrated in the evaluation of innovations (Haugum et al., 2014; Weinick et al., 2014). The introduction of MIDs in quality cycles may convince stakeholders that progress through innovation is meaningful.
CONCLUSION
Maternity care is continuously developing, partly based on the measurement of client experiences. The ReproQ questionnaire, based on the WHO Responsiveness model, is suitable for use in quality improvement cycles: we showed good test-retest reliability, and by determining the minimally important difference, relevant differences can be identified. | 2017-09-24T22:56:04.060Z | 2016-07-13T00:00:00.000 | {
"year": 2016,
"sha1": "d61fa890e2db9f622f4d876087fcf5bf4cccb71a",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.7717/peerj.2092",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d61fa890e2db9f622f4d876087fcf5bf4cccb71a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
240541896 | pes2o/s2orc | v3-fos-license | Influence of thermal treatment on Anthocyanin, total phenolic content and antioxidant capacity of Pigmented Maize (Zea mays L.)
Pigmented maize (Zea mays L.) is a healthy crop due to its favorable proximate composition and phytochemicals. Thermal treatment is widely used to enhance phytochemical constituents in different kinds of crops. This research evaluated the impact of roasting temperature (100, 115, 130 °C) and duration (10, 15, 20 min) on the anthocyanin content, total phenolic content and antioxidant capacity of pigmented maize. Results showed that thermal treatment at 115 °C for 10 min significantly improved anthocyanin in pigmented maize; however, this content was lower at higher temperatures or prolonged exposure times. Meanwhile, the total phenolic content and antioxidant capacity of the pigmented maize were highest when roasted at 100 °C for 10 min. This research showed that the phytochemical constituents and antioxidant capacity of pigmented maize are seriously damaged at high temperatures and extended roasting durations. Accordingly, producers should pay close attention to thermal conditions during roasting.
Introduction
Pigmented maize (Zea mays L.) is an important crop in Vietnam. It contains high levels of carbohydrates, proteins, lipids, anthocyanins, minerals and phenolics (1). Abundant anthocyanins and phenolics are located in the aleurone and pericarp layers of the cereal, contributing to the pigment of the maize species (2). Pigmented maize seed has a high anthocyanin content in the aleurone layer and a lower content in the starchy endosperm (3). Anthocyanins and phenolics have functional properties against chronic diseases due to their antioxidant and anti-inflammatory activity (4,5). They also offer protection against mutagenesis (6). Maize is commonly utilized for animal feed, cornmeal, grits, starch, flour, tortillas and snacks (5). Pigmented maize is also roasted for use in bread in the bakery industry.
Roasting involves applying dry heat to change the physicochemical, nutritional and phytochemical properties of a raw material (7). It is a thermal treatment widely applied in grain processing to improve nutritional bioavailability, phytochemical efficiency and organoleptic properties, and to reduce toxic components (8). Roasting maize grains improves aroma, antioxidant capacity, and the food quality of semi-finished and final products (9). Phytic acid in oat flour was greatly reduced by roasting, supporting calcium bioavailability (10). The mineral bioavailability of millet and biofortified bean flour was also significantly improved by roasting (11,12). Roasting induced modifications in the proximate composition and biological properties of the coffee bean, supporting the release of derivative antioxidants (13). Roasted rice wine had a better flavour compared to unroasted rice wine (8). Rice powder showed decreased levels of free amino acids after roasting at high temperatures and extended durations (14). Roasting had a positive impact on the bioactive constituents of soybean (15), wheat (16), barley (17), pistachio nuts (18), cocoa beans (19), coffee beans (20) and wattle seeds (21). The objective of this study was to verify the influence of temperature (100, 115, 130 °C) and time (10, 15, 20 min) during convective roasting on the anthocyanin content, total phenolic content and antioxidant capacity of pigmented maize.
Material
The pigmented maize was collected from Mỹ Xuyên district, Soc Trang province, Vietnam. It was harvested at maturity and dehydrated in an infrared drying oven to 15% moisture content. Chemical reagents were all of analytical grade, supplied by Merck (Germany) and Sigma Aldrich (USA).
Research method
500 g of each pigmented maize seed sample was roasted at the selected temperature (100, 115, 130 °C) and duration (10, 15, 20 min) in an oven (Memmert, model Universal oven UF30). The chamber load was exposed to the defined temperatures at atmospheric pressure in the interior of a drying oven. Thermal energy was transferred to the chamber load by convection and radiation. The roasted seed was then cooled to ambient conditions, ready for analysis. Anthocyanin content (mg/100 g) was measured following Abdel-Aal and Hucl (22). Total phenolic content (TPC, mg GAE/100 g) was examined by the Folin-Ciocalteu reagent assay (23). Free radical scavenging activity (DPPH, mg Trolox/100 g) was evaluated by the method described by Bakar (24).
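As one illustration of how such assay readings are typically converted, the sketch below maps DPPH absorbances to Trolox equivalents through a linear standard curve; all numbers, and the linear-curve assumption itself, are hypothetical rather than taken from the cited protocols.

```python
import numpy as np

def dpph_inhibition(a_control: float, a_sample: float) -> float:
    # Percent radical-scavenging activity from control and sample absorbances.
    return (a_control - a_sample) / a_control * 100.0

# Hypothetical Trolox standard curve: inhibition (%) vs. Trolox (mg/100 g).
trolox_mg = np.array([20.0, 40.0, 60.0, 80.0])
inhibition = np.array([15.0, 31.0, 44.0, 60.0])
slope, intercept = np.polyfit(inhibition, trolox_mg, 1)

sample = dpph_inhibition(a_control=0.95, a_sample=0.42)   # ~55.8 %
print(slope * sample + intercept)  # Trolox-equivalent antioxidant capacity
```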
Statistical analysis
The experiments were run in 5 replications with different groups of samples. The data are presented as mean ± standard deviation. Statistical analysis was performed with Statgraphics Centurion version XVI.
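A sketch of an equivalent factorial analysis in Python (statsmodels assumed); the response values are synthetic placeholders, not the measured data:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Synthetic 3 x 3 x 5 design: temperature (°C) x duration (min) x replicate.
rows = [(t, d, r, 30.0 + (t + d + 3 * r) % 7 / 10.0)
        for t in (100, 115, 130) for d in (10, 15, 20) for r in range(5)]
df = pd.DataFrame(rows, columns=["temp", "time", "rep", "anthocyanin"])

model = ols("anthocyanin ~ C(temp) * C(time)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # main effects and interaction terms
```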
Results and Discussion
Anthocyanin exerts a major activity against oxidative stress (6). The impact of roasting temperature and duration on the anthocyanin content of the roasted pigmented maize is shown in Table 1. The highest anthocyanin content (31.15±0.02 mg/100 g) in the pigmented maize was observed after roasting at 115 °C for 10 min. This content was lower at higher temperatures or prolonged exposure times. Anthocyanin is highly sensitive to high temperatures (25). Anthocyanin was seriously damaged at higher roasting temperatures (>115 °C) and longer roasting times (>10 min) due to the decomposition of anthocyanin molecules through the hydrolysis of glycosidic links (26). Temperature can seriously affect anthocyanin's stability and its pigment intensity (27). Anthocyanin in black rice grain decreased dramatically with an increase in temperature from 100-140 °C (28). Anthocyanin in glutinous rice powder was seriously decomposed by spray-drying at 160-180 °C (29). One study reported the decomposition of anthocyanin in potatoes treated at temperatures >100 °C (30). In similar research, anthocyanin decomposed after the rupture of the glycosidic moiety and the formation of chalcones (31). The most suitable roasting temperature and duration for achieving the highest yield of anthocyanin were 100 °C for up to 20 min for pigmented rice and 200 °C for 20 min for nonpigmented rice (32).
The major phenolics of maize are ferulic acid and anthocyanin (5). The effect of roasting temperature and time on the total phenolic content of the roasted pigmented maize is presented in Table 2. The highest total phenolic content (169.52±0.03 mg GAE/100 g) in the pigmented maize was observed after roasting at 100 °C for 10 min. Longer exposure times and higher temperatures resulted in lower phenolic content, suggesting that more intense roasting significantly decreased the total phenolic content. Roasting induced an accumulation of total phenolic content due to thermal modification of chemical constituents via cell wall disruption in quinoa seed (33). Degradation of insoluble phenolics and an accumulation of soluble ones were noticed in peanut seeds roasted at 170 °C (31). The total phenolics of black soybean were greatly degraded by roasting at 210 °C for 30 min (34). Phenolics were increased by roasting for 30-90 min at 150 °C; however, they were significantly decomposed by roasting for extended periods (35). Roasting is considered one of the most innovative processing techniques to improve total phenolics in broomcorn millet (36). The phenolic content of pistachio was increased by roasting at 110 °C for 16 min (37). The total phenolic content of fenugreek seed was greatly improved by roasting at 130 °C for 7 min (38). In other research, the total polyphenol content of roasted maize dramatically increased with higher roasting temperature and longer roasting time (39).
The stable DPPH radical, with maximum absorption at 515 nm, is commonly applied to estimate the free radical scavenging activity of hydrogen-donating antioxidants in cereal (40). The effect of roasting temperature and time on the DPPH antioxidant capacity of the roasted pigmented maize is presented in Table 3. The highest DPPH antioxidant capacity (89.15±0.02 mg Trolox/100 g) in the pigmented maize was recorded after roasting at 100 °C for 10 min. Longer exposure times and higher temperatures induced lower antioxidant capacity. Antioxidants in pigmented maize may mostly be covalently bonded to insoluble polymers (41). Mild heating induced cell wall disruption and the release of antioxidants from insoluble particles of maize. The differences in antioxidant capacity could originate from the formation of derivative compounds with potential antioxidant capacity and their decomposition under excess temperature and duration (42,43). Heated samples showed chain-breaking and oxygen-scavenging activities (44). Roasting was reported to enhance antioxidant potential by alteration of the biochemical ingredients of cereal grains (33). The antioxidant activity of fenugreek seed was greatly improved by roasting at 130 °C for 7 min (38). The antioxidant capacity of the unpolished grain of nonpigmented rice was increased by roasting at 60 °C for 3 min (45). The most suitable roasting temperature and duration for achieving the highest total phenol content and antioxidant capacity were 100 °C for 20 min for pigmented rice and 200 °C for 20 min for nonpigmented rice (31).
Conclusion
Pigmented maize includes a significant amount of nutrients, minerals, vitamins and specific flavours. The anthocyanin content, total phenolic content and antioxidant capacity of pigmented maize are unstable and susceptible to degradation by high temperature. They are maintained effectively by roasting at 100-115 °C within 10 min. The bioactive constituents of pigmented maize are significantly damaged by excess thermal treatment. Therefore, cereal processors should pay attention to thermal conditions to minimize harmful impacts on phytochemical components. Roasting can be considered an important pretreatment step to prolong food stability and enhance the efficiency of further processing steps. | 2021-10-20T16:43:30.769Z | 2021-09-12T00:00:00.000 | {
"year": 2021,
"sha1": "775cbf23af7b442d44d5ac8fd0215e74b2a523b8",
"oa_license": "CCBY",
"oa_url": "https://horizonepublishing.com/journals/index.php/PST/article/download/1294/1031",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "22c8d54e9e4b1b0589fa80475900f94b66284e6b",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
119201836 | pes2o/s2orc | v3-fos-license | On Testing Entropic Inequalities for Superconducting Qudit
The aim of this work is to verify the new entropic and information inequalities for non-composite systems using the experimental $5 \times 5$ density matrix of a qudit state, measured by the tomographic method in a multi-level superconducting circuit. These inequalities are well known for bipartite and tripartite systems, but have never been tested for superconducting qudits. Entropic inequalities can also be used to evaluate the accuracy of experimental data, and the value of mutual information deduced from them may characterize correlations between different degrees of freedom in a noncomposite system.
INTRODUCTION
Properties of composite quantum systems, i.e. systems containing subsystems, have been extensively studied during the last few decades, which resulted in numerous practical applications. These systems were also described in terms of classical information theory [1] in the quantum domain [2] and their information and entropic characteristics were investigated, including the von Neumann entropy and quantum mutual information, discord related measures, entropic inequalities, contextuality, causality, subadditivity and strong subadditivity conditions.
On the contrary, the idea of using noncomposite quantum systems for quantum technologies was suggested [3-5], and quantum correlations in such systems have been analyzed only in recent times [6,7]. The latter opened a way of mapping information and entropic measures for composite quantum systems onto noncomposite quantum systems [6-11].
Along with the development of quantum information theory, tremendous progress has been made in experimental control over quantum systems. In particular, experiments with superconducting circuits based on Josephson junction devices [12,13] have been developing rapidly [14]. Specifically, spectroscopic [15,16] and time-domain [17] properties of such systems have been studied both theoretically and experimentally. With the improvement of the coherence time of superconducting qubits, it became possible to obtain the density matrices of such systems using quantum state tomography [18] as well as Wigner tomography [19].
In this work, we aim to verify the entropic and information inequalities using the experimental 5 × 5 density matrix of a qudit state (j = 2), obtained using direct Wigner tomography in a superconducting circuit [19,20]. The inequalities were obtained using the approach of [6-10] to get analogs of the subadditivity and strong subadditivity conditions, well known for bipartite and tripartite systems, for a single qudit state.
SUPERCONDUCTING CIRCUITS
Superconducting circuits with Josephson junctions are macroscopic quantum objects that can be several micrometers wide while still preserving quantum properties. This happens because they are artificially isolated from the environment, which leaves them with a single degree of freedom. The intrinsic parameters of these circuits can be engineered as desired and adjusted with an external parameter (for example, a magnetic field). Such superconducting circuits are thereby often called "artificial atoms".
Josephson junction
The Josephson junction in superconducting circuits serves as a non-dissipative nonlinear element (namely, a nonlinear inductance). It consists of two superconductors, separated by a thin insulating layer, through which Cooper pairs can coherently tunnel. This system was described by Brian Josephson [21], who showed that the superconducting current across the junction depends on the phase difference φ between the superconductors:

I = I_c sin φ. (1)

Here I_c stands for the maximum current which can flow through the junction without any dissipation, i.e. the critical current. Josephson also showed that when a voltage V is applied across the junction, the phase difference changes in time, which leads to oscillations of the current with angular frequency ω:

ω = dφ/dt = 2eV/ħ. (2)

When we substitute this into the time derivative of Eq. (1) and compare it to Faraday's law, we obtain the Josephson inductance:

L_J = ħ/(2e I_c cos φ). (3)

As the Josephson junction has some intrinsic capacitance C, it behaves as a nonlinear oscillator with angular frequency ω_p:

ω_p = sqrt(2e I_c/(ħ C)). (4)

FIG. 1: The tilted washboard potential and quantized energy levels inside one of the potential wells.

The total current flow through the junction can be written as J = I_c sin φ + C dV/dt. Substituting V = (ħ/2e) dφ/dt from Eq. (2), we obtain

(ħ/2e) C d²φ/dt² + I_c sin φ = J,

the equation of motion of a particle with mass C(ħ/2e)² in the tilted washboard potential U(φ) = −(ħ/2e)(I_c cos φ + J φ), whose slope ∂U/∂φ sets the force; the potential is shown in Fig. 1(a).
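For a sense of scale, here is a short numerical check of Eqs. (3)-(4) for plausible junction parameters; the values of I_c and C below are assumptions, not the parameters of the device discussed later.

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
e = 1.602176634e-19      # elementary charge, C

I_c = 2e-6               # assumed critical current, 2 uA
C = 1e-12                # assumed junction capacitance, 1 pF

L_J = hbar / (2 * e * I_c)               # Josephson inductance at phi = 0
omega_p = math.sqrt(2 * e * I_c / (hbar * C))
print(L_J)                               # ~1.6e-10 H
print(omega_p / (2 * math.pi) / 1e9)     # plasma frequency ~12 GHz
print(1 / math.sqrt(L_J * C))            # equals omega_p: the LC-oscillator form
```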
Superconducting qudit
A closer look at one of the wells in the tilted washboard potential in Fig.1(b) with the quantized energy levels gives us a perfectly suitable d-level system (qudit). Varying the potential by an external magnetic field, we can achieve a desired number of energy levels in the well. The physical implementation of this system is called the Josephson phase circuit [22,23] and is shown in Fig. 2.
The quantum state of the Josephson phase circuit is controlled via DC and microwave pulses of bias current. The measurement of the state employs escape from the potential well via tunneling. For example, to measure the occupation probability of state |1⟩ one can pump microwaves at frequency ω_41, which will induce a |1⟩ → |4⟩ transition. The state will then rapidly tunnel due to the large tunneling rate Γ_4. When the tunneling occurs, a voltage appears across the junction, which can be measured directly by an on-chip SQUID.
In this paper we utilize the results obtained in the experiment by Shalibo et al. [19,20], in which the Wigner distribution of the Josephson phase circuit was directly measured using simple tomography pulses.
ENTROPIC INEQUALITIES
Quantum states are generally described by the density matrix operator ρ, which has the following properties: ρ = ρ†, Tr ρ = 1, and ρ ≥ 0. We consider a 5 × 5 density matrix ρ for a qudit with j = 2 and rewrite it as a 6 × 6 matrix by adding one more zero row and zero column. Looking at this system, one can realize that it can be viewed as a tensor product of two subsystems: a qubit and a qutrit. So, using the invertible mapping of indices 1 ↔ (−1, −1/2); 2 ↔ (−1, +1/2); 3 ↔ (0, −1/2); 4 ↔ (0, +1/2); 5 ↔ (1, −1/2); 6 ↔ (1, +1/2), we obtain the density matrix which describes a bipartite qubit-qutrit state. The density matrices of the subsystems are generally derived by taking the partial trace over the corresponding indices; we propose a simplified approach of dividing the density matrix into several blocks of lower dimension and obtaining the subsystem matrices from these blocks.

Now we can take a look at correlations in our system. One of the most important correlation characteristics is entropy. In this work we deal with the von Neumann entropy [24], S(ρ) = −Tr ρ ln ρ. For the von Neumann entropy of the bipartite system one can write the subadditivity condition

−Tr ρ ln ρ ≤ −Tr ρ_1 ln ρ_1 − Tr ρ_2 ln ρ_2. (13)

The same process can be repeated for another partition of the 6 × 6 density matrix, for which the subadditivity condition takes an analogous form, Eq. (17). Next we add two more zero rows and columns to this matrix to get an 8 × 8 matrix. The system described by this density matrix can be divided into three subsystems (represented by 2 × 2 matrices) by a similar invertible mapping of indices. Here, we use the same approach of dividing the matrix into blocks to calculate the partial traces and get the matrices of the subsystems.
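The block-wise reduction and the subadditivity check (13) can be sketched in a few lines of numpy; the 5 × 5 matrix below is a toy diagonal example, not the measured state.

```python
import numpy as np

def von_neumann(rho: np.ndarray) -> float:
    # S(rho) = -Tr(rho ln rho), computed from the eigenvalues (0 ln 0 -> 0).
    vals = np.linalg.eigvalsh(rho)
    vals = vals[vals > 1e-12]
    return float(-(vals * np.log(vals)).sum())

rho5 = np.diag([0.40, 0.25, 0.15, 0.12, 0.08]).astype(complex)  # toy qudit state
rho6 = np.zeros((6, 6), complex)
rho6[:5, :5] = rho5                       # add one zero row and one zero column

# Flat index = 2 * (qutrit index) + (qubit index), per the invertible mapping.
r = rho6.reshape(3, 2, 3, 2)
rho_qutrit = np.einsum('iaja->ij', r)     # trace out the qubit block-wise
rho_qubit = np.einsum('iaib->ab', r)      # trace out the qutrit block-wise

assert von_neumann(rho6) <= (von_neumann(rho_qutrit)
                             + von_neumann(rho_qubit) + 1e-12)
```

With this product-basis ordering, the einsum contractions coincide with the block sums described in the text.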
The density matrices that we use hereinafter are the matrix of the second subsystem, R_2, and two joint matrices of the "qubit-qubit" subsystems, ρ_12 and ρ_23. For this kind of tripartite system one can write the strong subadditivity condition [25]:

−Tr ρ ln ρ − Tr R_2 ln R_2 ≤ −Tr ρ_12 ln ρ_12 − Tr ρ_23 ln ρ_23. (22)

VERIFYING EXPERIMENTAL DATA

Next, we calculate the density matrices of the subsystems from the experimentally obtained 5 × 5 density matrix. This density matrix corresponds to the qudit mentioned in the section Superconducting qudit and was measured in [19,20,26]. One can also find this matrix in the Supplementary material.
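Continuing the sketch above, the strong subadditivity condition (22) can be tested once the matrix is padded to 8 × 8 and read as three qubits; the binary index ordering below is an assumption standing in for the paper's explicit mapping, and the state is again a toy example rather than the measured matrix.

```python
import numpy as np

def von_neumann(rho: np.ndarray) -> float:
    vals = np.linalg.eigvalsh(rho)
    vals = vals[vals > 1e-12]
    return float(-(vals * np.log(vals)).sum())

rho8 = np.zeros((8, 8), complex)
rho8[:5, :5] = np.diag([0.40, 0.25, 0.15, 0.12, 0.08])  # padded toy state

t = rho8.reshape(2, 2, 2, 2, 2, 2)            # axes (q1, q2, q3, q1', q2', q3')
rho_12 = np.einsum('abcABc->abAB', t).reshape(4, 4)   # trace out qubit 3
rho_23 = np.einsum('abcaBC->bcBC', t).reshape(4, 4)   # trace out qubit 1
R_2 = np.einsum('abcaBc->bB', t)                      # trace out qubits 1 and 3

lhs = von_neumann(rho8) + von_neumann(R_2)
rhs = von_neumann(rho_12) + von_neumann(rho_23)
assert lhs <= rhs + 1e-12                     # Eq. (22) holds for any valid state
```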
The density matrices of Eqs. (10) and (11) are as follows: Analogously, for the other way of dividing the system into subsystems (Eqs. (15) and (16)), we obtain the following density matrices:
CONCLUSIONS
We have checked that the experimentally measured density matrix of a superconducting qudit [19] satisfies the new entropic inequalities for non-composite systems, given by equations (13), (17) and (22). These inequalities can further be used to evaluate the accuracy of experimental data. Moreover, the value of mutual information deduced from entropic inequalities may characterize correlations between different degrees of freedom in a noncomposite system. There also exist other inequalities for the von Neumann and q-entropy, which will be checked in future publications. | 2015-05-20T07:08:17.000Z | 2015-04-30T00:00:00.000 | {
"year": 2015,
"sha1": "e07eb18b8f14a7005749b415da1bb864089dbe60",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1504.08203",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "e07eb18b8f14a7005749b415da1bb864089dbe60",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
236941994 | pes2o/s2orc | v3-fos-license | Difficulty manipulation and feedback strategies on performance during a novel fine motor coordination task
Improving the acquisition and retention of a new motor skill is of great importance. The present study (i) investigated the effects of difficulty manipulation strategies (gradual difficulty), combined with different modalities of feedback (FB) frequency, on performance accuracy and consistency when learning a novel fine motor coordination task, and (ii) examined relationships between novel fine motor task performance and executive function (EF), working memory (WM), and perceived difficulty (PD). Thirty-six right-handed, novice physical education students volunteered to participate in this study. Participants were divided into three progressive-difficulty groups: 100% visual FB (FB1), 50% FB (FB2), and 33% FB (FB3). Progressive difficulty was increased by manipulating the distance to the target: 2 m, 2.37 m, and 3.56 m. Three FB modalities were investigated: 100% visual FB (100% FB), a 50% reduced feedback condition (50% RFB), and a 33% reduced feedback condition (33% RFB). Performance assessments were conducted following the familiarization, acquisition, and retention learning phases. Two stress conditions of dart throwing were investigated: a free condition (FC) and a time pressure condition (TPC). After the learning intervention, the data showed that, under the free condition, the 100% FB group had a significant improvement in accuracy during all learning phases. Under the time pressure condition, the measured variables (accuracy and consistency) showed a significant linear improvement in performance for the 50% RFB and 33% RFB groups. The association of RFB frequency with task difficulty (50% group) may constitute a more appropriate and manageable cognitive load compared to the 33% RFB and 100% FB groups. The present findings could have practical implications for practitioners because, while strategies are clearly necessary for improving learning, the efficacy of the process appears to be essentially based on the characteristics of the learners.
Introduction
Skilled movement is fundamental for success across many activities; however, the learning of such movements can be affected by numerous conditions. Factors such as the amount of practice, type of feedback, practice schedule [1], specificity, positioning, and timing [2] are all vital in skill acquisition. However, it is perhaps more important to understand how to optimize learning, as well as to identify factors associated with acquisition and retention in motor learning [3].
Manipulation of difficulty level is a learning strategy used to improve motor task performance [4,5], and task difficulty (TD) is defined as a subjective perception assessed by those performing the task [6]. Task complexity is sometimes used interchangeably with TD. It is assumed that the more complex a task, the greater the negative effect on response time and accuracy [7]. Earlier findings have shown that difficulty manipulation leads to durable throwing accuracy and consistency [8]; those authors reported that this durability was related to a significant decrease in perceived difficulty for the same learned task. Additionally, Sawers and Hahn [9] investigated the influence of gradual vs. sudden training on retention performance, concluding that the large difficulty increases of sudden training protocols may not be necessary when learning a novel locomotor task. Although further explanation has been provided regarding the complex relationships among the learner, the task, and the practice variables, the concept of TD remains an elusive construct [10]. Moreover, there is no well-known approach to adjusting practice conditions such that the functional difficulty of the task corresponds to the optimal challenge point [11]. In addition, the most efficient mechanism for adjusting functional difficulty to match the optimal challenge point is still unknown [12].
Although numerous studies have investigated how different sources of feedback optimize learning [13], there remains a paucity of evidence in the area of motor learning in children [14]. Previous studies demonstrate that providing augmented feedback, i.e., information related to the achievement of a motor skill, improves performance during motor learning [15,16]. Knowledge of results (KR), provided by extrinsic augmented feedback, is effective in facilitating student engagement, promoting a positive perception of ability, and improving performance in challenging tasks [17]. According to the guidance hypothesis, however, frequent KR leads to negative effects on motor learning [18]. Providing frequent visual feedback has been widely investigated as a modality that may improve skill acquisition [19,20]; it allows subjects to evaluate their performance and hypotheses and to correct their own errors [21]. In addition, frequent use of extrinsic feedback may enable learners to use relevant sources of information, develop an improved capacity for intrinsic feedback evaluation [20], and improve future performance [22]. Conversely, it appears that while the guidance effect of extrinsic feedback benefits immediate skill learning, it does not contribute to persistent performance at retention: subjects who practiced a motor task with reduced FB performed better in a delayed retention test than subjects who received augmented feedback during or after every practice trial [23,24]. Based on augmented FB manipulation, it has been demonstrated that TD and task-related experience may interact with the number of trials, and that reduced FB frequencies can benefit the learning of a simple striking task for both novice and experienced participants [25]. Moreover, withholding feedback could increase TD, thus making practice more challenging and allowing for improvement in error detection and correction [26]. With regard to complex skill learning (involving bimanual coordination), research suggests that frequent visual FB (100%) may be beneficial [27,26]. It is also well known that frequent visual FB promotes strong feedback dependency, particularly for simple tasks [28]; however, contradictions remain when the same concept is considered for complex tasks. It has been demonstrated that performance on a delayed retention test was better after a reduced-FB condition than after an every-trial practice condition [23,24,28]. The effectiveness of online visual FB has also been demonstrated in retention tests after practicing complex tasks [29]; in contrast, Fujii et al. [30] reported better performance retention in the 100% FB condition. Several studies have replicated the finding that frequent visual feedback (100%) can be beneficial in complex skill learning (e.g., involving bimanual coordination) [27,26], as well as in acquisition, retention, and transfer tests (e.g., a dart throwing task) [31].
Previously, it has been concluded that children use feedback differently from adults and may require longer periods of practice with gradual feedback reduction [14]. Learning with reduced FB could improve performance in throwing tasks via enhancement of implicit processes [14]. In addition, previous studies have suggested that such learning acts independently of working memory, and that generalization of the proposed method requires further investigation [21]. In contrast, a crossover interaction has been observed between working memory and intervening TD: individuals with low working memory scores benefited more when TD was easy rather than difficult, whereas individuals with high working memory scores showed the opposite effect [32]. Moreover, working memory has been shown to reach mature performance only in the transition between late adolescence and early adulthood [33]. Given this, and the fact that the information behind late-provided extrinsic feedback must be kept in working memory to improve error detection, it remains contentious whether children are able to appropriately combine intrinsic and extrinsic feedback [20].
Furthermore, it is worth noting that the effects of practice and augmented feedback (AF) variables on learning are much more complex than initially believed [34]. Although previous research has shown the relevance of learning methods based on reduced FB frequency and TD, little is known about how children learn motor skills despite well-known age-related performance discrepancies [14]. It should be acknowledged that reduced FB frequency and TD have traditionally been investigated separately.
To the best of our knowledge, studies simultaneously investigating the manipulation of both strategies (manipulation of TD and reduced FB frequency) during the learning and retention of a novel fine motor coordination task are lacking. Therefore, the aim of this study was (i) to determine whether TD strategies (i.e., gradual manipulation of difficulty level), combined with reduced FB, influenced learning performance in a novel fine motor coordination task (dart throwing), and (ii) to explore relationships between accuracy and consistency variables in the dart throwing task and executive function, working memory, and perceived difficulty. It was hypothesized that differences in performance variables would be observed between the combined TD and reduced FB frequency groups in the acquisition and retention tests, in relation to cognitive performance and associated perceptions.
Participants
Thirty-six right-handed children (age = 10.72 ± 0.89 years, body height = 149.61 ± 8.94 cm and body mass = 41.33 ± 10.49 kg; mean ± SD) volunteered to participate in this study. Groups were formed with the constraint that participants were approximately matched on pre-test performance (i.e., throwing nine darts to strike as close as possible to the bullseye) from the regular distance (i.e., 2.37 m) [8,35], under two experimental stress conditions (with and without time pressure) [8]. They were assigned to either a 33% feedback group (33% RFB; n = 13), a 50% feedback group (50% RFB; n = 11) or a 100% feedback group (100% FB; n = 12). In the acquisition phase, participants in the 100% condition received visual FB after every trial. The FB frequency in the 50% condition was reduced to one visual FB every two trials (54 of 108 practice trials), and in the 33% condition to one visual FB every three trials (36 of 108 practice trials). The difficulty level was manipulated by increasing the distance from the dartboard every three blocks (2 m, 2.37 m and 3.56 m) [8,36]. Participants declared no experience in dart throwing. The protocol was explained in full and informed consent was obtained before participation. All procedures were conducted according to the Declaration of Helsinki.
Procedures
Subjects performed the dart throwing task across two experimental sessions. A pre-test was followed by an acquisition phase and an immediate post-test during the first session, then delayed retention tests one and two weeks later, respectively. Test sessions were performed at the same time of day; on testing days, a 10-min standard warm-up, including running and static stretching exercises [37], was followed by three dart throws. During the test sessions, two conditions were investigated. In the first, the free condition (FC), subjects threw a trial of nine darts and were instructed to always aim for the bullseye. In the second, the time pressure condition (TPC), participants were instructed to complete the set of throws as quickly and accurately as possible. The dartboard was fixed on a wall so that its center was at eye level for each subject [37]. No instructions on technique were given to participants, and individual posture and throwing technique were kept the same across test conditions. For the non-visual feedback trials, an opaque curtain, 2 m wide, was placed in front of each participant for throws without feedback [38]. In this condition, as soon as a participant released the dart, the experimenter, who stood one meter away from the line of throw, raised the opaque curtain to occlude the view of the impact of the dart and prevent knowledge of the result [31]. The pre-test consisted of nine trials. During the acquisition phase, participants were asked to complete a set of nine blocks of 12 trials of dart throwing. The acquisition phase was followed by an immediate post-test, which was the same as the pre-test. The delayed retention test 1, consisting of nine trials, was administered one week later [8,35]. As in the primary retention test, participants in delayed retention test 2 completed a set of nine trials two weeks post acquisition.
Task and Apparatus:
A digital camera (SONY Corporation, HDR PJ 270E, Tokyo, Japan) was installed behind and above the participant to record the position of each throw for subsequent analysis of the x (horizontal) and y (vertical) coordinates relative to the origin of the dartboard. The same posture and throwing technique were maintained across the different conditions [35].
For the progressive manipulation of difficulty level, we modified the difficulty by increasing the distance to the dartboard. Three distances were used in this experimental condition: short (2 m), regular (2.37 m) and long (3.56 m) [8,36]. The dartboard position, curtain procedure for non-visual feedback trials and throwing instructions were as described above [8,31,38].
Measures
Perceived difficulty: this scale is composed of 15 points, numbered 1-15, and is anchored at the two extremities by the verbal labels "Extremely easy" and "Extremely difficult" [39].
Trail making test
This is a test exploring mental flexibility EF (aptitude to move quickly from one task to another) [40]. In Part A, circles were numbered 1-25 and presented randomly on a sheet of paper; subjects were required to draw lines to connect the numbers in ascending order. In Part B, the circle included both numbers (1-15) and letters (A-L); as in Part A, the child was required to draw lines to connect the circles in an ascending pattern, but with the added task of alternating between the numbers and letters (i.e. 1-A-2-B-3-C, etc.). The child was also instructed to connect the circles as quickly as possible without lifting the pen or pencil from the paper. The duration of the test was 3 min for each part. The trailmaking test (TMT) measures visual conceptual and visuo-motor tracking. TMT part A purportedly measures attention, visual search, and motor function, whereas TMT part B is seen as a measure of EF, speed of attention, visual search, and motor function [41,42,43]. Outcome measures for both tasks included time to completion and number of errors [43]. Results for both TMT part A and B are reported as the number of seconds required to complete the task [44], errors committed and corrected. Higher scores reveal greater impairment.
Corsi block tapping test
The Corsi Block Tapping Test assesses short-term and working memory using a nonverbal analogue of the Digit Span procedure originally proposed by Hebb [45]. It may be seen as a spatial equivalent of the word and digit span tests that are sometimes used to measure aspects of memory. The test consists of nine uniformly small white wooden blocks, which are distributed over a rectangular board. Numbers from 1 to 9 are printed on the sides of the blocks that face the examiner. The examiner taps the blocks in a pattern that is then copied by the subject. Five sequences are tapped for each series. A correct performance is scored when the subject is able to correctly copy three of the five sequences. When a correct response is given, the next sequence is given until a series of eight blocks is given or until the subject is unable to copy three of the five sequences. For a correct response to be scored, the subject must tap only one block at a time and must tap directly on the top of the block, not to the side. The pattern of taps is repeated for every third sequence; however, the intervening taps are not repeated. Subjects tend to improve on the repeated stimuli, but not necessarily on the non-repeated sequences [46].
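A minimal sketch of the stopping rule described above follows, assuming the results are supplied as the number of correctly copied sequences (out of five) at each series length; the starting length of two blocks is an assumption, as it is not stated here.

```python
# Hedged sketch of Corsi span scoring: the span increases while the subject
# reproduces at least three of the five sequences at a series length, up to
# the maximum series of eight blocks.
def corsi_span(correct_per_level, start=2, max_len=8, criterion=3):
    """correct_per_level: number of correct sequences (0-5) per series length."""
    span = 0
    for length, n_correct in zip(range(start, max_len + 1), correct_per_level):
        if n_correct >= criterion:
            span = length       # criterion met: credit this series length
        else:
            break               # testing stops at the first failed level
    return span

# Example: 5, 4, 3 correct at lengths 2-4, then 2 correct at length 5 -> span 4.
assert corsi_span([5, 4, 3, 2]) == 4
```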
Score calculations:
A measure of outcome accuracy was evaluated based on mean radial error (MRE), defined as the absolute distance between the dart position and the center of the target. The radial error of each throw was calculated as RE = √(x² + y²), and the MRE was obtained by averaging RE over each block of trials.
We also calculated consistency based on bivariate variable error (BVE) with the following equation [47]: BVE = √[(1/k) Σᵢ₌₁ᵏ ((xᵢ − x̄)² + (yᵢ − ȳ)²)], where k is the number of throws in a block and (x̄, ȳ) is the centroid of the k landing positions.
Statistical analysis: All results are expressed as mean (± SD). As the data were normally distributed, the calculated and measured variables were analyzed using a mixed two-way analysis of variance (ANOVA) with repeated measurements: 3 Groups (RFB 33% vs. RFB 50% vs. FB 100%) × 4 Times (pre-test, post-test, delayed retention 1, and retention 2). Mean radial error (MRE) and bivariate variable error (BVE) were averaged across practice difficulty and feedback frequencies for each test session. When appropriate, Bonferroni post-hoc analysis was performed. Correlation coefficients were used to assess the relationships between variables [48]. We also calculated the effect size as partial eta squared (ηp²), where the thresholds for describing effect sizes as small, medium, and large were 0.01, 0.06, and 0.14, respectively [49]. The level of statistical significance was set, a priori, at p < 0.05.
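For illustration, a minimal Python sketch of both outcome measures follows, assuming each block is given as a list of (x, y) dart coordinates with the bullseye at the origin:

```python
# Minimal sketch of the accuracy and consistency measures defined above:
# mean radial error (MRE) and bivariate variable error (BVE) for one block.
import numpy as np

def mre(xy):
    """Mean radial error: average distance of each dart from the target centre."""
    xy = np.asarray(xy, dtype=float)
    return np.mean(np.sqrt(xy[:, 0] ** 2 + xy[:, 1] ** 2))

def bve(xy):
    """Bivariate variable error: dispersion of the darts around their centroid."""
    xy = np.asarray(xy, dtype=float)
    centroid = xy.mean(axis=0)
    return np.sqrt(np.mean(np.sum((xy - centroid) ** 2, axis=1)))

# Example block of three throws (arbitrary units).
block = [(1.0, 2.0), (-1.0, 0.5), (0.5, -1.5)]
print(mre(block), bve(block))
```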
Mean Radial Error
Outcome accuracy for FC is shown in Figure 1. Results revealed no significant effect of FB frequency (F(2; 33) = 1.321; p = 0.28; ηp² = 0.074). During the familiarization, acquisition, and retention phases, results revealed a significant main effect of motor learning on mean radial error (MRE) (F(3; 6) = 3.506; p = 0.018; ηp² = 0.096). In addition, there was no significant interaction between learning and FB frequency (F(6; 99) = 0.889; p = 0.505; ηp² = 0.051). The post-hoc analysis revealed that the MRE score was better in retention 2 than during the pre-test. Furthermore, participants with progressive difficulty and 100% FB demonstrated significantly greater accuracy improvement (i.e., decreasing MRE) across all phases (retention 2 compared to retention 1, post-test, and pre-test) (p < 0.05; Figure 1). Moreover, post-hoc analysis of the RFB 50% group also showed a linear trend of MRE improvement in retention 2, but only when compared to the post-test and pre-test (p < 0.05; Figure 1).
Figure 1.
Accuracy as measured through mean radial error under free condition in the dart throwing task. RFB 33%: visual FB reduced to one FB every three trials; RFB 50%: visual FB reduced to one FB every two trials; FB 100%: visual FB received after every trial. c: Significant difference in FB 100% compared to Pre-test at p < 0.05; d: Significant difference in RFB 50% compared to Pre-test at p < 0.05; e: Significant difference in RFB 50% compared to Post-test at p < 0.05.
Outcome accuracy for TPC is shown in Figure 2. Results revealed a significant main effect of learning on accuracy in the time pressure condition during the familiarization, acquisition, and retention phases (F(3; 6) = 14.96; p = 0.001; ηp² = 0.312). The post-hoc analysis revealed that the MRE score for TPC was better in retention 2 than in the pre-test, post-test, and retention 1 (p < 0.001; Figure 2). Moreover, the analysis showed no significant effect of FB frequency (F(2; 33) = 0.344; p = 0.711; ηp² = 0.02) and no significant interaction between learning and FB frequency (F(6; 99) = 0.364; p = 0.9; ηp² = 0.021). Further analysis showed that accuracy under TPC in the 33% and 50% conditions was better than in the 100% condition. Motor learning based on progressive difficulty combined with reduced FB in both the 33% RFB and 50% RFB conditions displayed significantly greater accuracy improvement (i.e., decreasing MRE) across all phases (retention 2, retention 1, and post-test compared to pre-test) (p < 0.001; Figure 2).
Figure 2.
Accuracy as measured through mean radial error under time pressure condition in the dart throwing task. RFB 33%: visual feedback reduced to one feedback every three trials (36 FB trials out of 108 practice trials); RFB 50%: visual feedback reduced to one feedback every two trials (54 FB trials out of 108 practice trials); FB 100%: visual feedback received after every trial. a: Significant difference in the RFB 33% group compared to Pre-test at p < 0.05; b: Significant difference in the RFB 50% group compared to Pre-test at p < 0.05; c: Significant difference in the FB 100% group compared to Pre-test at p < 0.05.
Under the time pressure condition, ANOVA revealed no significant effect of FB frequency on consistency (BVE) (F(2; 33) = 0.002; p = 0.997; ηp² = 0.000) and a significant effect of learning (F(3; 6) = 4.743; p = 0.003; ηp² = 0.125). The post-hoc analysis revealed that the BVE score under TPC was better in retention 2 than in the pre-test (p < 0.01; Figure 3). There was no significant interaction (learning × FB frequency) (F(6; 99) = 0.061; p = 0.999; ηp² = 0.003). Further analysis showed that the improvement in BVE under TPC was comparable across all groups (33% RFB, 50% RFB, and 100% FB) when comparing retention 2 to the pre-test (p < 0.05; Figure 3).
Figure 3.
Consistency as measured through bivariate variable error under time pressure condition in the dart throwing task. RFB 33%: visual FB reduced to one FB every three trials; RFB 50%: visual FB reduced to one FB every two trials; FB 100%: visual FB received after every trial. a: Significant difference in RFB 33% compared to Pre-test at p < 0.05; b: Significant difference in RFB 50% compared to Pre-test at p < 0.05; c: Significant difference in the FB 100% group compared to Pre-test at p < 0.05.
Table 1 presents the correlations for the studied variables at the 33% RFB frequency. There was a significant positive correlation between average time in TMT part B and both average time in TMT part A and committed errors in TMT part B (r = 0.82, p < 0.01; r = 0.78, p < 0.01, respectively). Results revealed a significant negative correlation between executive function (average time in TMT part B) and working memory (Corsi Forward) (r = -0.66; p < 0.05). In addition, a significant negative correlation was found between Corsi Backward and both average time in TMT part B and committed errors in TMT part B (r = -0.57, p < 0.05; r = -0.58, p < 0.05, respectively) (Table 1). There was a significant positive correlation between Corsi Backward and Corsi Forward (r = 0.72, p < 0.01). Moreover, PD was positively correlated with executive function (average time in TMT part B and committed errors in TMT part B) (r = 0.67, p < 0.05; r = 0.65, p < 0.05, respectively) and negatively correlated with working memory variables measured in free condition tasks (r = -0.66, p < 0.05). Under the time pressure condition, perceived difficulty was correlated only with perceived difficulty in the free condition (r = 0.72, p < 0.01) (Table 1). Finally, there was no significant correlation between variables in the fine coordination task (throwing task) and the cognitive and perceived measured variables. A strong positive correlation was found between accuracy (MRE) and consistency (BVE) measures only under the same condition (free and time pressure condition throws) (r = 0.94, p < 0.001; r = 0.7, p < 0.01, respectively) (Table 1).
Table 2 presents the correlations for the studied variables at the 50% reduced FB frequency. Compared to the results at the 33% reduced FB frequency, the 50% reduced FB condition presented fewer significant correlations. First, there was no significant correlation between cognitive or working memory variables and measured performance in the fine coordination task. However, perceived difficulty under both free and time pressure conditions was positively correlated with executive function (average time in TMT part B) (r = .89, p < 0.001; r = .63, p < 0.05, respectively) (Table 2). Finally, a significant positive correlation was found between consistency under the free condition (BVE FC) and accuracy (MRE FC) under the same condition (r = .84, p < 0.01) (Table 2).
As shown in Table 3, processing times in TMT part B and TMT part A (average times) were positively related (r = 0.71, p < 0.01). Concerning the working memory test, the less complex (Corsi Forward) and more complex (Corsi Backward) patterns of taps were positively correlated with executive function (TMT part B corrected errors) (r = 0.74, p < 0.01; r = 0.65, p < 0.05, respectively) (Table 3). Furthermore, a significant negative correlation was found between Corsi Backward and accuracy measures for the difficult throw condition (MRE TPC) (r = -0.63, p < 0.05). Consistency performance in free condition throws (BVE FC) was negatively correlated with PD under both free and time pressure conditions (r = -0.8, p < 0.01; r = 0.76, p < 0.01, respectively) (Table 3). In addition, accuracy (MRE) and consistency (BVE) measures under the same condition (FC or TPC) were positively correlated (r = 0.63, p < 0.05; r = 0.62, p < 0.05, respectively); however, there was a significant negative correlation between the same variables under different throw conditions (free condition vs. time pressure condition) (r = 0.62, p < 0.05) (Table 3). Finally, performance measures under time pressure for both accuracy and consistency (MRE TPC and BVE TPC) were negatively related to TMT part A committed errors (r = -0.65, p < 0.05; r = -0.76, p < 0.01, respectively).
Discussion
The aim of the present study was to (i) examine whether strategies (i.e., gradual manipulation of difficulty level combined with reduced FB frequencies) used in learning a novel fine motor coordination task (dart throwing) affect performance, and (ii) explore the relationships between accuracy and consistency variables in the dart throwing task, executive function, storage in working memory, and PD among 11-12-year-old boys.
In this study, the strategy based on progressive difficulty manipulation combined with reduced FB frequency benefited accuracy performance under FC for both the 100% FB and the 50% FB groups, compared to the 33% FB group. In addition, the 100% FB group showed better accuracy performance in the post-test and the delayed retention test 1 compared to the 50% FB group, whereas in the 50% reduced FB group improvement occurred only at retention 2 compared to the pre-test and post-test. The learning performance of the 50% reduced FB group also seems more durable compared to that of the 100% FB group. While some studies support the guidance hypothesis that augmented FB leads to better retention performance [50,51], others offer conflicting evidence, suggesting that a reduced frequency of FB may benefit retention [15,52]. The findings of the current study are in line with those supporting the guidance hypothesis, showing that more FB led to better retention performance [30]. Moreover, it has been demonstrated that practice may be less effective for children in reduced FB conditions compared to those who practiced with 100% feedback [14]. We suggest that the progressive level of difficulty combined with visual FB is a plausible explanation for why more feedback led to better performance in the retention test. Moreover, previous studies have suggested that there may be an interaction between task complexity and feedback frequency [26,52]. Our current findings are concordant with those of Sidaway et al. [53], who revealed that, in children, the effect of FB may be mediated by the difficulty of the motor skill being learned.
Furthermore, we have shown that reduced FB frequency benefits accuracy under TPC for both the 33% FB and the 50% FB groups, compared to the 100% FB group. It has been suggested that a possible interaction exists between task complexity and FB frequency [26,52]. Fujii et al. [30] reported that a reaching task was considered relatively complex; however, task complexity was not controlled as a study variable. To our knowledge, only a limited number of studies have employed augmented FB while considering the difficulty of the learned task. However, in a recent study, Elghoul et al. [8] demonstrated that progressive difficulty manipulation leads to durable throwing accuracy and consistency performance. In addition, previous research suggests that adding difficulty to the instructional process can increase learning [54,55]. It is worth noting that studies involving reduced feedback frequencies tend to focus more on the nature of the task (simple vs. complex) than on the association between the percentage of reduced FB frequency and task difficulty (TD), which could conceivably provide a more appropriate, challenging environment and a more manageable cognitive load.
In the present study, a change in MRE performance under the time pressure condition across the learning phases was reported. The 100% FB group demonstrated a decrease in BVE and MRE at post-test, retention 1, and retention 2, but the changes did not reach significance (Figures 2 and 3). In contrast, for the 50% and 33% reduced FB groups, the time pressure condition appeared to enhance learning. The combination of reduced FB frequencies and progressive difficulty manipulation may positively influence accuracy performance in the retention tests. Regarding performance under the time pressure condition, MRE and BVE (accuracy and consistency) displayed significantly improved linear performance when comparing retention 2, retention 1, and the post-test to the pre-test. Previous work has demonstrated that reduced feedback practice conditions may increase information-processing demands during practice, which are advantageous to the relatively permanent motor learning effects observed in delayed retention tests [14,56]. In addition, it has been posited that visual sighting of the path of a projected object may not be a critical factor in skill learning [57]. Moreover, reducing knowledge of results (KR) would allow participants to focus more intently on movement-produced FB and on strategies for error detection [56]. It seems that adding a progressive level of difficulty to the reduced FB conditions during the learning process can limit performance deterioration in the acquisition phase, thus potentially explaining the enhanced accuracy of the reduced FB groups in the post-test. Adding a progressive level of difficulty to the learning process also allows children to take advantage of the additional FB, limiting the effect of excess information on information-processing capacity and reducing dependence on this additional external FB.
Concerning the relationships between dart throwing performance under combined progressive difficulty and reduced FB frequency and executive function, working memory, and PD, the findings suggest that 50% reduced FB combined with a progressive TD strategy may benefit learning processes by decreasing the cognitive demands imposed on learners and could provide a more controlled learning-efficiency trade-off under various experimental conditions. The results of the current study show that the throwing conditions in the full visual feedback group (100%) and the reduced FB frequency group (33%) could add cognitive load to participants' cognitive processes. Of particular interest are the correlations found between measured variables. These results were supported, in the case of the visual feedback group, by the significant positive correlation between measured cognitive variables (Time TMT part A and Time TMT part B) and between EF (CRE TMT part B) and working memory (Corsi Forward and Backward). Moreover, a significant correlation was found between measured throwing performances (both MRE and BVE under TPC) and EF (CME part A), and between working memory (Corsi Backward) and throwing performance (MRE under TPC). These results are supported by a previous study [26] showing that frequent visual feedback typically promotes strong feedback dependency.
In the case of the reduced feedback frequency group (33%), the hypothesis of an additional cognitive load was also supported by the observed correlations between the learner's cognitive components. There was a positive correlation between measured cognitive variables (Time TMT part A and Time TMT part B, and Time TMT part B and CME TMT part B). In addition, significant correlations were found between EF (Time TMT part B and CME TMT part B) and working memory (Corsi Forward). Regarding working memory (Corsi Backward), there was a significant negative correlation with EF (Time TMT and CME TMT part B). Based on the correlations found between PD under the free condition (PD FC) and both EF (Time TMT and CME TMT part B) and working memory (Corsi Backward), it is likely that deterioration in cognitive performance may be associated with an increase in PD level. One explanation may be the effect on learning processes of reducing feedback frequency, which increases the demands imposed on the learner and requires the learner to develop their own internal error detection and correction mechanisms [26].
Learning via reduced FB (FB 50%) might provide a more appropriate environment with a manageable cognitive load, particularly given the subject population (novice young learners). However, reduced feedback practice conditions can increase information-processing demands during practice [14], and recent findings suggest that only consistent performance was accompanied by a significant change in PD scores when learning a novel psychomotor task [8].
A prior study reported that the implicit process may be independent of working memory [21]. This was supported in the current study by the lack of relationship between measured cognitive variables in the 50% reduced FB condition, compared with both the 33% reduced FB and 100% FB conditions. Studies using a reduced feedback strategy have suggested that skills may be acquired through an implicit process that is independent of working memory [58,59]. Previous studies have argued that implicit motor learning takes place when working memory's involvement in motor learning is suppressed [58]. Furthermore, a skill learned implicitly, in comparison with explicit practice learning, tends to be much less exposed to failure and loss under stress [60,61]. In this regard, implicit motor learning encourages implicit processes to support performance by avoiding the serial transition of explicit declarative knowledge from the onset of learning [28].
A contradiction remains regarding the benefits of reduced feedback frequency compared with augmented feedback for the learning process [26]. Guadagnoli et al. [25] previously highlighted the interaction between task complexity and task-related experience. Our results are consistent with the predictions of the Challenge Point Framework [10], which suggests that task demands, learner characteristics, and practice conditions interact to influence the level of challenge posed to the learner during practice. Based on the Challenge Point Framework, there is a point of optimal challenge that allows a learner-appropriate level of cognitive effort, maximizing benefits for learning. When the level of challenge exceeds the optimal challenge point, the required effort may exceed the information-processing capability of the learner, interfering with learning benefits. The results of the current study suggest that, when learning a novel throwing task, children can benefit more from combined TD manipulation and reduced feedback frequency. These considerations allow the individual a more appropriate level of cognitive effort and optimal use of information-processing capability, thereby maximizing benefits for learning.
In the present study, it is important to point out that the interaction between TD and feedback strategies should be considered, even when learning a novel fine motor coordination task. Despite its novelty, the present study suffers from some limitations that should be acknowledged. First, the small sample size (n = 36 participants) could hinder the generalizability of these results (e.g., to different ability levels combined with TD and visual or reduced FB strategies). Second, we included only novice learners in this study; relationships between learner and task characteristics in specific fields should be assessed to capture the finer intricacies of improved performance. Third, further research evaluating the persistence of practice strategies over time and under psychologically stressful conditions is warranted. Finally, these results pertain only to young novice learners; future studies assessing the relationship between cognitive variables and performance when learning a psychomotor task in competitive and elite athletes are needed.
Conclusions
While the FB interventions were not confirmed as a significant factor in practice, the association between TD and visual feedback was more beneficial than non-visual feedback in the acquisition of the target task. Reduced FB frequency (both the 33% and 50% strategy groups) was more efficient and may benefit retention performance when learning a novel psychomotor task. Moreover, regarding the relationship between cognitive variables and performance in learning a novel psychomotor task, the association between the percentage of reduced FB frequency and task difficulty (50% FB) could provide a more appropriate, challenging environment and a manageable cognitive load. Consequently, physical education teachers and practitioners are encouraged to consider TD, FB practices, and learners' idiosyncratic characteristics to facilitate a more effective learning process for novel fine motor coordination tasks. | 2021-08-07T09:11:49.159Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "4453ac55eb6f70ea0df488f559fb75484185a302",
"oa_license": "CCBY",
"oa_url": "https://www.preprints.org/manuscript/202108.0027/v1/download",
"oa_status": "GREEN",
"pdf_src": "Anansi",
"pdf_hash": "4453ac55eb6f70ea0df488f559fb75484185a302",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": []
} |
259509179 | pes2o/s2orc | v3-fos-license | From Crashing Waves to Crashing Sodium: A Rare Case of Nearly Asymptomatic Severe Hyponatremia
Hyponatremia refers to an abnormally low serum sodium level, and it is the most common electrolyte disorder encountered in the clinical setting. Despite its prevalence, hyponatremia can be challenging to clinically identify in some cases due to non-specific symptom presentation. In this case report, we illustrate the rare clinical course of a nearly asymptomatic patient with severe hyponatremia and discuss potential explanations for this uncommon presentation.
Introduction
Hyponatremia is characterized by a serum sodium level under 135 mmol/L and is the most common electrolyte imbalance encountered in the hospital setting [1]. Abrupt changes in volume status, hormones, and medications (e.g., diuretics, antidepressants, and antiepileptics) can contribute to the development of hyponatremia [2]. Depending on the pathophysiologic severity of the sodium imbalance, it can cause a spectrum of neurological, musculoskeletal, and gastrointestinal symptoms that may progress to life-threatening complications [3].
Severe hyponatremia (serum sodium less than 125 mmol/L) has been associated with increased length of hospitalization and a significantly higher mortality risk [4]. A large retrospective study conducted by Chawla et al. reported an overall mortality rate of 6.1% in hospitalized patients with hyponatremia, notably higher than the overall mortality rate of 2.3% among patients with a normal serum sodium level. Thus, early diagnosis and appropriate management of hyponatremia are crucial to prevent complications and improve clinical outcomes.
Case Presentation
A 47-year-old male patient with a past medical history of type 2 diabetes mellitus, hypertension, and alcohol use disorder presented to the emergency department (ED) for worsening headache, blurry vision, nausea, and vomiting following a fall four days prior to arrival. The patient had been knocked over by aggressive waves at the beach and experienced head trauma with a brief loss of consciousness.
On physical examination, the patient was in no acute distress with a Glasgow Coma Scale score of 15. He had a temperature of 97.3°F, blood pressure of 149/89 mmHg, heart rate of 86 beats/min, respiratory rate of 18 breaths/min, and oxygen saturation of 96% on room air. His exam was remarkable for mild generalized musculoskeletal tenderness but was otherwise normal including normal mental status and neurological exam.
The patient's ED evaluation incidentally identified severe hypoosmolar hyponatremia with a serum sodium of 99 mmol/L, serum potassium of 2.2 mmol/L, serum magnesium of 1.1 mmol/L, lactate of 3.7 mmol/L, a high anion gap of 14, glucose of 212 mg/dL, serum osmolality of 214 mOsm/kg, and urine osmolality of 562 mOsm/kg, with adequate urine output and moist mucous membranes. Venous blood gas analysis revealed a pH of 7.62 and pCO2 of 35, concerning for respiratory alkalosis masking an underlying metabolic acidosis.
CT head without contrast did not show any evidence of intracranial hemorrhage, epidural hematoma, or skull fracture.
The nephrology team was consulted, and the patient was admitted to the intensive care unit (ICU). Intravenous (IV) 3% hypertonic saline and 2 mcg of desmopressin twice daily were administered to gradually correct the serum sodium by 6-8 mmol/L daily. Desmopressin was utilized to prevent a large-volume diuresis and was withheld when appropriate to achieve this target correction rate. The patient was also given potassium chloride and magnesium sulfate intravenously until resolution of his concurrent hypokalemia and hypomagnesemia, respectively. He was placed on strict fluid restriction, and his electrolytes were monitored every four hours. Evaluation by medical toxicology identified early signs of alcohol withdrawal, which was treated with IV phenobarbital. The patient reported heavy alcohol use for several months in addition to ingesting approximately 320 ounces of water daily and restricting food intake. Three months prior, the patient had also begun taking 50 mg of chlorthalidone daily.
Over the next 18 hours, the patient's serum sodium increased from 99 mmol/L to 104 mmol/L. Despite still being severely hyponatremic, his mental status remained normal. The 3% hypertonic saline and desmopressin were continued for a total of five days for gradual sodium correction until serum sodium was 133 mmol/L, nearly normal.
The patient was in good condition and electrolytes were normal at discharge on the eighth day at the hospital. Chlorthalidone was discontinued from his medication regimen, and he was scheduled for a close outpatient follow-up to reduce the risk of re-admission.
Discussion
Identifying the clinical signs of mild or moderate hyponatremia can be challenging due to less pronounced symptoms and overlap with other clinical presentations. However, severe hyponatremia typically presents with life-threatening symptoms including seizures, respiratory arrest, and coma [5]. In this unusual case of a patient with a Glasgow Coma Scale of 15 and no cognitive or neurological deficits, serum sodium of less than 100 mmol/L is unexpected. The patient's presenting symptoms of headache, nausea, and vomiting are non-specific and could be attributed to many causes.
Given the severity of the hyponatremia, it is imperative to avoid rapid overcorrection (no more than 10 mmol/L in 24 hours) due to the risk of osmotic demyelination syndrome (ODS) [6]. Signs concerning for ODS typically develop over several days following sodium overcorrection and include quadriparesis, dysphagia, dysarthria, seizures, altered mental status, and coma [7][8][9]. One multicenter observational study followed 36 patients with ODS over one year and reported that 31% died within one year and another 31% required life-supporting therapies [10].
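As an illustration of the overcorrection limit just described, the following is a minimal sketch (a hypothetical helper, not part of this case's management) that flags any rolling 24-hour window in which serial serum sodium measurements rise by more than 10 mmol/L:

```python
# Hedged sketch: flag rolling 24-hour windows exceeding the correction limit.
from datetime import datetime, timedelta

def overcorrection_windows(samples, limit=10.0, window_h=24):
    """samples: list of (datetime, sodium mmol/L) sorted by time."""
    flags = []
    for i, (t0, na0) in enumerate(samples):
        for t1, na1 in samples[i + 1:]:
            # Only compare measurements that fall within the same window.
            if t1 - t0 <= timedelta(hours=window_h) and na1 - na0 > limit:
                flags.append((t0, t1, na1 - na0))
    return flags

# Example: 99 -> 104 mmol/L over 18 hours (as in this case) stays within limits.
obs = [(datetime(2023, 7, 1, 6), 99.0), (datetime(2023, 7, 2, 0), 104.0)]
assert overcorrection_windows(obs) == []
```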
Regarding this case of severe hypoosmolar euvolemic hyponatremia, the suspected etiology was a combination of an underlying syndrome of inappropriate antidiuretic hormone secretion (SIADH), likely secondary to chlorthalidone use, psychogenic polydipsia, and decreased solute intake from excessive alcohol use. Thiazide diuretics such as chlorthalidone are known to induce hyponatremia by enhancing antidiuretic hormone secretion, increasing urinary sodium excretion, and increasing water intake [11,12].
The treatment plan was tailored to gradually correct the serum sodium by 6-8 mmol/L in the first 24 hours using 3% hypertonic saline along with desmopressin to prevent water losses [13]. Given this patient's risk of large-volume water diuresis, desmopressin was utilized proactively to minimize free water excretion and limit autocorrection of serum sodium. It is key to restrict fluids and monitor urine output while the patient is on desmopressin; otherwise, unrestricted fluid intake can induce further hyponatremia [14].
Conclusions
This case report highlights the importance of evaluating the serum sodium level even in the case of nonspecific symptoms, as failure to do so can lead to devastating consequences. In addition to ordering a comprehensive lab work-up, conducting a thorough history and physical exam are essential in understanding the underlying etiologies of hyponatremia. Due to the variable presentation of hyponatremia, clinicians must have a high index of suspicion to facilitate prompt diagnosis and effective management. Gradual correction of serum sodium and fluid restriction are vital for reducing the risk of ODS.
Additional Information Disclosures
Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work. | 2023-07-11T15:34:14.639Z | 2023-07-01T00:00:00.000 | {
"year": 2023,
"sha1": "3640796cfb5a726671c49014f6d09a664f949a7d",
"oa_license": "CCBY",
"oa_url": "https://assets.cureus.com/uploads/case_report/pdf/158146/20230706-18546-1fj8yii.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "59c664d848bdd30af825e360e8c56e80dfe55fe9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
229481388 | pes2o/s2orc | v3-fos-license | Vitamin D Deficiency in End Stage Renal Disease Patients with Diabetes Mellitus Undergoing Hemodialysis
Objectives: To assess the association of hypovitaminosis D with diabetes mellitus (DM) in patients with end-stage renal disease (ESRD) undergoing hemodialysis. Methodology: This cross-sectional study was conducted at the Jinnah Postgraduate Medical Centre between July 2019 and February 2020. Patients with diagnosed ESRD who were on hemodialysis, with or without concomitant DM, were registered. Vitamin D levels were categorized according to the severity of the deficiency or excess as 0-10 ng/mL, severely deficient; 11-20 ng/mL, deficient; 21-32 ng/mL, insufficient; 33-49 ng/mL, adequate; 50-65 ng/mL, optimum; and above that as high. Patients were stratified according to DM status. Results: In a total of 80 patients, the mean age was 45.21 ± 12.67 years, with 51 (63.75%) males and 29 (36.25%) females. A total of 36 (45%) CKD patients had concomitant diabetes. The median vitamin D level was 20.25 ng/mL. Chronic kidney disease (CKD) patients with concomitant DM had significantly lower levels of vitamin D [15.19 ± 6.83 vs. 30.28 ± 14.22 (p < 0.001)]. Of the 12 patients with a severe deficiency, three-fourths had DM as a comorbidity, while among those with 'deficiency', 19 (67.9%) had DM. The majority of the patients without DM had adequate or optimum serum 25-hydroxyvitamin D levels. Conclusion: The current study indicated that serum vitamin D deficiency is associated with concomitant DM in patients with CKD, as the majority had a severe deficiency of serum 25(OH)D. Supplemental vitamin D may help correct the deficiency and prevent the associated complications in patients.
Introduction
Vitamin D is a fat-soluble vitamin that is produced endogenously in the skin upon exposure to ultraviolet rays from the sun, which trigger vitamin D synthesis in the body. For most individuals, around 90% of vitamin D is produced in this way, whereas the remaining 10% is obtained from food and dietary supplements [1][2].
Previously, this vitamin was thought to play a role only in the regulation of calcium and phosphate in the body [2]. However, more recently, low levels of vitamin D have also been linked with many other conditions, including bone diseases, cardiovascular diseases, and many psychiatric ailments [3][4][5].
Provitamin D from the skin or diet, after hydroxylation in the liver, forms 25-hydroxyvitamin D [25(OH)D], which is converted into its activated form (1,25-dihydroxyvitamin D) in the kidneys [6]. In chronic kidney disease (CKD) patients, this final step of active vitamin D production is impaired. Moreover, the osteocyte-derived hormone fibroblast growth factor 23 (FGF-23) increases to compensate for phosphate retention and further inhibits renal 1α-hydroxylase expression while inducing the expression of 24-hydroxylase, which is responsible for the degradation of 1,25-dihydroxyvitamin D3. However, 24,25-dihydroxyvitamin D3 levels are lower in dialysis patients than in the normal population. Thus, impaired uptake of 25-hydroxyvitamin D3 by the altered kidneys remains the main cause of 1,25-dihydroxyvitamin D3 deficiency [7].
Another clinically significant consideration is the substrate deficit, i.e., decreased 25(OH)D. Patients with CKD or those on hemodialysis are believed to have reduced cutaneous synthesis compared with normal individuals, as well as increased melanin pigmentation, even when sunlight exposure is identical. This might be the reason for the low levels of 25(OH)D. Other reasons may be inactivity (low exposure) or an inadequate calcium-containing diet. Interestingly, unlike normal individuals, these patients show a direct relationship between 1,25-dihydroxyvitamin D and 25(OH)D, the exact cause of which is unknown [8,9].
Evidence has shown that increasing age, female gender, proteinuria, physical inactivity, diabetes mellitus (DM), and nutritional deficiencies are correlated with hypovitaminosis D in patients with CKD [5,6]. Nephropathy is a serious complication that develops in patients with DM. Diabetic kidney disease affects about one-third of patients with DM and currently ranks as the foremost cause of end-stage renal disease (ESRD) [10]. Low levels of vitamin D have been reported in CKD patients with concomitant DM undergoing hemodialysis therapy [5]. Furthermore, Drechsler et al. reported that deficiency of vitamin D in patients on hemodialysis leads to adverse outcomes and is associated with high mortality among these patients [6].
Despite a high prevalence of both vitamin D deficiency and DM, especially in Pakistan, local data have been very limited [11,12]. Hence, the current study was undertaken to fill this gap in the local literature. The current study aimed to determine the association between vitamin D deficiency and DM in CKD patients undergoing hemodialysis, compared to non-diabetic patients, in our setting.
Materials And Methods
This cross-sectional observational study was conducted at the Nephrology Department, Jinnah Postgraduate Medical Centre, Karachi, Pakistan between July 2019 and February 2020. Ethical approval was procured from its Institutional Review Board. A non-probability convenience sampling technique was applied and 80 patients with diagnosed ESRD who were on hemodialysis, with or without concomitant DM were registered in the specified duration. All patients were included in the study after informed verbal and written consent.
According to the World Health Organization, vitamin D deficiency is defined as a serum 25(OH)D level of less than 20 ng/mL [13]. For this study, vitamin D levels were categorized according to the severity of the deficiency or excess among the participants. They were grouped into seven classes: severely deficient, 0-10 ng/mL; deficient, 11-20 ng/mL; insufficient, 21-32 ng/mL; adequate, 33-49 ng/mL; optimum, 50-60 ng/mL; toxic, 60-70 ng/mL; and above that, potentially toxic [14].
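For illustration, the categorization above maps directly to a small helper function; a minimal sketch follows (the handling of boundary values, e.g., exactly 60 ng/mL, is an assumption, since the listed ranges meet at that point):

```python
# Hedged sketch of the seven-class vitamin D categorization used in this study
# (thresholds in ng/mL as listed above; boundary handling is assumed inclusive
# on the upper end of each range).
def vitamin_d_category(level_ng_ml: float) -> str:
    if level_ng_ml <= 10:
        return "severely deficient"
    elif level_ng_ml <= 20:
        return "deficient"
    elif level_ng_ml <= 32:
        return "insufficient"
    elif level_ng_ml <= 49:
        return "adequate"
    elif level_ng_ml <= 60:
        return "optimum"
    elif level_ng_ml <= 70:
        return "toxic"
    else:
        return "potentially toxic"

# Example: the study's median level of 20.25 ng/mL falls just above 'deficient'.
assert vitamin_d_category(20.25) == "insufficient"
```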
For blood samples, an experienced nurse with more than three years of experience was appointed. A 5 mL blood sample was collected from each patient under aseptic conditions using a tourniquet. Patients' vitals were monitored regularly. Serum samples were collected for hemoglobin (Hb) levels, serum vitamin D, intact parathyroid hormone (iPTH), urea and creatinine, hemoglobin A1c (HbA1c), ferritin, transferrin, random blood sugar (RBS), and fasting blood sugar (FBS).
Results
In a total of 80 participants, the mean age was 45.21 ± 12.67 years. In our study, the adequacy of hemodialysis was also assessed by urea reduction ratio (URR) and single-pool Kt/V: in the diabetic group, the mean URR was 64.93 ± 4.002, while in the non-diabetic group it was 64.67 ± 3.996 (p = 0.753).
Single-pool Kt/V (spKt/V) was 1.1489 ± 0.132 in the diabetic group and 1.1429 ± 0.117 in the non-diabetic group (p = 0.816).
The mean duration of hemodialysis was 5.212 ± 1.004 years overall; in the diabetic group it was 4.422 ± 0.811 years and in the non-diabetic group 4.326 ± 0.875 years, a significant difference (p = 0.045).
The majority of patients without diabetes mellitus had adequate or optimum serum 25-hydroxyvitamin D levels (Figure 1).
Discussion
The degree of association between vitamin D, serum mineral regulation, and the physiological function of the renal system necessitates a vast body of research that may provide relevant clinical insight. In this study, the mean values for Hb, iPTH, and vitamin D were decreased, increased, and normal, respectively, relative to standard ranges. These findings demonstrate that overall vitamin D deficiency was not particularly prevalent in this population of ESRD patients; although these levels did not reflect a deficiency, they were still relatively low within the normal range.
Even though vitamin D deficiency was not demonstrated based on mean serum levels within the population, this study found a statistically significant difference in serum vitamin D levels between diabetic and non-diabetic ESRD patients. As stated in the results, among the patients with the greatest degree of vitamin D deficiency, the vast majority had DM as an associated comorbidity. Other studies have demonstrated a linear, progressive relationship between diabetic nephropathy and vitamin D deficiency [15]. Sacerdote et al. performed a focused review and found an inverse relationship between vitamin D levels and type 2 diabetes, as well as other insulin-related disorders [16].
The causal mechanism involved in the onset and progression of diabetes in association with vitamin D likely concerns the vitamin's role in the immune system and insulin secretion. For instance, vitamin D serves an important immunomodulatory function, and its receptor (VDR) is found on both T and B lymphocytes. Modification of the T helper cell cytokine profile can induce inhibition of effector T cells, such as those involved in the autoimmune reactions that may lead to type 1 DM; vitamin D deficiency also inhibits pancreatic insulin secretion from the beta islet cells, contributing to complications observed in type 2 diabetes [17]. The clinical implications of these findings may be especially significant given local studies that found statistically significant associations between vitamin-deficient patients and elevated HbA1c levels, blood glucose, and poor glycemic control [18]. Mahmood et al. concluded that measures should be taken to avoid unfavorable clinical outcomes in DM patients through either vitamin supplementation or increased exposure to sunlight [19].
Despite efforts to generalize the findings, this study had several limitations, such as a limited sample size and a primarily older, male, and anemic demographic. The clinical profile and demographics of the patients provide preconditions that are inextricably linked to the propensity for hypovitaminosis. It is well known that the process of aging affects the formation of vitamin D; in fact, an approximately 50% reduction in production is observed due to the decline in renal function that largely accompanies advancing age. The development of a deficiency leads to a further reduction in the formation of the metabolite, initiating a kind of positive feedback loop. In addition, gender has been found to play a significant role in vitamin D status: in a study evaluating patients undergoing coronary angiography, females had higher rates of renal failure and were associated with lower vitamin D levels by a large statistical margin. Furthermore, recent studies have found that vitamin D deficiency tends to coincide with anemia in both healthy and diseased populations and may even be involved in the causal mechanism of its development. Given its involvement in erythropoiesis and the suppression of hepcidin, there is evidence that sufficient supplementation of the vitamin may be an effective preventive measure against the development of anemia [20][21][22].
There was a significant relationship between the duration of hemodialysis and vitamin D levels in diabetic patients on hemodialysis (p = 0.045), whereas Bansal et al. (2012), from India, found only a weak correlation between duration and vitamin D [23]. El-Arbagy et al., from Egypt, recently stated that there is no correlation between duration and vitamin D level [24].
Conclusions
The current study indicated that serum vitamin D deficiency is associated with concomitant DM in patients with ESRD, as the majority had a severe deficiency of serum 25(OH)D. Supplemental vitamin D may help correct the deficiency and prevent the associated complications in patients.
Additional Information Disclosures
Human subjects: Consent was obtained by all participants in this study. Jinnah Postgraduate Medical Centre issued approval N0.F.2-81-IRB/2020-GEN/42514/JPMC. With reference to your application/letter dated 20th February, 2020, on the subject noted above and to say that institutional review board has allowed to retrieve data. Animal subjects: All authors have confirmed that this study did not involve animal subjects or tissue. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work. | 2020-11-26T09:07:10.072Z | 2020-11-01T00:00:00.000 | {
"year": 2020,
"sha1": "2e0540ff2f746b219d0cbe7ef17f44fe79e401d8",
"oa_license": "CCBY",
"oa_url": "https://www.cureus.com/articles/45370-vitamin-d-deficiency-in-end-stage-renal-disease-patients-with-diabetes-mellitus-undergoing-hemodialysis.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5cb2c66dabf45a731d306f65e024c6fe1fd500e9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
248344860 | pes2o/s2orc | v3-fos-license | Determining Carina and Clavicular Distance-Dependent Positioning of Endotracheal Tube in Critically Ill Patients: An Artificial Intelligence-Based Approach
Simple Summary Endotracheal intubation (ETI) is employed to maintain airway patency in critically ill patients during mechanical ventilation. Generally, ETI is conducted under anesthesia in the intensive care unit, during which the endotracheal tube (ETT) is inserted at a particular depth into the trachea, and a malpositioned ETT may result in hazardous consequences, such as a collapsed or hyperinflated lung. Therefore, we proposed a deep learning-based CNN approach, combined with four key point annotations on chest radiographs (tracheal tube end, carina, and left/right clavicular heads), which demonstrated significant sensitivity, specificity, and accuracy for recognizing and localizing the ETT tip on chest radiographs. These findings may assist in radiographic confirmation of precise ETT placement and help in ruling out other etiologies of respiratory failure. Abstract Early and accurate prediction of endotracheal tube (ETT) location is pivotal for critically ill patients. Automatic and timely detection of faulty ETT locations on chest X-ray images may avert patient morbidity and mortality. Therefore, we designed convolutional neural network (CNN)-based algorithms to evaluate ETT position appropriateness relative to four detected key points, including tracheal tube end, carina, and left/right clavicular heads, on chest radiographs. We estimated the distances from the tube end to the tracheal carina and to the midpoint of the clavicular heads. A DenseNet121 encoder transformed images into embedding features, and a CNN-based decoder generated the probability distributions. Based on four sets of tube-to-carina distance-dependent parameters (i.e., (i) 30–70 mm, (ii) 30–60 mm, (iii) 20–60 mm, and (iv) 20–55 mm), corresponding models were generated, and their accuracy was evaluated through the predicted L1 distance to ground-truth coordinates. Based on tube-to-carina and tube-to-clavicle distances, the highest sensitivity and specificity, 92.85% and 84.62% respectively, were obtained for the 20–55 mm range. This implies that a tube-to-carina distance between 20 and 55 mm is optimal for an AI-based key point appropriateness detection system and is empirically comparable to physicians' consensus.
Introduction
Endotracheal tube (ETT) positioning in critically ill patients is highly crucial when managing airway protection or mechanical ventilation. For such patients, endotracheal intubation (ETI) is generally conducted under anesthesia in the intensive care unit (ICU), and a malpositioned ETT may result in hazardous outcomes, such as a collapsed or hyperinflated lung [1]. Reportedly, the incidence of ETT malposition and associated complications ranges from 0.5 to 7% [2]. Apart from absolute indicators such as upper airway obstruction, the decision and timing of intubation may vary for specific patients. Hence, a clinician needs to balance the risks of emergency intubation against those of delaying intubation in order to mitigate and modify the airway management plan at the bedside.
Radiographic evaluation of ETT positioning is most commonly carried out, particularly among ICU patients, on a reliable chest X-ray (CXR), which is inexpensive and can be promptly obtained at any location in the hospital [1]. When correctly placed, the tip of the ETT should be positioned in the mid-tracheal region, or halfway between the clavicles and the carina. An upper or lower chin position impacts the ETT depth, rendering it higher or deeper, respectively [3]. Thus, ETT positioning must be precise to suppress the incidence of complications, including tracheal damage and hyperinflation of the lung [1]. Hence, using CXR images and artificial intelligence (AI), this study aimed at predicting the appropriateness of the ETT position by evaluating the relative distribution of four key points (i.e., tracheal tube end, carina, and left/right clavicular heads).
In recent years, AI technology has facilitated medical image analysis and biomedical signal processing, and its use for object detection and recognition in X-rays is an emerging area. So far, AI models have seen limited applicability in ICU clinical practice, where AI may assist clinicians on diagnostic, prognostic, and curative levels to improve patient outcomes. Currently, it has been used in detecting therapeutic tubes and catheters [4,5]. Deep learning, a form of AI, is being increasingly employed in the processing of medical images of the eyes, brain, breast, chest, musculoskeletal system, pelvis, and abdomen [6,7], and could automate the detection of thoracic disorders in CXRs [8]. Among deep learning approaches, the convolutional neural network (CNN) was adopted in this work as the core computational model to extract image data features at different levels by utilizing multiple processing layers.
The underlying model architecture mainly comprised two parts. The encoder was implemented based on a pre-trained DenseNet121 to transform images into embedding features, while the decoder was a CNN subnetwork boosted with attention mechanisms to generate the probability distributions of the four key points. Through the proposed CNN model, we attempted to characterize the ETT position with reference to sensitivity, specificity, and accuracy.
Datasets
This research employed datasets from Taipei Medical University Hospital (TMUH), one of the leading teaching hospitals in Taiwan. Following the approval of the institutional review board of TMUH (IRB number: N202007011), 427 images from 183 patients were obtained from the TMUH database and were split into training and validation sets, such that CXR images from the same patient would only appear in either the training or the validation set. Specifically, we randomly (patient-wise) split the data (80:20) into training (n = 348) and validation (n = 79) sets. In agreement with a previous report [9], the left/right clavicular heads were included, as these locations, referenced to the trachea, might help to determine whether the patient's head was in a neutral position during chest radiography. Since the ETT position varies with neck position and rotation, we also included the mandible and C7 vertebra as important indicators [10]. For the training and validation sets, each chest X-ray was annotated by one physician, whereas for the test set a consensus among three physicians on the four key points was obtained. The consensus for each key point was defined as the midpoint of the three marked key points. Besides these four key points, a consensus on the appropriateness of the ETT position was also derived for the test set. Label 1 represents an adequate ETT position (normal), while 0 indicates an abnormal position. In total, the ratio of normal to abnormal images in the test dataset was 2:1, and 40 out of 42 images were categorized in the mandible-above-C7 group, indicating that the mandible of the patient was above C7 in the radiograph (Table 1). Besides the standard machine learning process, which used training, validation, and test sets to learn and evaluate the model, an isolated clinical evaluation set was collected to demonstrate the level of agreement between the model and a physician in common clinical practice. This set comprised a total of 103 images from 35 patients. To simulate the condition of applying the model for ETT position appropriateness in clinics, the images were randomly partitioned into five portions, each of which was reviewed solely by one of a group of three certified physicians. Each image was annotated as normal (appropriate ETT position) or abnormal.
No consensus was made, and no key points were marked in this set. All of these images were categorized in the mandible-above-C7 group, and the ratio of normal images (n = 79) to abnormal ones (n = 24) was approximately 3:1.
Data Preprocessing and Augmentation
All of the images were normalized to a 0-1 intensity scale and then resized to 512 × 512 for modeling. Each key point (x, y) was transformed into a two-dimensional Gaussian of constant variance (σ = 10 pixels), centered at the coordinates marked by the physicians. During model training, data augmentation was performed via random scaling and random rotation to prevent the model from overfitting the training set. For random scaling, the factor was randomly selected between 90% (zoom-in) and 125% (zoom-out), whereas the rotation range was set between −45 and 45 degrees.
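A minimal sketch of the label construction described above follows (NumPy; the unnormalized Gaussian form is an assumption consistent with the stated constant variance):

```python
# Hedged sketch: each annotated key point (x, y) becomes a 512 x 512 heatmap
# containing a 2D Gaussian with sigma = 10 pixels centered at the annotation.
import numpy as np

def keypoint_heatmap(x, y, size=512, sigma=10.0):
    # xs varies along columns (x), ys along rows (y), matching image indexing.
    xs, ys = np.meshgrid(np.arange(size), np.arange(size))
    return np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))

def make_labels(keypoints, size=512):
    """keypoints: four (x, y) tuples -> label array of shape (size, size, 4)."""
    return np.stack([keypoint_heatmap(x, y, size) for x, y in keypoints], axis=-1)
```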
Modeling Framework
To automate the evaluation of the ETT position on chest X-rays, our proposed framework consists of two main components. The first component is a two-stage key point detection model, which detects the four key points, namely tube end, tracheal carina, and left/right clavicular heads, and subsequently fine-tunes the locations of the tube end and tracheal carina (Figure 2). The inputs and labels in the first component were two-dimensional data of size 512 × 512; the inputs were preprocessed chest X-ray (CXR) images, and the labels were two-dimensional Gaussian distributions. The second component predicts the appropriateness of the ETT position based on the four key points detected by the first component; a set of clinical parameters is then applied to the estimated coordinates to derive the appropriateness of the ETT position. The details of the first and second components of our model are described in the next sections.
The inputs and labels in the first component were two-dimensional data of size 512 × 512. The inputs were preprocessed chest X-ray (CXR) images, and the labels were two-dimensional Gaussian distributions. The second component predicts the appropriateness of the ETT position based on the four key points detected by the first component: a set of clinical parameters is applied to the estimated coordinates to derive the appropriateness of the ETT position. The details of the first and second components of our model are described in the next sections.
The First Component: Two-Stage Key Point Detection Model
As illustrated in Figure 2, we used DenseNet121 (pre-trained on ImageNet) as the encoder to transform images into embedding features. The decoder, which generates probability distributions of the four key points, was composed of three convolutional layers with the spatial and channel squeeze and excitation (SCSE) module [11], followed by a 1 × 1 convolutional layer. The networks in our encoder-decoder structure were identical in both stages, and the input (512 × 512 × 1) and output (512 × 512 × 4) formats for each stage were the same. The value of four in the third (channel) dimension of the output corresponds to the number of key points to be detected. Each feature map in that dimension represents the estimated 2D probability distribution of a key point, whose coordinates (x, y) were derived by taking the location of the highest probability.
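The sketch below illustrates this kind of encoder-decoder heatmap regressor in PyTorch. It keeps the stated ingredients (a DenseNet121 encoder, SCSE attention in the decoder, a final 1 × 1 convolution producing four heatmaps, and argmax decoding of coordinates), but the layer counts and upsampling pathway are illustrative assumptions rather than the exact configuration used in the experiments.

```python
import torch
import torch.nn as nn
from torchvision.models import densenet121

class SCSE(nn.Module):
    """Spatial and channel squeeze & excitation attention block [11]."""
    def __init__(self, ch, r=16):
        super().__init__()
        self.cse = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                 nn.Conv2d(ch, max(ch // r, 1), 1), nn.ReLU(),
                                 nn.Conv2d(max(ch // r, 1), ch, 1), nn.Sigmoid())
        self.sse = nn.Sequential(nn.Conv2d(ch, 1, 1), nn.Sigmoid())
    def forward(self, x):
        return x * self.cse(x) + x * self.sse(x)

class KeypointNet(nn.Module):
    """Heatmap regressor: 512x512x1 input, 512x512x4 output (one map per point)."""
    def __init__(self, n_points=4):
        super().__init__()
        self.encoder = densenet121(pretrained=True).features  # 1024 ch @ 16x16
        blocks, ch = [], 1024
        for _ in range(5):  # five 2x upsamples: 16 -> 512 (illustrative)
            blocks += [nn.Upsample(scale_factor=2, mode="bilinear",
                                   align_corners=False),
                       nn.Conv2d(ch, ch // 2, 3, padding=1), nn.ReLU(),
                       SCSE(ch // 2)]
            ch //= 2
        self.decoder = nn.Sequential(*blocks)
        self.head = nn.Conv2d(ch, n_points, 1)  # final 1x1 convolution
    def forward(self, x):
        x = x.repeat(1, 3, 1, 1)  # grayscale -> 3-channel for the encoder
        return self.head(self.decoder(self.encoder(x)))

def decode(heatmaps):
    """(x, y) = location of the highest probability in each heatmap."""
    b, k, h, w = heatmaps.shape
    flat = heatmaps.view(b, k, -1).argmax(-1)
    return torch.stack((flat % w, flat // w), dim=-1)  # (batch, k, 2)
```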
The input for the second stage was obtained by cropping and resizing each source image with respect to the four key points. During training, the key points used to crop the source images were the ones marked by the physicians; for inference, we used the predicted key points from the first stage to obtain the cropped images. We employed the PyTorch framework (version 1.8.1) throughout the training process. Each stage was trained separately with the Adam optimizer, a learning rate of 0.0001, and a batch size of 8 for 1000 epochs on a GeForce GTX 2080 Ti graphics processing unit (GPU). We used binary cross-entropy as the loss function for appropriateness classification and L1 distance as the loss between the predicted and ground-truth key point heatmaps.
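A minimal training-loop sketch under the stated settings (Adam, learning rate 0.0001, batch size 8, 1000 epochs, L1 loss against ground-truth heatmaps) follows; it reuses the `KeypointNet` sketch above, and `loader` is an assumed PyTorch DataLoader yielding (image, heatmap) batches of shapes (8, 1, 512, 512) and (8, 4, 512, 512).

```python
import torch

model = KeypointNet().cuda()                      # from the sketch above
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = torch.nn.L1Loss()                       # heatmap regression loss

for epoch in range(1000):
    for images, target_heatmaps in loader:        # assumed DataLoader
        optimizer.zero_grad()
        loss = loss_fn(model(images.cuda()), target_heatmaps.cuda())
        loss.backward()
        optimizer.step()
```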
The Second Component: Appropriateness Prediction
Given the four key points, we estimated the distances from the tube end to the tracheal carina and to the midpoint of the clavicular heads, and applied clinical parameters to these distances for appropriateness prediction (Figure 3). The parameters comprised four candidate tube-to-carina distance ranges: (i) 30 mm to 70 mm, (ii) 30 mm to 60 mm, (iii) 20 mm to 60 mm, and (iv) 20 mm to 55 mm, where an estimated distance within the specific range would be considered normal (Table 2). Using the training and validation data, we learned to detect the four key points and hypothesized the four distance ranges based on related work and experts' domain knowledge. On the other hand, the consensus of the three physicians also implicitly defined an underlying range for deciding position appropriateness. Thus, by assuming in turn each of the four ranges as the decision rule, the one that yields the best performance on the test set should be the closest to the range implied by the consensus process. We also experimented with the tube-to-clavicle distance along with these four sets of parameters (Table 3). Specifically, an ETT position would be considered normal when its tube-to-carina distance was within the given range and its tube-to-clavicle distance was greater than or equal to zero (i.e., the tube was below the midpoint of the clavicular heads). Additionally, for appropriateness prediction, we averaged the binary predictions (0 or 1) derived from the combinations generated by the following parameters: (i) the lower bound of the tube-to-carina distance ranged from 20 mm to 30 mm with an interval of 1 mm, (ii) the upper bound of the tube-to-carina distance ranged from 55 mm to 70 mm with an interval of 1 mm, and (iii) the tube-to-clavicle threshold ranged from −5 mm to 5 mm with an interval of 1 mm. There was a total of 1936 combinations, each of which served as a set of parameters for appropriateness prediction, and the averaged predictions were compared to the ground truths to evaluate model performance on the test dataset. Unlike the use of expert knowledge, which limits the candidate tube-to-carina ranges to four, this general setting considers a total of 1936 combinations and yields a weighted appropriateness prediction. The weighted scheme was used for deriving the ROC curve and AUC.
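The weighted scheme can be made concrete with the short sketch below, which enumerates the 11 × 16 × 11 = 1936 rule combinations and averages their binary votes; the function name is illustrative, and the two distances (in mm) are assumed to be precomputed from the predicted key points.

```python
def appropriateness_score(tube_to_carina_mm, tube_to_clavicle_mm):
    """Average the binary votes of all 1936 rule combinations (range [0, 1])."""
    votes, total = 0, 0
    for lo in range(20, 31):           # 11 lower bounds: 20..30 mm
        for hi in range(55, 71):       # 16 upper bounds: 55..70 mm
            for clav in range(-5, 6):  # 11 clavicle thresholds: -5..5 mm
                ok = (lo <= tube_to_carina_mm <= hi
                      and tube_to_clavicle_mm >= clav)
                votes += int(ok)
                total += 1
    assert total == 1936
    return votes / total

print(appropriateness_score(45.0, 2.0))  # e.g., a mid-range tube position
```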
Statistics
For the two-stage key point detection, we measured the mean absolute distance in mm between the predicted key points and the ground truths. Model efficacy was determined by performance indices in terms of sensitivity and specificity, which were compared with the consensus of the physicians. For appropriateness prediction, the receiver operating characteristic (ROC) curve and the area under the ROC curve (AUC) were measured, and the optimal sensitivity and specificity were derived using the Youden index.
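As a sketch, the ROC curve, AUC, and Youden-optimal operating point can be obtained with scikit-learn as follows; the labels and scores are placeholders standing in for the consensus labels and weighted appropriateness predictions.

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

y_true = np.array([1, 1, 0, 1, 0, 0, 1, 0])                    # placeholders
y_score = np.array([0.9, 0.8, 0.3, 0.7, 0.4, 0.2, 0.95, 0.6])

fpr, tpr, thresholds = roc_curve(y_true, y_score)
print("AUC:", auc(fpr, tpr))

j = tpr - fpr                  # Youden's J = sensitivity + specificity - 1
best = int(np.argmax(j))
print("optimal threshold:", thresholds[best],
      "sensitivity:", tpr[best], "specificity:", 1 - fpr[best])
```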
Key Point Detection
We found the mean absolute distances of the tube end and tracheal carina predictions to be 3.04 mm and 2.42 mm, respectively. In comparison, the mean absolute distances of the left (3.77 mm) and right (3.59 mm) clavicular head predictions were slightly greater, indicating that these two points were more difficult to locate, most likely due to uncertainty in the annotations. Examples of the predicted key points superimposed over the original images are shown in Figure 4.
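For reference, the error metric can be sketched as follows, assuming it is the mean Euclidean distance between predicted and consensus coordinates converted to millimetres via a pixel-spacing factor (e.g., from the DICOM header); the function and variable names are illustrative.

```python
import numpy as np

def mean_absolute_distance(pred_xy, true_xy, mm_per_pixel):
    """pred_xy, true_xy: arrays of shape (n_images, 2) for one key point."""
    errors_px = np.linalg.norm(pred_xy - true_xy, axis=1)  # per-image error
    return float(errors_px.mean() * mm_per_pixel)
```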
Appropriateness Prediction
To predict the appropriateness of the ETT position, we applied the four sets of parameters to the tube-to-carina distances derived from the predicted tube end and carina; the results are reported in Table 2. The sensitivity derived from the 20-60 mm and 30-60 mm distance ranges (57.14%) was higher than that from the 30-70 mm range (35.71%), and it reached its highest value (71.42%) when the maximal distance considered adequate was reduced to 55 mm. This indicates that a tube-to-carina distance between 20 mm and 55 mm is optimal for an AI-based key point appropriateness detection system and may be comparable to the physicians' consensus. Moreover, the improved performance from only a 5 mm difference in the maximal tube-to-carina distance could be attributed to this parameter compensating for the detection errors on the tube end (3.04 mm) and the carina (2.42 mm). This improvement and the ensuing sensitivity-specificity trade-off reflect the gray area of subjective interpretation of the ETT position when the tube end lies between 55 mm and 70 mm from the carina.
Moreover, we further evaluated the performance by adding the tube-to-clavicle distance as an additional parameter (Table 3).
Our results showed greatly improved sensitivities for all of the parameter sets, especially the 30-70 mm range. This appropriateness prediction model achieves higher sensitivity without significantly compromising specificity and corroborates our previous findings in Table 2, indicating possible uncertainty in the ETT position when the tube is located near the clavicular heads.
Receiver Operating Characteristic (ROC) of the Prediction Model
To predict the appropriateness of the ETT position, we applied each combination of parameters to the key points and then averaged the binary appropriateness predictions to derive the ROC curve (Figure 5) and AUC. We also applied the same process to the key points from the physicians' consensus to produce a comparative result. Our prediction model achieved an area under the curve (AUC) of 0.9381, which is very close to that of the physicians' consensus (0.9175). Further, using the Youden index, we found the optimal sensitivity and specificity of our model to be 100.00% and 84.62%, respectively.
Appropriateness Prediction on the Clinical Evaluation Dataset
To demonstrate the level of agreement in a common clinical setting, we followed the same process described above to generate appropriateness predictions on the clinical evaluation set. The ROC curve is presented in Figure 6: our model had an AUC of 0.7181 on this isolated clinical evaluation set, and the optimal sensitivity and specificity derived with the Youden index were 79.17% and 56.96%, respectively. This indicates that the model did not have a high level of agreement with a single physician, and inconsistency in the subjective appropriateness annotations may be an issue.
Discussion
This is the first report on carina- and clavicular-distance-dependent identification and localization of ETT positioning in critically ill patients. In this study, based on the clinical practice of ETT position evaluation, four key points, including the tracheal tube end, carina, and left/right clavicular heads, were identified for each chest X-ray image. With respect to the trachea, the left and right clavicular heads were included as these might assist in localizing the ETT position. Since the ETT position varies with neck position and rotation, we also included the mandible and C7 as important indicators [10]. To date, a few studies have investigated various parameters for determining ETT insertion and final positioning. Some of them are based on vocal cord-carina distance and tracheal length [12], stature, and incisor manubrio-sternal joint length [13]. Further, an individual's height [14] and various anatomical landmarks have also been employed to predict airway length, with varying outcomes [13]. A seminal study reported that the tracheal midpoint corresponds internally to a line drawn between the medial heads of the clavicles, and hence the clavicles have also been employed as a reference point [15]. When placed correctly, the tip of the ETT must be positioned in the mid-tracheal region, halfway between the inferior clavicle and the carina, which also coincides with our clinical rule [16]. However, Blayney et al. found the position of the clavicle on chest X-ray to be inconsistent and suggested using the first thoracic vertebra as a marker for correct tip placement [17]. Taken together, irrespective of the reference point used, the tube must always be positioned at a safe distance from the carina to avoid accidental endobronchial intubation.
According to Goodman's criteria, the ETT should ideally be placed in the mid-trachea, approximately 50 mm above the carina with the patient's head in a neutral position, allowing for neck extension or flexion values of about 20 mm of downward or upward movement [18,19]. It is also recommended that a mean tracheal tip-to-carina distance of 40 mm (range: 30-50 mm) may avert carinal impingement and endobronchial intubation [20]. In one study, a fiberoptically measured optimal tube placement of 2.5-4 cm above the carina was documented [21]. In accordance with this report, a seminal randomized trial of 160 patients recommended the 20/22 cm rule (i.e., inserting tubes to 20-21 cm in women and 22-23 cm in men, with the distal ETT tip less than 2.5 cm away from the carina to avert inadvertent endobronchial intubation) [22]. In contrast to adults, the ETT tip-to-carina distance has been reported as 1.57 cm in younger individuals [23]. Notably, the distance from the ETT tip to just above the tracheal carina in infants has been suggested to be in the range of 0.2 to 2 cm, depending on their age [24,25]. However, in a randomized controlled trial on newborn infants, a correct tube position of less than 0.2 cm just above the carina was employed [26]. Considering this evidence, out of the four tube-to-carina distance parameters, our prediction model showed the highest sensitivity of 71.42% for the 20-55 mm range. When based on both the tube-to-carina and tube-to-clavicle distances, the highest sensitivity and specificity (92.85% and 84.62%, respectively) were also obtained for the 20-55 mm range. This result is in line with Goodman's criteria and implies that a tube-to-carina distance between 20 mm and 55 mm is optimal, within the safe limit, and comparable to the physicians' consensus. The less satisfactory results on the other three distance ranges do not imply the ineffectiveness of our model, but rather their deviation from the underlying range criterion embodied in the three physicians' consensus. With the 20-55 mm range for classifying the tube-to-carina distances, our model can effectively determine whether the position of the tube end is proper or not. A recent study using the MIMIC chest X-ray database also employed CNN-based algorithms to identify and localize the ETT position relative to the carina on chest radiographs [27]. Their distal ETT tip was localized within median errors of 4.6 mm and 6.0 mm of the ground-truth annotations, which are higher than our results. However, that study did not include the clavicular distance as a parameter, and their ETT detection demonstrated a sensitivity, specificity, accuracy, and AUC of 0.9737, 0.9689, 0.9714, and 0.9958, respectively. Our prediction model showed an AUC of 0.9381, which nearly coincides with the physicians' consensus AUC of 0.9175. In addition, a computer-aided detection technique has been used to estimate ETT positioning, with a sensitivity of 85% and an accuracy of 81% for ETT detection and localization within 10 mm of the ground-truth annotations of the test images [28]. Using a fully convolutional CNN model with combined real and synthetic data, the entire course of the ETT has been localized; however, that approach did not locate the distal ETT tip relative to the carina [29].
Apart from detecting the ETT on chest radiographs, previous studies have applied deep CNNs to the Indiana, JSRT, and Shenzhen datasets to localize a range of abnormalities, achieving the highest accuracy (92%) and the highest AUC (0.9408) for detecting cardiomegaly [30]. Further, a 121-layer CNN trained on the NIH dataset has also been used for predicting pneumonia from frontal-view chest X-ray images [31]. Notably, recent years have seen an increasing number of studies on ICU-AI models, mostly focusing on predicting complications and mortality and improving prognostic models [32]. Using large population datasets, AI has mainly been employed in critical care to predict length of stay, ICU readmission and mortality rates, complications, and risk stratifications [33]. Lately, a systematic review revealed that ML models could accurately predict the onset of septicemia in ICU patients [34]. In one important study, deep learning was found to be as effective as senior radiologists in detecting lung nodules in CXRs and CT scans [35]. These studies provide the basis for developing extensive deep-learning-based prediction models to evaluate ETT position using chest radiographs of ICU patients.
Despite these outcomes, this study has some limitations. Our results may be valid only for the Taiwanese population and may not apply to other ethnic populations. However, recent studies have documented no clinically major difference between the tracheal diameters of adult Chinese and Caucasian patients [36]. Moreover, the impact of ethnicity on tracheal diameter has been found to be small after adjusting for age, sex, height, and weight. We therefore assume that this model has the potential to generalize. It is of note that the normal/abnormal label was tagged by a physician, while the prediction model employed the rules of the clinical standard. Therefore, predictions derived from the four key points may not always match physicians' judgments, owing to the difference between that judgment and the clinical rules. Another limitation is that clinicians may judge the intubation check with reference to the patient's body shape, while the computer only considers the distances between the tube, clavicle, and carina. The appropriateness of the ETT position was decided by physicians as per clinical experience. A consensus was taken for the first test set to determine the model performance. For the second test set, however, no model training was required, which is why clinical validation was carried out only with a standard single opinion, made by a trained physician, representative of many physicians' consensus.
We would also like to highlight that our model has been evaluated on both the test set and the clinical evaluation set. It can be noticed from our data that the ground truths of the two sets are not of the same quality: the ground truth of each image in the test set was obtained by the consensus of three experienced physicians, while in the clinical evaluation set, it was decided by only a single physician. The design of these two experiments is twofold. Firstly, we aimed to demonstrate that the proposed AI-based model can yield good performance with respect to more reliable ground-truth annotations (i.e., the consensus-based test set). Secondly, we attempted to show that, without explicitly marking the four key points, the task of deciding whether the position of an ETT end in a CXR is appropriate is at times challenging even for an experienced physician. Thus, the less satisfactory performance on the clinical evaluation set is largely due to subjective variation in ground truths decided by only one physician, rather than an inadequacy of the proposed model. Hence, in the ICU, the detection of an inappropriate tracheal tube end position is possible by the physician-in-charge; however, the proposed model should be useful in aiding such decisions with enhanced accuracy. In emerging studies, the subjective opinions of clinical experts are the traditional basis of clinical practice [37], sometimes using consensus development [38]. In line with this, a previous study also used the majority vote of three cardiothoracic specialty radiologists as ground truth [39]. Further, our training dataset may include multiple CXR images of a patient that were not taken consecutively. It is of note that a patient is likely to be subjected to CXR examinations several times during a single ICU stay, and the time intervals between these CXRs are usually at least 12 h. In addition, the position of the tracheal tube end of a patient in a CXR is not invariably fixed and can change due to different head positions or CXR viewing angles. These conditions greatly reduce the impact of repeated CXR images on model performance.
Our prediction suggests the need for extensive research and consensus on the ideal position of the ETT, as even a minor length difference may have an adverse impact on respiratory morbidity, particularly in neonates or infants [40]. With currently manufactured tubes bearing whole-centimeter markings, adjustments of less than 1 cm remain challenging. Therefore, we suggest that manufacturing ETTs with 1/2 cm markings may aid more accurate placement.
Conclusions
Our CNN approach, combined with four key point annotations on chest radiographs (tracheal tube end, carina, and left/right clavicular heads), demonstrates significant sensitivity, specificity, and accuracy for both the identification and localization of the ETT tip on chest radiographs. Our results may support the radiographic confirmation of precise ETT placement and could help in ruling out other etiologies associated with respiratory failure. In the future, the clinical integration of deep learning tools, including user-interface optimization to minimize workflow disruption and improve overall clinical response time, may be targeted. Such a system of machine learning and neural networks could handle enormous volumes of data, bringing positive changes to clinical decision-making processes, such as the automated interpretation of medical images. Additionally, it would reduce the medical staff's workload and enhance patient safety.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
"year": 2022,
"sha1": "de5ef4e53b34eb78c2eeb4f5f8e94f62596ac91a",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "de5ef4e53b34eb78c2eeb4f5f8e94f62596ac91a",
"s2fieldsofstudy": [
"Medicine",
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
A Review of Viruses Infecting Yam (Dioscorea spp.)
Yam is an important food staple for millions of people globally, particularly those in the developing countries of West Africa and the Pacific Islands. To sustain the growing population, yam production must be increased amidst the many biotic and abiotic stresses. Plant viruses are among the most detrimental of plant pathogens and have caused great losses of crop yield and quality, including those of yam. Knowledge and understanding of virus biology and ecology are important for the development of diagnostic tools and disease management strategies to combat the spread of yam-infecting viruses. This review aims to highlight current knowledge on key yam-infecting viruses by examining their characteristics, genetic diversity, disease symptoms, diagnostics, and elimination to provide a synopsis for consideration in developing diagnostic strategy and disease management for yam.
Introduction
Yam (Dioscorea spp.) is the collective name for a group of multi-species, dioecious, and monocotyledonous vines cultivated primarily for their starchy underground tubers. The Dioscorea genus belongs to the family Dioscoreaceae of the order Dioscoreales and comprises approximately 600 species of domesticated and wild yams. However, only about 10 edible species are widely grown for food in the tropical and sub-tropical regions of the world [1][2][3][4][5]. In particular, yam is a major food staple and an important source of carbohydrates for the people of Africa, Central and South America, parts of Asia, the Caribbean, and the Pacific Islands [6][7][8][9][10]. In addition, yam holds significant economic, social and cultural value wherever it is cultivated [11][12][13].
Globally, yam is the fourth most important root and tuber crop by production after potato, sweet potato, and cassava. The 'yam belt' of West Africa, comprising Benin, Ivory Coast, Ghana, Nigeria, and Togo, contributes about 95% of global yam production [14]. Despite its socio-economic importance, yam production is heavily constrained by many factors, such as the high cost and limited availability of clean "seed" material, pests, and diseases [2,15,16]. Pests and diseases, such as bacteria, fungi, insects, nematodes, and viruses, have direct negative impacts on yield and quality. Of these, viruses pose the most serious problems as they are the hardest to control, are easily spread with planting material, and have been reported in all yam-growing regions of the world [2,6,16,17].
The constraints of viral diseases on yam production necessitate urgent efforts for their identification, diagnosis, and management. Global efforts on yam virus characterization over the last five years have led to 13 novel virus species being recognized by the International Committee on Taxonomy of Viruses (ICTV) [19]. Whilst specific studies on yam viruses have focused on their identification, characterization, and diagnostic methods [12,13,24,29-33], broader studies have focused on virus incidence and distribution across different eco-geographical zones and on the food and yield losses associated with yam virus infections [1,6,9,16,17,34,35]. The current gap in yam virus studies is the lack of an in-depth overview of the key and prevalent yam-infecting viruses in a single reference.
This review aims to provide an overview of yam-infecting viruses by describing the most prevalent viruses reported to infect yam to date through discussions of their viral characteristics, disease symptoms, diagnostics, and sanitation methods. The information contained herein will provide insights for anyone working on developing diagnostic strategies and disease management in yam. It serves as a guiding document for researchers interested in conducting yam virus characterization, diagnostic, and prevalence studies. There is an increased need to exchange yam germplasm to support breeding, evaluation for desired biotic and abiotic stresses, and increased production to support food and nutrition security. This review will serve as a point of reference for future global discussions on developing consolidated strategies for yam virus diagnostics to inform safe yam germplasm exchange. The existence of a consolidated general overview of yam viruses provides the opportunity to identify gaps in yam virus characterization and diagnostic protocols that need to be addressed to develop a health testing system to prevent the further geographical spread of yam viruses.
Family Potyviridae
The family Potyviridae is the largest family of plant-infecting RNA viruses. There are eight genera in the family, and members are distinguished by host range, genomic features, and phylogeny [36]. The virions are flexuous, filamentous particles ranging from 680 to 900 nm in length and 11 to 20 nm in width, whilst the genomes are single-stranded, positive-sense ribonucleic acids (RNAs) ranging from 8.3 to 11.3 kilobases (kb) in length. The typical genome of the family Potyviridae encodes a single large open reading frame (ORF), except for the genus Bymovirus, which contains two ORFs. However, a second small ORF, termed pretty interesting Potyviridae ORF (PIPO), was discovered embedded in an alternative reading frame [37]. The polyproteins are cleaved into nine or more functional proteins by virus-encoded proteinases. Viruses from two genera, Potyvirus and Macluravirus, are important viruses infecting yam and are discussed in the following sections.
Genus Potyvirus
The genus Potyvirus is the largest genus and the most extensively studied within the family Potyviridae [38]. To date, 176 species have been described and they contain some of the most economically important viruses, such as potato virus Y (PVY) and plum pox virus (PPV) [39]. Potyviruses are generally narrow in host range and are transmitted by over 200 species of aphids in a non-persistent and non-circulative manner [38,40]. The three main viruses infecting yam from this genus are described.
The yam mosaic virus (YMV) (species Yam mosaic virus) is the most prevalent and economically important yam virus, infecting both cultivated and wild yams [6,41]. It has been reported in all yam-growing regions of West Africa, the West Indies, and the Caribbean, and is commonly found to infect D. alata, D. rotundata, and the D. cayenensis-rotundata complex [41], but is currently absent from the Pacific region [42]. YMV was first reported to infect D. cayenensis from the Ivory Coast [26]. The complete genome sequence of the YMV Ivory Coast isolate (accession number: U42596), reported by Aleman et al. [43], comprises 9608 nucleotides (nt). The complete genome of a second isolate, YMV-NG (accession number: MG711313), was determined to be 9594 nt in length and shared 85% sequence identity with the YMV Ivory Coast isolate [44]. The polyprotein is cleaved by viral proteinases into ten mature proteins and one fusion protein (P3-PIPO) critical for viral replication and movement (Figure 2) [37,40]. The virus is naturally transmitted through vegetative propagation of infected material but may also be transmitted by aphid vectors, such as Aphis gossypii (cotton aphid) and A. craccivora (cowpea aphid), in a non-persistent manner [26]. YMV can also be mechanically transmitted to other species, such as Nicotiana benthamiana, N. megalosiphon, and Chenopodium amaranticolor [2,41,45]. Symptoms on infected plants may include mottling, chlorosis of leaf and vein, interveinal mosaic, leaf distortion, and stunted growth [2,46]. Severe losses have been reported for YMV-infected D. rotundata, the most important food yam in West Africa [6]. The use of clean virus-free seed and breeding for YMV-resistant planting materials remain the most effective methods to mitigate the spread of YMV [25,33,47].
Understanding the genetic diversity of YMV isolates is critical for diagnostic tool development and yam improvement. The first analysis of diversity among YMV isolates identified six distinct serogroups based on symptomology, Western immunoblotting, and enzyme-linked immunosorbent assay (ELISA) [45,48]. Molecular genetic diversity was assessed by sequencing the C-terminal part of the replicase (NIb), the CP, and the 3′-untranslated region (3′UTR) of 27 YMV isolates of D. alata, the D. cayenensis-rotundata complex, and D. trifida from Africa, the Caribbean, and French Guiana [49]. The CP region of YMV was the most variable compared to eight other potyviruses, and phylogenetic analyses revealed nine distinct molecular groups, with the most diversified and divergent groups including isolates originating from Africa [41,50]. Azeteh et al. [1] reported a phylogenetic analysis of 27 YMV isolates from Cameroon based on CP sequences, placing them into three phylogenetic groups. However, this clustering did not correspond to agro-ecological zones or yam species and is likely attributed to the inter-zonal movement of planting materials and spread through aphid transmission.
Knowledge of the genetic mechanism of resistance to YMV in yam remains scarce, and conventional breeding for YMV resistance remains a challenge [23,51]. The inheritance of resistance to YMV was investigated in three tetraploid D. rotundata genotypes: TDr 93-1, TDr 93-2, and TDr 89/01444. Disease resistance and susceptibility in parental plants and progenies were scored on a visual disease severity scale of 1 to 5, where plants scoring less than or equal to 2 were considered resistant and those above 2 susceptible, as determined by the International Institute of Tropical Agriculture (IITA), Nigeria, in 1998. Virus infection was further confirmed by TAS-ELISA. The F1 progeny segregation ratio indicated that resistance in TDr 89/01444 was governed by a single dominant gene in a simplex condition, whilst resistance in TDr 93-2 was associated with the presence of a major recessive gene in a duplex configuration. However, it was noted that the presence of mild mosaic symptoms and a low titer of YMV in the resistant parental plants by TAS-ELISA, together with the observation of asymptomatic F1 progenies with high YMV titers, may not represent true resistance but rather a form of tolerance in these genotypes [51]. Similarly, Bakayoko et al. [25] evaluated resistance to YMV in 206 F1 progenies in the Ivory Coast based on the same disease severity scale reported by IITA (1998) and found that 91.3% of the progenies were considered resistant to YMV. However, this result was inconsistent with molecular analysis, which revealed that 41.7% of the F1 progenies with low severity scores were positive for YMV infection. It was suggested that 1-year-old F1 hybrid progenies are not reliable for evaluating viral resistance in breeding programs. A subsequent study by Mignouna et al. [46] identified two RAPD markers (OPW18-850 and OPX15-850) closely linked in the coupling phase with the Ymv-1 gene and mapped to the same linkage group. These represented the first DNA markers for YMV resistance in yam.
Several serological and nucleic acid-based diagnostic methods have been described for the detection of YMV, such as triple antibody sandwich enzyme-linked immunosorbent assay (TAS-ELISA) and immunocapture reverse transcription-polymerase chain reaction (IC-RT-PCR), with RT-PCR targeting the CP region of the YMV genome being the most frequently used method (Table 1) [50,52,53]. However, these methods are laborious and involve many successive steps for target detection [24]. A rapid YMV-specific detection method by reverse-transcription recombinase polymerase amplification (RT-RPA) was described by Silva et al. [24]. Results from RT-RPA were found to be reproducible and of similar sensitivity to RT-PCR. However, RT-RPA has many advantages over RT-PCR, such as a rapid sample processing time of less than 30 min and a single incubation temperature (optimally 37 °C) for amplification. Similarly, Nkere et al. [33] reported a chromogenic detection method for YMV by closed-tube RT loop-mediated isothermal amplification (CT-RT-LAMP). YMV-positive samples were visualized by chromogenic detection with SYBR Green I dye, and the assay was reported to be 100 times more sensitive than the standard RT-PCR method.
The yam mild mosaic virus (YMMV) (species Yam mild mosaic virus) is the second most important potyvirus to infect yam after YMV [54]. It was first described as yam virus 1 (YV1) and Dioscorea alata virus (DAV) and is very prevalent in Africa, infecting D. alata [27,55]. YMMV is now classified as a distinct potyvirus infecting yams based on the ICTV criteria for potyvirus species demarcation [30,32,34]. The complete genome sequences of YMMV isolates range from 9521 to 9538 nt in length, excluding the poly(A) tail. The Brazilian isolate encodes a polyprotein of 3084 aa from a large ORF that is cleaved into 11 functional proteins: P1, HC-Pro, P3, 6K1, CI, 6K2, NIa-VPg, NIa-Pro, NIb, PIPO, and CP [30]. The CP contains a DAG motif, located at the N-terminus, typical of potyviruses and with an important function in aphid transmissibility [54]. YMMV differs from other potyviruses based on the ICTV criteria for species demarcation in the family Potyviridae, which stand at <76-77% nt and <80% aa identity in the CP region. The aa sequence identity in the CP region between YMMV and YMV is 57.1% [32].
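As an illustration of how such identity thresholds are applied, the sketch below computes percent identity over a pairwise alignment; the sequences are short placeholders, whereas real comparisons would use full aligned CP regions.

```python
def percent_identity(seq_a, seq_b):
    """Percent identity over aligned positions, skipping gap-gap columns."""
    pairs = [(a, b) for a, b in zip(seq_a, seq_b) if (a, b) != ("-", "-")]
    matches = sum(a == b for a, b in pairs)
    return 100.0 * matches / len(pairs)

cp_a = "ATGGCAGGTCCA"  # placeholder aligned CP fragments
cp_b = "ATGGCTGGACCA"
ident = percent_identity(cp_a, cp_b)
print(f"CP identity: {ident:.1f}% ->",
      "candidate distinct species" if ident < 76.0 else "same species candidate")
```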
Virus distribution is mainly through vegetative propagation of infected tubers or vines but transmission by the aphid vector A. craccivora has been reported [27]. The virus typically causes mild mottling and mosaic symptoms on D. alata, D. cayenensis-rotundata complex, and D. trifida but appears to be symptomless on D. rotundata [52]. Whilst YMMV does not pose significant constraints to yam production compared to YMV, its global presence necessitates an understanding of its dispersion and diversity. In the Pacific, YMMV has been reported from Fiji, Papua New Guinea, Solomon Islands, New Caledonia, and Vanuatu [32,34,42].
A phylogenetic analysis using the 798 nt 3′-terminal region of the CP and the 5′-terminal region of the 3′-non-coding region of 36 YMMV isolates of Asian, Pacific, African, Amazonian, and Caribbean origins placed them into eight distinct genetic clades sharing a common ancestor [34]. Another phylogenetic analysis of 18 isolates from Ghana and Nigeria placed these isolates into two major groups; however, this clustering did not correlate with the geographic origin of the isolates, suggesting that there may have been an exchange of virus-infected tubers and seeds within the region [54].
Zou et al. [56] recently conducted a comprehensive study of YMMV genetic diversity on 86 isolates from West Africa, Asia, the South Pacific, and America and clustered them into 14 distinct groups, indicating high genetic diversity among global YMMV isolates. Whilst YMMV isolates from China clustered into two distinct groups, isolates from other geographical regions were more diverse, particularly those of West African and Central American origin. The distinct geographical distribution of Chinese YMMV isolates suggested that germplasm exchange of Chinese yams with other groups was infrequent. In addition, phylogenetic analysis using whole-genome sequences of twelve YMMV isolates revealed four chimeric genome patterns, suggesting that recombination events are frequent among YMMV isolates [56]. This finding supports YMMV genome recombination first observed using 3′-terminal genome sequences by Bousalem et al. [34]. Nkere et al. [54] conducted phylogenetic analysis on 18 full-length CP sequences of YMMV isolates from Ghana and Nigeria, which clustered into two major groups. Sequence comparison with reference YMMV sequences in the National Centre for Biotechnology Information (NCBI) GenBank clustered these 18 YMMV isolates into four monophyletic groups as per the classification by Bousalem et al. [34].
Sensitive diagnostics able to detect the full spectrum of global YMMV diversity are imperative for safe germplasm exchange, to prevent the movement of YMMV isolates leading to unwanted recombination events that could give rise to new virulent YMMV isolates [56]. Mumford and Seal [52] described the use of a rapid single-tube IC-RT-PCR for the detection of two yam potyviruses, YMV and YMMV, using species-specific primers (Table 1). The primers were highly specific to the target virus and did not show cross-amplification between YMV and YMMV. In addition, the YMMV primers were able to amplify and detect a range of isolates, an important feature considering the high genetic diversity reported for YMMV. The test method also reported a thousand-fold increase in sensitivity compared to existing ELISA tests of the time [52]. Nkere et al. [54] assessed the YMMV status of 1530 samples from 140 locations in Nigeria and Ghana by RT-PCR using the YMMV-specific primer pair YMMV-CP-Bam and YMMV-CP-EcoRP (Table 1) and found a prevalence of 12.8%. In addition, Eni et al. [57] reported the use of a protein-A-sandwich ELISA using polyclonal antibodies for the detection of YMMV.

The Japanese yam mosaic virus (JYMV) (species Japanese yam mosaic virus) was first isolated from the Japanese yam, D. japonica, in 1974 [31]. It was initially reported as a strain of YMV based on its potyvirus-like particles but was reclassified as a new potyvirus following genomic sequence characterization [31,64]. The complete genome of JYMV is 9757 nt long, excluding the poly(A) tail, with a single ORF encoding a polyprotein of 3130 aa. The genome organization of JYMV is typical of a potyvirus. Nucleotide sequence comparison of JYMV with YMV and YMMV showed 55.4% and 53.9% identity, respectively, below the 76% species demarcation threshold for potyviruses and thus representing a new potyvirus species. Current reports indicate that JYMV is restricted to China, Japan, and Korea and that it infects several members of the genus Dioscorea [31,64,65].
A novel strain of JYMV, designated JYMV-CN, was identified from the yam species D. polystachya from Yunnan Province, China [64]. The complete sequence is 9701 nt in length, excluding the poly(A) tail, encoding a polyprotein of 3247 aa. Nucleotide sequence comparison with two Japanese isolates revealed identities of 74.7-74.8% at the whole-genome level, below the potyvirus species demarcation threshold of 76%. However, sequence analysis of the individual proteins suggested that the Chinese isolate is a divergent strain of JYMV and is in the process of speciation [64]. A partial nucleotide sequence (7736 nt) of the JYMV isolate BRI infecting D. opposita in Korea has also recently been published [65].
The virus is typically transmitted through vegetative propagation of seed tubers but may also be transmitted by the aphid vectors A. gossypii and Myzus persicae in a non-persistent manner [31,66]. Symptoms of JYMV infection are similar to those of YMV, such as mosaic, green banding of leaves, yellow stripes, and chlorosis, and can cause significant yield loss [64,67]. Fuji and Nakamae [31] reported the use of two methods, RT-PCR and double-antibody sandwich ELISA (DAS-ELISA), for the detection of JYMV. Comparison of the two detection methods showed that RT-PCR was more efficient in detecting JYMV than DAS-ELISA; in addition, DAS-ELISA was unable to detect all serotypes of JYMV. Similarly, Lee et al. [65] reported the use of an RT-PCR method for the diagnosis of a Korean JYMV isolate. Fukuta et al. [68] described a rapid and simple RT-LAMP detection method for JYMV, based on the RNA extraction of Wang et al. [69], that allowed for the direct detection of RNA from infected plants without the need for purified RNA, precise thermal cycling, or gel electrophoresis.
A rapid and low-cost detection method for JYMV based on print-capture RT-PCR was reported by Mochizuki et al. [66]. Nucleic acid is captured onto a nitrocellulose membrane from leaf sap and recovered with an elution solution, which can then be used directly for RT-nested PCR for virus detection. This method offers many advantages: it does not require toxic solvents or liquid nitrogen for nucleic acid extraction, the eluted nucleic acid can be used directly for RT-PCR without a separate purification step, and tissue-printed membranes can be stored for at least 3 months at 4 °C.
Genus Macluravirus
Members of the genus Macluravirus resemble those of the genus Potyvirus in their transmission, but the virions are slightly shorter (650-675 nm by 13-16 nm), and the genomes lack the P1 protein, encode a shorter HC-Pro, and lack the DAG motif in the CP associated with aphid transmission. The genus forms a distinct phylogenetic group within the family Potyviridae and has different consensus cleavage sites [12,70]. The virions contain one molecule of linear, positive-sense, single-stranded RNA (ssRNA) of about 8.0 kb. Three species from the genus, Chinese yam necrotic mosaic virus, Yam chlorotic mosaic virus, and Yam chlorotic necrosis virus, have been identified infecting yam and are currently restricted to China, India, Japan, and Korea [10,12,71,72].
The Chinese yam necrotic mosaic virus (CYNMV) (species Chinese yam necrotic mosaic virus) was first reported as the causal agent of necrotic yam disease in the yam cultivar Nagaimo (D. opposita) from Japan and was previously identified as a carlavirus based on its morphology and transmission by aphids [70]. However, partial sequencing of the 3′-terminal portion of the genome identified CYNMV as a distinct member of the genus Macluravirus, which was further supported by phylogenetic analysis of its CP amino acid sequences [10].
Complete nucleotide sequences of CYNMV isolates from Japan, China, and Korea yielded genomic RNAs of ca. 8200 nt in length encoding nine putative proteins with a genomic organization similar to that of potyviruses (Figure 3) [10,58,73]. However, CYNMV lacks the P1 cistron counterpart and has a short HC-Pro cistron at the 5′ end compared to potyviruses, making macluravirus genomes the smallest in the family Potyviridae (Figure 3) [10]. Sequence alignment of CYNMV isolates also indicated that the region containing the C-terminus of NIb and the N-terminus of CP shows high variability. Kondo and Fujita [10] constructed the first full-length cDNA infectious clone of CYNMV under the control of the cauliflower mosaic virus 35S promoter and the nopaline synthase terminator, which produced systemic necrotic mosaic symptoms of CYNMV in the yam variety Nagaimo. The infectious clone provides an opportunity to investigate infectivity, host range, symptom expression, and virus localization within the plant host.

Chinese yam necrotic mosaic virus can cause significant yield and quality losses, with yield losses of 30-45% being reported for CYNMV infection. It is easily transmitted by aphids in a non-persistent manner, with a host range restricted to Dioscorea spp. [10]. Based on sequence comparison of reported CYNMV isolates, Kwon et al. [58] designed the CYNMV-specific RT-PCR primer pair CYNMV-Det-FW and CYNMV-Det-Rv (Table 1) for the diagnosis of CYNMV.
The yam chlorotic necrotic mosaic virus (YCNMV), renamed to yam chlorotic mosaic virus (YCMV) (species Yam chlorotic mosaic virus) following ICTV convention, is the second yam-infecting macluravirus and was identified from the Chinese yam, D. parviflora [12,71]. It has a monopartite ssRNA of 8208 nt in length, excluding the poly(A) tail, and is currently the smallest genome reported from the family Potyviridae [71]. Its genome organization is typical of macluravirus, lacking the P1 protein, N-terminal HC-Pro, and DAG motif for aphid transmission found in potyviruses. Nucleotide sequence and polyprotein amino acid comparison showed that YCMV is most closely related to CYNMV and phylogenetic analysis grouped them into the same subgroup within the Macluravirus genus [12,71]. The biological characteristics of YCMV are still unknown.
The complete sequences of the yam chlorotic necrosis virus (YCNV) (species Yam chlorotic necrosis virus), a third member of the yam-infecting macluraviruses, were recently reported [12,72]. The first complete RNA genome, reported by Lan et al. [12], consists of 8261 nt, with a genome organization typical of macluraviruses. A second complete genome sequence, of the YCNV isolate Kerala, was reported by Filloux et al. [72] and comprises 8263 nt. A basic local alignment search tool (BLAST) search of the CP coding region against the GenBank database showed that the Chinese YCNV-YJish isolate had the highest similarity to an Indian yam-infecting macluravirus isolate, YMCTCRI-03, with 85% amino acid sequence identity, indicating that these two isolates are representatives of the same species. Maximum-likelihood phylogenetic analysis showed that YCNV is most closely related to CYNMV and YCMV [12]. The primer pair YCNF504/YCNR1269 developed by Lan et al. [12] (Table 1) was used to detect YCNV in 273 leaf samples of D. alata by RT-PCR; 72 of the 273 samples tested positive, indicating that YCNV is prevalent in Yunnan Province, China. In addition, YCNV can be mechanically transmitted to Vigna unguiculata (cowpea) and Phaseolus vulgaris (French bean) through sap inoculation [12].
Further macluravirus species may be present, as indicated by virus characterization work carried out on yam collections maintained by Guadeloupe's Biological Resource Centre for Tropical Plants (CRB-PT) and on additional samples from India and the South Pacific [74]. The samples were screened for macluraviruses using primers (Table 1) designed from an alignment of putative new macluravirus species identified from CAP3-assembled sequences of public ESTs (NCBI) and sequences of known macluraviruses. Sanger sequencing of RT-PCR amplicons revealed two novel macluravirus species in tropical yam, tentatively named Dioscorea alata macluravirus and Dioscorea esculenta macluravirus. These two species were present in samples from Nigeria, Guadeloupe, India, and some Pacific Islands (Palau, PNG, Tonga, and Vanuatu) [74].
Family Caulimoviridae
The family Caulimoviridae comprises non-enveloped, reverse-transcribing plant viruses with non-covalently closed circular double-stranded DNA (dsDNA) genomes of 6.9-9.8 kb. In two genera, Badnavirus and Tungrovirus, the genome is encapsidated by the viral CP into bacilliform-shaped virions, whereas members of the genera Caulimovirus, Cavemovirus, Petuvirus, Rosadnavirus, Solendovirus, and Soymovirus have isometric virions; no virion morphology data are yet available for the genera Dioscovirus and Vaccinivirus [75]. Only members of two genera, Badnavirus and Dioscovirus, are known to infect yam.
The genus Badnavirus is a group of plant pararetroviruses that have been reported to infect a wide range of economically important plants, such as aroids, banana, black pepper, citrus, cocoa, gooseberry, grape, ornamental spiraea, red raspberry, sugarcane, sweet potato and yam, in the tropical and temperate regions of Africa, Asia, Europe, Oceania and the Americas [76,77]. The Badnavirus genome comprises a single circular dsDNA of ca. 6.9-9.2 kb encapsulated in a non-enveloped bacilliform particle of ca. 130 nm in length and 30 nm in width [9,78]. The genome contains at least three ORFs on the positive strand (Figure 4), with each strand having a single discontinuity, giving rise to three proteins, P1, P2 and P3. The function of P1 remains to be elucidated, P2 is a virion-associated protein, whilst the polyprotein P3 contains many functional domains, notably the viral movement protein (VMP) (PF01107), coat protein (CP), retropepsin (pepsin-like aspartic protease) (AP) (CD00303), reverse transcriptase (RT) (CD01647) and RNase H1 (RH1) (CD06222) (Figure 4) [9,79,80]. Badnavirus-like particles were first reported from yam in association with a potyvirus causing internal brown spot disease in D. alata and the D. cayenensis-rotundata complex in the Caribbean in the 1970s [81,82]. Subsequently, badnaviruses in other Dioscorea spp. were detected and partially characterized from several countries of West Africa, the South Pacific, and South America, of which two were tentatively designated as species Dioscorea alata bacilliform virus (DaBV) and a serologically similar Dioscorea bulbifera bacilliform virus (DbBV) [17,28,83].
Badnaviruses are the most widespread viruses infecting yam and are collectively referred to as Dioscorea bacilliform viruses (DBVs) [17]. DBVs are transmitted vegetatively as well as by mealybugs such as Planococcus citri. Most DBV infections are symptomless; however, leaf symptoms, such as veinal chlorosis, necrosis, puckering, and crinkling, have been observed [17,28]. The first complete sequence of a DBV was obtained from a Nigerian D. alata and named Dioscorea alata bacilliform virus (DBALV) (species Dioscorea bacilliform AL virus) [28,84]. Later, the complete genome sequences of two additional isolates, representing a second species, 7261 nt in length and sharing 61.9% sequence identity with DBALV, were obtained from D. sansibarensis originating from Benin; the virus was named Dioscorea sansibarensis bacilliform virus (DBSNV) (species Dioscorea bacilliform SN virus) [85]. In the last decade, an additional six distinct genomes of DBV species have been completely sequenced, including Dioscorea bacilliform AL virus 2 (DBALV2), Dioscorea bacilliform ES virus (DBESV), Dioscorea bacilliform RT virus 1 (DBRTV1), and Dioscorea bacilliform RT virus 2 (DBRTV2).
Phylogenetic analysis of 80 partial reverse transcriptase (RT)-ribonuclease H (RNase H) coding sequences generated using the BadnaFP/RP primers [59] revealed 15 sequence groups with less than 79% nucleotide identity to each other, plus four divergent groups falling outside the badnavirus cluster and outside the ten currently recognized genera within the family Caulimoviridae [17,18,35,79,80,83,87-89]. Subsequent characterization of badnaviruses infecting Pacific yam germplasm collections revealed the presence of DBALV, DBALV2, DBESV, and DBRTV2. The most prevalent virus in the collection, DBALV, was identified from samples originating from Vanuatu and Tonga, whilst DBALV2, DBESV, and DBRTV2 were found restricted to PNG, Fiji, and Samoa, respectively [77,86]. Further, integrated viral sequences, called endogenous pararetroviruses (EPRVs), have been identified in yam and are referred to as endogenous Dioscorea bacilliform viruses (eDBVs) [35,89]. The presence of eDBVs in yam was first demonstrated in three D. rotundata samples from Guinea through hybridization studies [35]. The genomic organization of EPRVs can be complex, consisting of rearranged patterns of tandem repeats, fragmentations, inversions, and duplications of complete or partial viral genomes [35,89]. Further, the genome of an African yam of the D. cayenensis-rotundata complex has been demonstrated to host eDBVs from four distinct badnavirus species (groups K5, K8, K9, and U12) [89]. However, it remains to be determined if they are transcriptionally active and potentially infectious [44].
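The sequence groupings above rest on pairwise nucleotide identity thresholds (groups sharing less than 79% identity in the RT-RNase H region). Purely as an illustration of how such a figure is computed from two pre-aligned sequences, and not the actual pipeline of the cited studies (whose gap-handling conventions may differ), a minimal sketch follows; the example sequences are invented.

```python
def pairwise_identity(seq_a: str, seq_b: str) -> float:
    """Percent nucleotide identity between two pre-aligned sequences.

    Columns containing a gap are excluded from the denominator (one
    common convention; the cited studies may have used another).
    """
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    matches = compared = 0
    for a, b in zip(seq_a.upper(), seq_b.upper()):
        if a == "-" or b == "-":
            continue  # skip alignment gaps
        compared += 1
        matches += (a == b)
    return 100.0 * matches / compared

# Hypothetical aligned fragments, for illustration only:
print(pairwise_identity("ATGCC-ATGGA", "ATGCAGATGCA"))  # -> 80.0
# Values below the ~79% threshold used above would place two isolates
# in different badnavirus sequence groups.
```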
Diagnosis of badnaviruses and the development of tools for badnavirus detection in yam are complicated by the high genetic variability of badnaviruses [35]. The discovery of eDBVs in yam genomes further complicates the detection of yam badnaviruses [89]. The presence of diverse eDBVs in yam genomes poses serious challenges for differentiating integrated sequences from episomal virus with existing PCR-based diagnostic tools [80]. The high genetic diversity and the lack of sufficiently specific polyclonal antisera to DBV species are further constraints for immunocapture-PCR (IC-PCR) amplification and detection of episomal viruses [79,87].
Bömer et al. [79] reported the use of sequence-independent random-primed (RP) rolling circle amplification (RCA) for the amplification of episomal DBV genomes in yam. This study resulted in the identification and characterization of nine complete genomic sequences belonging to existing and previously undescribed DBV phylogenetic groups from D. alata and D. rotundata. However, the study also highlighted the disadvantages of the sequence-independent nature of RP-RCA. The use of random hexamers in RP-RCA reactions also promotes the amplification of DNA of plant origin, such as mitochondrial or chloroplast DNA [79,90,91], which precludes RP-RCA from being used as a routine diagnostic protocol. James et al. [92] and Sukal et al. [93] demonstrated that the inclusion of sequence-specific primers in RP-RCA reactions creates a bias towards the target sequence. Further, Sukal et al. [93] showed the possibility of using specific-primed RCA coupled with restriction analysis as a potential diagnostic tool for DBV. The study further described RCA combined with next-generation sequencing (NGS) as a possible method for the diagnosis and characterization of badnaviruses. Turaki et al. [88] developed a PCR-dependent denaturing gradient gel electrophoresis (PCR-DGGE) workflow for rapidly and efficiently unraveling complex mixtures of potentially episomal and endogenous badnavirus sequences. This resulted in the identification of complex 'fingerprints' made up of multiple DBV sequences. This technique can be particularly useful for badnavirus diversity studies.
The genus Dioscovirus is a newly described genus in the family Caulimoviridae and consists of a single species, Dioscorea nummularia-associated virus (DNUaV) [18]. It is a novel circular double-stranded DNA virus found infecting D. nummularia originating from Samoa. The genome of DNUaV was generated using rolling circle amplification (RCA), followed by cloning and sequencing of products linearised with the restriction endonucleases EcoRI, KpnI, SphI, and StuI. The DNUaV genome is 8139 nt in length and contains four putative ORFs. Whilst ORFs 1 and 2 did not have identifiable conserved domains, ORF 3 had conserved domains, such as CP, MP, aspartic protease, and RT/RNase H1, typical of the Caulimoviridae, and ORF 4 contained a transactivator (TAV) domain (Figure 5) [18].
A primer pair for PCR detection, amplifying a 450 bp region of the putative ORF4 sequence, was designed by Sukal et al. [18] and used to screen a collection of 173 samples obtained from the Centre for Pacific Crops and Trees (CePaCT), a regional genebank based in Fiji. Among the 173 samples screened, only two Samoan D. nummularia samples generated the expected amplicons. The DNUaV-positive plants did not show any disease symptoms and it is yet to be determined how the virus affects yam plants and the associated yield.
Family Bromoviridae, Genus Cucumovirus
The family Bromoviridae contains some of the most important plant RNA viruses, with members distributed worldwide, infecting over 10,000 plant species [94]. The genomes of this family are tri-segmented, positive-sense, and single-stranded RNAs approximately 8 kb in length [95]. RNA1 and RNA2 encode for proteins involved in virus replication, whilst RNA3 encodes for the proteins, MP and CP ( Figure 6). In some members, especially genus Cucumovirus, a fifth protein, P2b, located in RNA2 and part of the C-terminus region of the P2 protein, is involved in silencing suppression, systemic movement, and expression of symptoms ( Figure 6) [94].
Cucumber mosaic virus (CMV) (species Cucumber mosaic virus) is a member of the genus Cucumovirus. It has an isometric single-stranded, positive-sense tripartite RNA genome consisting of RNA1, RNA2, and RNA3 and two subgenomic regions, RNA4 and RNA4A [21,96]. These are translated into five proteins, designated as 1a, 2a, 2b, 3a, and CP. In particular, the CMV 2b protein functions in long-distance virus movement, systemic symptom expression, and inhibition of virus silencing and is important for disease development [96]. It has a very wide host range, affecting more than 1200 plant species belonging to 100 families, and is transmitted by mechanical inoculation of plant sap and by over 80 species of aphids in a non-persistent manner [21,97].
The first CMV infection in yam was reported as a virus disease in the yam D. trifida in Guadeloupe in the 1970s [98]. Currently, reports of yam CMV infections are restricted to West Africa infecting D. alata and D. rotundata. Field surveys conducted in yam growing regions of Benin, Ghana, and Togo reported their first record of CMV infections, albeit in low prevalence and mixed infection with other yam viruses [57,99]. On the contrary, a field survey of 591 leaf samples tested with multiplex RT-PCR revealed no positive CMV infection in yam in Cameroon [1]. Similarly, a germplasm collection of 38 D. rotundata maintained in the Ivory Coast tested negative for CMV despite previous reports of CMV infection in the Ivory Coast [25]. However, it was pointed out that the previous study was conducted on D. alata, whilst the current study was conducted on D. rotundata. Another prevalence study conducted on 396 accessions of yam from Guadeloupe for CMV also returned negative for CMV infection [19] despite previous reports of CMV infecting yam in Guadeloupe [98].
In the field, mixed infections of CMV with other viruses are common and symptoms are difficult to distinguish from those of single CMV infections [96]. Sap inoculation of yam CMV isolates induced systemic mosaic in Cucumis sativus, and systemic chlorosis, necrotic lesions, and leaf distortion in Nicotiana glutinosa [21]. Phylogenetic analysis of the nucleotide sequence of the CP gene and the 3′ non-coding region of RNA3 of a Benin CMV isolate categorized it as a subgroup 1A strain. ELISA is routinely used for the diagnosis of CMV in plants. Whilst antibodies are readily available commercially, serological differences between CMV isolates have been reported [60]. Eni et al. [100] reported the production of polyclonal antibodies against a yam isolate of CMV from Nigeria, which were able to detect CMV in infected yam leaves from Nigeria, Ghana, Togo, and Benin. Molecular detection using RT-PCR is the preferred method of detection and is generally based on generic CMV primers designed from other crops [1,60,61] (Table 1).
Mambole et al. [13] reported the complete genome sequence of a novel potexvirus, yam virus X (YVX) (species Yam virus X), genus Potexvirus, family Alphaflexiviridae, isolated from D. trifida in Guadeloupe. The YVX genome is 6158 nt in length, excluding the poly(A) tail, and encodes five ORFs. A large ORF1 encodes the RdRp, whilst ORFs 2, 3, and 4 encode the putative triple gene block proteins, TGBp1, TGBp2, and TGBp3, typical of potexvirus and function in viral movement. ORF5 encodes the CP. A BLAST search of the RdRp and CP amino acid sequences in the NCBI database and phylogenetic analysis confirmed this as a potexvirus but the maximum sequence identity was only 51.9% indicating it was a new member of the genus. Yam plants infected with YVX were symptomless except for a single plant co-infected with a potyvirus that showed mild symptoms. Mechanical transmission of viral isolates to indicator plants N. benthamiana, N. clevelandii, C. quinoa, and C. amaranticolor did not yield local or systemic infection and RT-PCR detection with the potexvirus-specific primers also yielded negative results. A prevalence study carried out by Mambole et al. [13] on 383 yam accessions from 34 countries reported 17 YVX positive samples using the potexvirus-specific degenerate primer pair Potex-2RC/Potex-5 (Table 1). Phylogenetic analysis of sequenced amplification products provided evidence for two additional and distinct groups of potexvirus sequences; group one includes sequences from D. nummularia from Vanuatu and group two includes sequences from D. bulbifera and D. rotundata from Haiti, and D. trifida and D. rotundata sequences from Guadeloupe.
The complete genome sequence of a novel putative secovirus was obtained from yam plants exhibiting mosaic symptoms in Brazil [101]. The genome is composed of two positive-sense RNA molecules of 5979 and 3809 nt in length, each with a single large ORF. RNA1-ORF1 was predicted to encode a polyprotein associated with the replication process. RNA2-ORF2 was predicted to encode the CP and MP. The virus is tentatively called Dioscorea mosaic-associated virus (DMaV) and assigned to the genus Sadwavirus, family Secoviridae. Phylogenetic analysis of the protease-polymerase (Pro-Pol) amino acid sequence of DMaV with other secovirus members indicated that it is most closely related to chocolate lily virus A (CLVA), whilst the amino acid sequence identity indicated it is a putative new member of the family Secoviridae based on ICTV species demarcation criteria.
Marais et al. [63] recently reported the complete nucleotide sequence of a novel yam virus tentatively named yam asymptomatic virus 1 (YaV1) (species Yam asymptomatic virus 1) from an asymptomatic D. alata plant collected from Vanuatu. The genome is 14,855 nt in length, encoding 10 putative ORFs with an organization similar to that of little cherry virus 2 (LChV2) of subgroup 1 of the genus Ampelovirus. Phylogenetic analysis of the HSP70 and CP amino acid sequences confirmed it to be a novel member of the genus Ampelovirus, family Closteroviridae, and distinct from another recently identified ampelovirus, air potato virus 1 (AiPoV1), infecting D. bulbifera from Florida, USA [102]. The DiosClos-F/DiosClos-R primer pair was used to screen a yam field collection of 170 accessions held in Guadeloupe (French West Indies) by the Biological Resource Center for Tropical Plants (BRC-TP). A total of 86 accessions representing different yam species were positive for YaV1, showing that YaV1 was highly prevalent, although asymptomatic, in the field collection. The infected accessions were mostly of Caribbean origin, with two accessions from Africa (Ivory Coast and Nigeria) and one from the Pacific (New Caledonia), suggesting that YaV1 may also be present in these countries. Future prevalence studies of YaV1 will be required to determine its geographical distribution [63]. Blastn analysis of the YaV1 genome sequence yielded two expressed sequence tags (ESTs) from a Nigerian D. alata plant, confirming its presence in Africa. Sanger sequencing of fifty-five selected amplification products was used for phylogenetic analysis but revealed low variability (93.3 to 100% sequence identity) amongst distinct accessions, suggesting that plant-to-plant transmission through an insect vector may be possible.
The complete genome sequences of two isolates of the tentatively named 'yam virus Y' (YVY), obtained from a collection of D. rotundata from the Natural Resources Institute (NRI, UK), IITA, and Council for Scientific and Industrial Research-Crop Research Institute (CSIR-CRI, Ghana), were sequenced using high-throughput sequencing (HTS) [20]. The genomes of YVY-Dan and YVY-Mak isolates are 7557 nt and 7584 nt in length, respectively, excluding the poly(A) tail. The genome encodes five ORFs; ORF1 encodes a large replication protein, ORF2, ORF3 and ORF4 constitute the triple gene block protein, associated with viral movement, whilst ORF5 encodes a putative CP protein. Based on ICTV demarcation criteria and phylogenetic analysis, YVY is grouped with unassigned members of the family Betaflexiviridae and most closely related to sugarcane striate mosaic-associated virus (SCSMaV) (Sugarcane striate mosaic-associated virus). A prevalence study using newly developed YVY-specific PCR primers reported 31 YVY positive samples in a collection of 55 breeding lines and landraces grown in NRI UK, IITA Nigeria, and CSIR-CRI Ghana. Among these, 23 showed mixed infection with YMV. Plants that were singly infected with YVY were generally symptomless except for one plant, whilst plants infected with YMV or mix-infection developed symptoms [20].
Yam Virus Sanitation
Plant viruses are obligate intracellular parasites that can survive only inside living cells [103]. Once a viral infection is established in a plant, it is not feasible to cure, in contrast to bacterial or fungal infections, which can be treated with antibacterial or antifungal agents [104]. Virus elimination through in vitro culture techniques has been proven successful in producing virus-free plants. Established methods include shoot-tip or meristem culture, micrografting, chemotherapy, thermotherapy, and shoot-tip cryotherapy [103,105,106].
Shin et al. [107] reported the production of YMV-free D. opposita plants by cryotherapy of shoot tips. Shoot apices were precultured for 16 h in 0.3 M sucrose, encapsulated in sodium alginate, and dehydrated for 4 h before direct immersion in liquid nitrogen.
Regenerated shoot tips were reported to be 90% YMV-free. Similarly, Ita et al. [2] reported the elimination of YMV from D. rotundata by cryotherapy of axillary buds of infected stocks. Enlarged axillary buds of infected plants were frozen in liquid nitrogen for one hour, re-warmed at 40 °C, and cultured to regenerate plants. Plantlet regeneration was reported at 76%, whilst YMV elimination was reported at 100%. Similar protocols can be adopted for other yam viruses. Hot water treatment was successfully used for the elimination of YMMV in D. alata. Single-node vine cuttings were treated at 32 and 36 °C for different time durations and virus elimination was confirmed by RT-PCR using YMMV-specific primers. Treatment at 36 °C for 30 min was reported to be the most efficient, at 90% elimination [108]. Umber et al. [19] reported the use of a combination of thermotherapy and meristem culture for the elimination of yam viruses prevalent in Guadeloupe (badnavirus, YMV, YMMV, CMV, DMaV, YaV1, potexvirus, and macluravirus). Sanitation rates were variable among the different viruses, with YaV1 showing the lowest elimination rate at 14.5% and macluravirus the highest at 100%. Among the 57 accessions subjected to the combined thermotherapy and meristem culture protocol in the study, sixteen accessions were fully sanitized.
The application of water-dissolved ozone was reported for the sanitation of potyvirus during in vitro propagation of D. cayenensis-rotundata [109]. Potyvirus-positive nodal segments were subjected to different concentrations of water-dissolved ozone under different time durations. Treatment with 1.5 ppm ozone for 10 min was most efficient and produced 63% potyvirus-free in vitro yam plants. In addition, it was reported that this treatment stimulated plant tissue growth, thus reducing the time for the establishment stage during in vitro culture [109].
Yam Virus Status in the Pacific
Efficient virus diagnostic tools and sanitation are essential for facilitating germplasm exchange. The Centre for Pacific Crops and Trees (CePaCT) of the Land Resource Division (LRD) of the Pacific Community (SPC), located in Fiji, is the premier genebank of the Pacific, supporting the safe conservation and distribution of plant genetic resources of importance to the region. CePaCT holds a unique yam collection from the Pacific region that is invaluable for mitigating biotic and abiotic stresses in production. However, this yam diversity remains unavailable for distribution because diagnostic protocols for yam viruses, particularly those endemic to the Pacific region, remain undefined.
Virus characterization work carried out in the region in the 2000s, though sporadic, suggested the existence of known and novel badnavirus and potyvirus diversity in the region [17,32,34,42,110]. This has led to further characterization of badnavirus diversity, with the identification of two novel and two known badnaviruses [77,86]. These studies also highlight the need for more characterization work in the region to delineate the diversity of viruses infecting Pacific yam collections and to develop diagnostic protocols enabling the germplasm to be tested before exchange. The diagnostic and sanitation protocols described in this review will greatly assist the development of diagnostic and prevalence studies of yam viruses in the Pacific region, to better understand their distribution and to facilitate the safe exchange of Pacific yam germplasm.
Conclusions and Perspectives
In this review, the most prevalent viruses infecting yam globally have been described for their origin, morphology, genome organization, symptoms caused, and diagnostics. As a predominantly vegetatively propagated crop, the use of clean, virus-free planting materials is the most effective method to curtail the spread of yam viruses. A major drawback, however, is the lack of formal seed systems within smallholder farming communities where globally, most yams are cultivated. The use and sharing of virus-infected planting materials by farmers promote the spread of yam viruses to new ecogeographical locations.
Yam viruses, such as YMV, YMMV, and badnaviruses, have been reported to be highly prevalent and genetically diverse. In addition, the discovery of integrated viral sequences from badnaviruses within host genomes adds a level of complexity to their diagnostics. As more yam-infecting viruses are being discovered from other virus families, the development of sensitive and specific diagnostic tools for their detection becomes paramount. This leaves the conundrum and predicament of whether a truly 'virus-free' plant can be achieved and what level of sanitation is required to meet international standards for germplasm exchange.
Serological and PCR-based methods of detection will continue to be the backbone of yam virus diagnostics. Their limitation, however, is that they can only detect known or related viruses. In addition, the high genetic diversity of virus isolates can limit their detection spectrum. The use of HTS technologies for sequencing and discovery of existing and novel yam viruses is gaining traction. Bömer et al. [44] evaluated a combined tissue culture and NGS approach for the detection of yam viruses without prior knowledge of viral sequences and successfully detected and sequenced two novel badnaviruses and one novel YMV isolate. However, the practicality of HTS for routine virus diagnostics is yet to be realized. The use of a Nanopore-based MinION approach demonstrated its ability to both reliably detect and sequence near full-length genomes of yam viruses [72], representing an important first step for future research into portable field-based diagnostics. Virus elimination through in vitro culture techniques has proven effective for producing virus-free yam plants. Natural resistance, however, might prove to be more economical and efficient in the long term. Work on genetic resistance and crop improvement has been slow in the past due to a lack of genetic and genomic tools. With the recent publication of several whole-genome sequences of Dioscorea spp. [3,111,112], and the discovery of thousands of good-quality single nucleotide polymorphisms (SNPs) [113], research in areas such as gene mapping, marker development for marker-assisted breeding, virus-host interaction, and molecular mechanisms of resistance in yam should start gaining momentum. This is the first review of global yam viruses and the information contained herein will greatly facilitate further development in yam virus diagnostics and sanitation for the safe international exchange of yams, particularly those in the Pacific and those held elsewhere in the world. | 2022-03-26T15:09:39.295Z | 2022-03-23T00:00:00.000 | {
"year": 2022,
"sha1": "9bf8fb95f24413c93d366ad8b193ec3bba715798",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1999-4915/14/4/662/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "658288a672c8d7d156f0ff78f52bd31e3d311196",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
35027182 | pes2o/s2orc | v3-fos-license | Partial purification and characterization of glutathione S-transferase from the somatic tissue of Gastrothylax crumenifer (Trematoda: Digenea)
Aim: The aim of the present study was to carry out the partial purification and biochemical characterization of glutathione S-transferase (GST) from the somatic tissue of the ruminal amphistome parasite, Gastrothylax crumenifer (Gc), infecting the Indian water buffalo (Bubalus bubalis). Materials and Methods: The crude somatic homogenate of Gc was subjected to progressive ammonium sulfate precipitation followed by size exclusion chromatography on a Sephacryl S 100-HR column. The partially purified GST was assayed spectrophotometrically, and the corresponding enzyme activity was also recorded in polyacrylamide gel. GST isolated from the amphistome parasite was also exposed to variations in the temperature and pH of the assay mixture. Results: The precipitated amphistome GST molecules showed maximum activity in the sixth elution fraction. The GST subunit appeared as a single band in reducing polyacrylamide gel electrophoresis with an apparent molecular weight of 26 kDa. The GST proteins were found to be fairly stable up to 37°C; beyond this, the activity was heavily impaired. Further, the GST obtained showed a pH optimum of 7.5. Conclusion: The present findings showed that GST from Gc can be conveniently purified using gel filtration chromatography. The purified enzyme showed maximum stability and activity at 4°C.
Introduction
Amphistomes are digenetic trematodes comprising a characteristic posteriorly located muscular acetabulum. There are more than 70 species of amphistome parasites all over the world [1], particularly in the hot and humid land stretches. They are known to parasitize a wide variety of small and large ruminants and are responsible for substantial economic loss. In India, reports on the prevalence of these flukes have been archived from all the major provinces. Gastrothylax crumenifer (Gc) is a commonly found amphistome parasite infecting the rumen of the Indian water buffalo, Bubalus bubalis, in this part of north India. In general, the adult forms of amphistome parasites manifest diminished pathogenicity, but the immature rumen flukes during their course of migration lead to severe pathological disturbances including hemorrhagic inflammation in the wall of the alimentary tract [2]. In India, several outbreaks of acute amphistomosis associated with high mortality among young sheep, goats, cattle, and buffaloes have been recorded [3-6]. In the absence of a commercial vaccine, chemotherapy is the only way to deal with these rampant and neglected epidemic infections, but reports on emerging anthelmintic resistance have necessitated the quest to find new drug targets and promising vaccine candidates.
Glutathione S-transferases (GSTs) are a diverse family of multifunctional proteins with widespread distribution among aerobic organisms. GSTs mediate the covalent addition of the tripeptide glutathione (GSH) to a structurally diverse set of electrophiles [7]. GST enzymes are involved in the active detoxification of xenobiotics by conjugation to GSH. They are also known to neutralize endogenous secondary metabolites formed during oxidative stress [8]. These enzymes occur in multiple forms and catalyze a multitude of reactions involving electrophilic functional groups [9]. GST has a principal role in inactivating a wide range of exogenous and endogenous toxic molecules by turning them into water-soluble compounds. GSTs have gained vital significance in parasites as the main detoxification system owing to the lack of cytochrome P450 (CYP450) activity [10]. It has been postulated that the GSH-mediated antioxidant system is responsible for the prolonged survival of helminth parasites in the mammalian definitive host. The occurrence of GSTs in helminth parasites protects them from the reactive oxygen species (ROS) generated during normal metabolism and by the immune effector cells of the invaded host [11,12]. The inherent capacity of helminth GSTs to neutralize cytotoxic components produced by ROS of host origin at the cell membrane strengthens the potential of GSTs as a protective tool against the host immune response. The inhibition of helminth GSTs dismantles the parasite defense against mounting oxidative stress and the immunogenic attack of the host [13,14]. These facts place them as targets for the development of vaccines or chemotherapeutic agents [15,16]. The discovery and exploration of the biochemical attributes of GSTs in helminth parasites has resulted in elaborate testing of this enzyme as a target immunoprotective antigen.
There are many reports available on such vaccination experiments [17-20]. Therefore, preliminary studies were designed and executed to purify and characterize GST from Gc, to generate baseline data which could further be exploited for immunological and chemotherapeutic assessments to check amphistome infections in domesticated ruminants.
Ethical approval
Ethical approval is not necessary to pursue this type of study.
Collection of parasites
Mature and active Gc amphistome parasites were collected from the rumen of the Indian water buffalo (B. bubalis) slaughtered at Aligarh abattoirs. The parasites were thoroughly washed in phosphate buffered saline (100 mM, pH 7.4) and then briefly rinsed in the same washing buffer containing 0.01% penicillin-streptomycin (Merck, Germany) solution to remove any possible microbial contamination.
Preparation of somatic extracts
The parasites were homogenized in a chilled mortar-pestle over ice in cold phosphate buffer solution (100 mM, pH 7.4). The sample was centrifuged at 10,000×g for 10 min in a refrigerated centrifuge (Hitachi, Japan) and the supernatants were collected as soluble protein fractions.
Protein estimation
The protein contents were estimated following the dye binding method of Bradford [21] as modified by Spector [22]. Bovine serum albumin was used as a standard.
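In practice, the Bradford estimation reduces to fitting a BSA standard curve (blank-corrected A595 versus known concentration) and interpolating unknown samples. The sketch below illustrates this with invented readings and assumes a linear dye response, which holds only at low protein loads; none of the numbers are measurements from this study.

```python
import numpy as np

# Hypothetical BSA standard curve (concentrations in ug/mL,
# blank-corrected A595 readings):
bsa_conc = np.array([0.0, 100.0, 200.0, 400.0, 600.0, 800.0])
a595 = np.array([0.00, 0.08, 0.17, 0.33, 0.50, 0.66])

# Fit A595 = slope * conc + intercept (linear range assumed).
slope, intercept = np.polyfit(bsa_conc, a595, 1)

def protein_conc(sample_a595: float) -> float:
    """Interpolate an unknown sample's protein concentration (ug/mL)."""
    return (sample_a595 - intercept) / slope

print(f"{protein_conc(0.25):.0f} ug/mL")  # roughly 300 ug/mL for A595 = 0.25
```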
Ammonium sulfate precipitation
The soluble proteins of the somatic extract of Gc were fractionated between 0-20%, 20-40%, 40-60%, 60-80%, and 80-100% ammonium sulfate saturation in a stepwise ascending salting-out procedure. After 4 h, the precipitated proteins from each of the fractions were recovered by centrifugation at 10,000×g for 30 min at 4°C in a cooling centrifuge (Remi, India).
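For reproducing such a stepwise fractionation, the amount of solid ammonium sulfate needed to move from one saturation level to the next is usually taken from a standard nomogram or the room-temperature approximation g/L ≈ 533(S2 − S1)/(100 − 0.3·S2). The constant below is the commonly quoted value for roughly 20-25 °C, not a figure from this paper; a minimal sketch:

```python
def ammonium_sulfate_grams(s1: float, s2: float, volume_l: float = 1.0) -> float:
    """Solid (NH4)2SO4 (g) to raise saturation from s1% to s2%.

    Uses the common room-temperature approximation
    g/L = 533 * (s2 - s1) / (100 - 0.3 * s2); assumed, not from the paper.
    """
    if not 0 <= s1 < s2 <= 100:
        raise ValueError("require 0 <= s1 < s2 <= 100")
    return 533.0 * (s2 - s1) / (100.0 - 0.3 * s2) * volume_l

# The fractionation steps used in this study:
for s1, s2 in [(0, 20), (20, 40), (40, 60), (60, 80), (80, 100)]:
    print(f"{s1:3d} -> {s2:3d}% saturation: "
          f"{ammonium_sulfate_grams(s1, s2):6.1f} g/L")
```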
Dialysis of samples
A small portion of the supernatants from each salting out step was saved and the precipitate was dissolved in a minimum volume of 100 mM sodium phosphate buffer (pH 7.4). The supernatant and the precipitate obtained from each salting out step were dialyzed separately thrice against 1200 ml of the same buffer overnight at 4°C to remove ammonium sulfate content from the protein samples.
Protein estimation of each fraction
The protein content of the dialyzed supernatant and precipitated samples were estimated as mentioned earlier.
Assay of GST enzyme
Following protein estimation, all the supernatant and ammonium sulfate precipitated samples of Gc were assayed spectrophotometrically to determine the activity of GST enzyme. The precipitated fractions with maximum GST specific activity were selected and processed for gel filtration chromatography.
Gel filtration chromatography
A Sephacryl S 100-HR column (Sigma-Aldrich, USA) was prepared as recommended by Peterson and Sober [23] with necessary adjustments and modifications. Pre-swollen gel suspended in ethanol was soaked in a sufficient amount of double-distilled water and washed at least thrice. The finer resin fragments were removed by suspending the gel in a two- to four-fold excess of 100 mM sodium phosphate buffer, pH 7.4, and allowing the gel to settle down. A glass column (70 cm × 2 cm) was mounted on a sturdy vertical support after introducing glass wool into its opening near the bottom end, which was fitted with rubber tubing. Following the clamping of the rubber tubing, the column was filled to one-third of its length with operating buffer to check for leaks and flush air bubbles from the dead space. The de-aerated gel slurry was poured into the column with the help of a glass rod, with care to avoid generating air bubbles. The column was left to stand overnight. The flow rate was increased gradually, and after achieving a constant flow rate (higher than that required for final elution), the column was adjusted to the required flow rate. The packed column was thoroughly washed with two bed volumes of operating buffer (100 mM sodium phosphate buffer, pH 7.4). To check uniform packing and to determine the void volume of the column, a 2% (w/v) solution of blue dextran in 100 mM sodium phosphate buffer (pH 7.4) was passed through the column. The volumes of the blue dextran and protein solutions applied were not more than 2-3% of the total bed volume. The dialyzed sample was subjected to gel filtration chromatography on the Sephacryl S-100-HR column equilibrated with 100 mM sodium phosphate buffer, pH 7.4, at 4°C. The flow rate of the column was set at 15 ml/h during the process of filtration. Fractions of 5 ml were collected and assayed for protein content and GST activity. Homogeneity of the purification was analyzed by 12% sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE).
Collection of eluents
The eluents containing protein samples were collected into 14 subsequent fractions (5 ml) and were assayed for protein content and GST activity.
GST activity in each eluent
The eluted fractions showing a detectable amount of protein were assayed to determine GST enzyme activity in a UV-visible spectrophotometer (Taurus Scientific, USA). GST activity was determined by the method of Habig et al. [24] with minor modifications. The assay was performed in a total volume of 3.0 ml of reaction mixture containing 300 µl of 1.0 mM reduced GSH, 10 µl of 1.0 mM 1-chloro-2,4-dinitrobenzene (CDNB), and 50 µl of protein sample. The remaining volume was adjusted with 0.1 M sodium phosphate buffer (pH 6.5). CDNB and GSH were dissolved in ethanol and 0.1 M sodium phosphate buffer (pH 6.5), respectively. The control assay mixture did not have any protein (enzyme) sample. The assay was carried out in at least three replicates. The change in absorbance at 340 nm was recorded for 3 min and used for determining the enzyme specific activity. GST activity is defined as the amount of enzyme that catalyzes the formation of 1.0 µmol of S-(2,4-dinitrophenyl)glutathione/min/mg protein under the standard assay conditions. The unit of enzyme activity is expressed as nmol/mg protein/min.
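Converting the recorded ΔA340/min into the specific activity defined above involves the molar extinction coefficient of the GSH-CDNB conjugate, commonly taken as 9.6 mM⁻¹ cm⁻¹ for a 1 cm light path. That coefficient comes from the general Habig method, not from this paper, and the numbers in the sketch below are illustrative only:

```python
EXT_COEFF_MM = 9.6  # mM^-1 cm^-1, GSH-CDNB conjugate (literature value, assumed)
PATH_CM = 1.0       # cuvette light path

def gst_specific_activity(delta_a340_per_min: float,
                          assay_volume_ml: float,
                          protein_mg: float) -> float:
    """Specific activity in nmol conjugate formed/min/mg protein."""
    mm_per_min = delta_a340_per_min / (EXT_COEFF_MM * PATH_CM)  # mM/min
    # (mM/min) * mL = umol/min; x1000 converts to nmol/min.
    nmol_per_min = mm_per_min * assay_volume_ml * 1000.0
    return nmol_per_min / protein_mg

# Illustrative inputs (not measured values from this study):
print(gst_specific_activity(delta_a340_per_min=0.012,
                            assay_volume_ml=3.0,
                            protein_mg=0.15))  # -> 25.0 nmol/min/mg
```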
Molecular weight determination of GST enriched fraction by SDS-PAGE
The eluted fractions showing highest activity for GST were subjected to SDS-PAGE to assess the homogeneity of purification as well as relative percentage of GST protein in the GST enriched fractions. All the samples and buffers used in this study were filtered through 0.22 μm Millipore filters to remove small particles and the gel solutions were additionally degassed before gel polymerization.
Protein loading and electrophoresis
The eluted fractions containing the highest GST activity in Gc were separately mixed with SDS sample buffer containing β-mercaptoethanol in the ratio of 1:2 (sample:sample buffer), and the mixture was then heated for 3-5 min at 95°C in a water bath. Samples were loaded into different wells, with the standard protein markers (Precision Plus Protein™ Dual Color Standard; BIO-RAD) in a separate lane. After that, the gels were placed in a Benchtop Mini-Boat gel electrophoresis assembly and running buffer was poured into the buffer tank. Electrophoresis [25] was carried out at 100 V for 60 min at RT. The gels were then carefully removed and thoroughly washed twice with distilled water before incubation in CBBR-250 solution overnight at RT. The overstained gels were destained, photographed, and analyzed.
Gel preparation
To resolve the isozyme pattern and activity of GST enzyme molecules, the polyacrylamide gels were cast between glass cassettes in an electrophoresis assembly. A 12% separating gel and stacking gel of 5% concentration were prepared.
Protein loading and electrophoresis
Samples (desired eluted fractions of Gc) were mixed with the non-reducing sample buffer in the ratio of 1:2 (sample: sample buffer) and then the mixture was held at 4°C for 2-4 h. The prepared samples were loaded and electrophoresis was done at 100 V for 90 min.
Activity of GST in native polyacrylamide gel
The activity of GST enzyme in gel was performed according to the protocol of Ricci et al. [26]. The electrophoresed gels were then incubated in a series of reagents to develop achromatic bands depicting GST isozyme.
Photography of stained gels
The stained gels were scanned on a computer-driven laser scanner for analysis and all the images were saved for further analysis as well as to maintain digital record in the laboratory.
Effect of pH on partially purified GST
Variation in the enzyme activity of partially purified GST obtained from Gc was examined at various pH values (5.0, 5.5, 6.0, 6.5, 7.0, 7.5, 8.0, 8.5, and 9.0). Purified GST enzymes (1 mg) from the amphistome parasites were incubated separately in 1 ml of 100 mM sodium phosphate buffered solution having different pH at a constant temperature of 4°C for 30 min. Following incubation, each sample was assayed for GST activity as mentioned above.
Effect of temperature on partially purified GST
The variation in the specific activity of GST from Gc was also investigated as a function of five different temperatures (4, 25, 37, 60, and 80°C). Purified GST enzyme (1 mg) was incubated separately in 1 ml of sodium phosphate buffered solution (100 mM, pH 7.4) pre-maintained at each specific temperature for 30 min. Following incubation, each sample was assayed for GST activity according to the methods mentioned earlier.
Results
The present work deals with the partial purification of GST protein from the somatic tissues of the amphistome parasite Gc. The methodology involved a two-step isolation process using tissue homogenates. The aqueous protein extracts of Gc were subjected to ammonium sulfate precipitation (40-60%). The precipitated protein fractions from the flukes were passed through a Sephacryl S-100 HR column. The different steps of the purification process of the GST proteins, their specific enzyme activity, fold purification, and percent yield are summarized in Table-1. Fractionation of the soluble proteins with ammonium sulfate decreased the total amount of protein obtained in the subsequent precipitate. The ammonium sulfate precipitation (40-60%) gave a 47.74% yield of the total activity in the crude protein, and the GST protein showed a 2.78-fold purification as compared to the crude homogenate of Gc.
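For readers reconstructing Table-1, the step metrics follow mechanically from total protein and total activity: specific activity = total activity/total protein, fold purification = specific activity relative to the crude extract, and percent yield = total activity retained. The sketch below uses invented totals deliberately chosen so that the derived fold and yield land near the values reported here (2.78-fold/47.74% and 22.27-fold/18.61%); the input masses and unit totals are not the paper's data.

```python
def purification_row(total_mg, total_units, crude_sa, crude_units):
    sa = total_units / total_mg                    # specific activity (units/mg)
    fold = sa / crude_sa                           # purification vs. crude
    yield_pct = 100.0 * total_units / crude_units  # activity retained
    return sa, fold, yield_pct

crude_mg, crude_units = 120.0, 900.0               # invented crude totals
crude_sa = crude_units / crude_mg
steps = [("crude extract", crude_mg, crude_units),
         ("(NH4)2SO4 40-60%", 20.6, 429.7),
         ("Sephacryl S-100 HR", 1.0, 167.5)]
for name, mg, units in steps:
    sa, fold, y = purification_row(mg, units, crude_sa, crude_units)
    print(f"{name:20s} SA={sa:7.2f}  fold={fold:5.2f}  yield={y:5.1f}%")
```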
Gel filtration chromatography
The ammonium sulfate precipitated proteins (40-60%) were chromatographed on a Sephacryl S-100 HR column pre-equilibrated with 100 mM sodium phosphate buffer (pH 7.4). A single peak showing the highest GST specific enzyme activity (24.677 nmol/mg protein/min) was obtained for the rumen flukes, as shown in Figure-1. The sixth eluted fraction, corresponding to the peak, was used for further analyses. It was observed that size exclusion chromatography could purify the GST protein 22.27-fold from the crude homogenate of Gc. The percent yield of GST protein after size exclusion chromatography in Gc was found to be 18.61% as compared to the corresponding crude form (Table-1).
Homogeneity of the purified GST proteins
The eluted GST proteins from Gc showed a single peak corresponding to the highest enzyme specific activity, suggesting a homogeneous soluble extract preparation (Figure-1). In addition, the preparations had minimum interference from sodium and phosphate ions, as is evident from the symmetric peaks and patterns of the eluting fractions in terms of protein content and GST specific activity. Physical evidence for homogeneity was further provided by PAGE.
Reducing SDS-PAGE
Partially purified GST proteins from Gc (eluted fraction 6) were analyzed by PAGE under reducing and non-reducing (in the absence of SDS and β-mercaptoethanol) conditions. The PAGE (12%) profile under reducing conditions showed that the GST migrated as a single prominent band in the parasite under study (Figure-2a). The specific polypeptides corresponding to GST monomeric proteins in the rumen amphistomes were observed to lie in the low molecular weight range.
Non-reducing (native) PAGE
Partially purified GST proteins from the eluted fraction number 6 for Gc (GcGST) were analyzed on native PAGE under non-reducing conditions. Subsequent staining of the gel matrix for in-gel activity of GST revealed a single achromatic band over a blue insoluble formazan dark background (Figure-2b).
Molecular weight determination
Analysis of the resolved polypeptide on a 12% SDS polyacrylamide gel in a Bio Rad Gel Documentation system revealed a single band of 26 kDa polypeptide corresponding to GST monomer (Figure-2a).
Effect of pH on partially purified GST
The effect of pH on the specific activity of partially purified GST was examined at various pH values ranging from 5.0 to 9.0 (Figure-3). The GST activity in the parasitic flukes was seen to be drastically affected at both extremes of the test range. The GST proteins were found to have considerable enzymatic activity over a pH range of 6.5-8.5 in the parasite under study. Maximum stability of the catalytic properties of the GST proteins was observed at pH 7.5 (Figure-3). A rise in pH from 7.5 to 8.0 resulted in an evident decrease in the enzyme activity. This suggests that the GST enzyme in this amphistome parasite has a specific pH optimum.
Effect of temperature on partially purified GST
Alteration in the specific activity of GST from Gc was also investigated as a function of temperature over the range from 4 to 80°C for 30 min. The GST protein remained considerably active within the temperature range of 4-37°C for this parasite species (Figure-4). However, a rise in temperature beyond 37°C resulted in a sharp decline in the GST specific activity profile (Figure-4). When the temperature was raised from 4 to 25°C, no abrupt slump in the activity of the GST proteins was recorded. This suggested that GST from Gc is appreciably robust against minor thermal fluctuations in the lower temperature range.
Discussion
GSTs exist in all living organisms and are involved in various detoxification pathways and antioxidant processes. Since helminth parasites lack P450 monooxygenases, the Phase-II biotransformations are carried out by GST isoforms, which are hence of cardinal importance [27]. GSTs perform several important functions in the body and are associated with several pathological conditions including cancer [28], rheumatoid arthritis [29], osteoporosis [30], renal dysfunction [31], cardiovascular diseases [32,33], and Alzheimer's disease [34]. GSTs have been purified and characterized from various helminth parasites including nematodes, trematodes, and cestodes such as Clonorchis sinensis [35], Setaria digitata [36], Schistosoma mansoni [37], Schistosoma japonicum [38], Fasciola hepatica [39], Necator americanus [40], and Echinococcus granulosus [41]. The present work was carried out owing to the lack of information on GSTs from a neglected group of digenetic trematodes, the amphistome parasites. To the best of our knowledge, this is the first description of the purification and properties of GST enzymes obtained from the amphistome parasite Gc. It was envisaged that a thorough study of GST from the ruminal amphistomes would shed some light toward a better understanding of GSTs of helminth origin. This will be useful in generating baseline data so that the salient enzymatic properties can be compared with other known GSTs from different parasitic fauna. In the present work, GST was purified from Gc by the method described in the Materials and Methods section. The two-step procedure involved ammonium sulfate fractionation and gel filtration chromatography. This simple isolation procedure provided a percent yield of 18.61% and a 22.27-fold purification for Gc. Purification of GST isozymes from a variety of helminth parasites has been reported using a combination of several methods including affinity chromatography, chromatofocusing, gel filtration, and ion exchange chromatography [12,14,42-44]. However, the procedure used in this study is simple and cost-effective and gives an appreciable yield and fold purification compared with the values reported in the literature for other helminth parasites [36]. The partially purified GST obtained from the amphistome parasite was found to be homogeneous on the basis of charge, as shown by native-PAGE. In SDS-PAGE under reducing conditions, GST from Gc gave a single band, consistent with a dimeric native enzyme comprising two identical subunits.
GST obtained from the amphistome parasite was partially purified on a Sephacryl S 100-HR resin matrix. Denaturing PAGE in the presence of β-mercaptoethanol showed that the native GST molecule of Gc is composed of two equal subunits of 26 kDa. Subunits of GST with comparable molecular weights (26 kDa) have also been reported in a wide variety of helminths such as S. mansoni [37], S. japonicum [45], F. hepatica [46], Taenia solium [42], and Setaria cervi [43]. In general, cytosolic GSTs have monomers of 23-28 kDa with an average of 220 amino acids in their sequences. They all share the same tertiary and quaternary dimeric structural features. The dimer may have identical subunits (homodimer) or different subunits (heterodimer) of the same class [47]. In the present study, appreciable stability of the partially purified GST from Gc was observed over a broad range of pH (6.5-8.5) as well as temperature (4-37°C). Similar observations were made with GST purified from S. digitata, which showed good enzyme activity between 0°C and 40°C, while a sharp decline in activity was observed beyond 40°C with complete loss of activity at 80°C [36]. A similar trend in GST enzyme activity, with a temperature optimum of around 40°C and loss of activity at higher temperatures, was reported earlier [42,43]. The enzyme activity of GST from Gc was seen to vary with change in pH. The optimum GST activity for this amphistome was observed at pH 7.5; a similar finding was also reported for GST purified from S. digitata [36].
In most helminth parasites, CYP450-mediated biotransformation has not been reported. Hence, the GST enzyme system plays an indispensable role in carrying out the Phase-II detoxification pathways. For this reason, purified GSTs from various helminth parasites have been screened for immune and chemotherapeutic leads and targets, to design and develop better and more reliable parasite control measures. The present study was an attempt to purify the GST enzyme from an amphistome parasite and to determine its basic biochemical attributes, which could help in the formulation of new drugs to control amphistomosis in farm animals.
Conclusion
The present study is an attempt to purify and conduct basic biochemical characterization of GST from the rumen infecting amphistome Gc using size exclusion chromatography.
Authors' Contributions
SA and AS prepared the study design and carried out the research under the supervision of MKS. SK and SHK isolated the parasites and analyzed the data. The manuscript was drafted and revised by SA and SK under the guidance of MKS. All authors read and approved the final manuscript. | 2018-04-03T04:18:50.151Z | 2014-12-21T00:00:00.000 | {
"year": 2014,
"sha1": "fd178d46492b00817e615a4493746e806a630f71",
"oa_license": "CCBY",
"oa_url": "http://www.veterinaryworld.org/Vol.10/December-2017/13.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fd178d46492b00817e615a4493746e806a630f71",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
245554536 | pes2o/s2orc | v3-fos-license | Investigation of Effect of Preliminary Annealing on Superplasticity of Ultrafine-Grained Conductor Aluminum Alloys Al-0.5%Mg-Sc
Effect of preliminary precipitation of Al3Sc particles on the superplasticity characteristics of conductor Al-0.5%Mg-X%Sc (X = 0.2, 0.3, 0.4, 0.5 wt.%) alloys with ultrafine-grained (UFG) microstructure has been studied. The precipitation of the Al3Sc particles took place during long-time annealing of the alloys at 300 °C. The preliminary annealing was shown to affect the superplasticity characteristics of the UFG Al-0.5%Mg-X%Sc alloys (the elongation to failure, yield stress, and dynamic grain growth rate) only weakly, but to promote more intensive pore formation and to reduce the volume fraction of the recrystallized microstructure in the deformed and non-deformed parts of the aluminum alloy specimens. The dynamic grain growth was shown to proceed nonuniformly in the deformed specimen material: the maximum volume fraction of the recrystallized microstructure was observed in the regions of localization of plastic deformation.
Introduction
At present, microdoped high-strength Al alloys are considered promising materials for electrical engineering, in particular for replacing copper alloys in small-sized avionics wiring [1-3]. This will allow reducing the weight of the on-board wiring of modern aircraft and increasing their load capacity, energy efficiency, etc., in the future. The conductor Al alloys should have high strength and thermal stability [1-3] as well as good plasticity at room and elevated temperatures to ensure the possibility of making small-sized bimetallic wires of 0.2-0.5 mm in diameter from workpieces by drawing or rolling.
Materials and Methods
The Al-0.5 wt.%Mg alloys with different Sc contents (0.2, 0.3, 0.4, and 0.5 wt.%Sc) were the objects of investigation. The alloy specimens, 22 × 22 × 150 mm in size, were obtained by induction casting in an INDUTHERM® VTC-200 casting machine (Indutherm GmbH, Walzbachtal, Germany) according to the procedure described in [30,38]. After casting, the alloys were not subjected to homogenization. The UFG structure in the workpieces was formed by Equal Channel Angular Pressing (ECAP) using a Ficep® HF 400 L hydraulic press (Ficep® S.P.A., Varese, Italy) with the following modes: temperature T ECAP = 225 °C, pressing rate 0.4 mm/s, number of cycles N = 4, ECAP route Bc. The warm-up time of the workpiece prior to ECAP was 10 min; the holding time of the UFG workpiece in the tooling after ECAP did not exceed 5 min.
The mechanical tension testing of flat double-blade-shaped specimens with working parts 3 mm long and 2 × 2 mm in cross-section was carried out using a Tinius Olsen H25K-S tension machine (Tinius Olsen Ltd., Surrey, UK). Testing was performed in the temperature range from 300 to 500 °C; the strain rate varied from 10⁻³ to 3.3 × 10⁻¹ s⁻¹. The holding time of the specimen in the furnace prior to the superplasticity experiments was 5 min. The uncertainty of the temperature maintenance during the superplasticity tests was ±5 °C. The temperature was measured by a thermocouple placed as close as possible to the specimen clamping area. During the experiments, the "stress (σ)-strain (ε)" curves were acquired, from which the values of the relative elongation to failure (δ) and the yield stress (σb) were determined.
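From such a load-extension record, δ and the peak flow stress are obtained by converting to engineering stress and strain using the 2 × 2 mm cross-section and 3 mm gauge length given above. The sketch below is a generic illustration with made-up readings; note that the paper labels σb the "yield stress", whereas here it is simply taken as the peak engineering stress.

```python
import numpy as np

def engineering_curve(load_n, extension_mm, area_mm2, gauge_mm):
    """Convert a load-extension record to engineering stress/strain."""
    stress = np.asarray(load_n, dtype=float) / area_mm2        # MPa
    strain = np.asarray(extension_mm, dtype=float) / gauge_mm  # dimensionless
    return stress, strain

# Hypothetical tension record for a 2 x 2 mm, 3 mm gauge specimen:
load = [0.0, 40.0, 52.0, 50.0, 30.0]   # N
ext = [0.0, 0.3, 1.5, 4.5, 6.0]        # mm
stress, strain = engineering_curve(load, ext, area_mm2=4.0, gauge_mm=3.0)
print(f"sigma_b ~ {stress.max():.1f} MPa, delta ~ {100 * strain[-1]:.0f}%")
# -> sigma_b ~ 13.0 MPa, delta ~ 200%
```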
Chemical analysis was performed using an iCAP® 6300-ICP-OES Radial View™ spectrometer with inductively coupled plasma (Thermo Scientific, Waltham, MA, USA). To study the macro- and microstructure of the alloys, a Leica® IM DRM metallographic optical microscope (Leica Microsystems GmbH, Wetzlar, Germany), a Jeol® JSM-6490 Scanning Electron Microscope (SEM), and a Jeol® JEM-2100 Transmission Electron Microscope (TEM) were used (Jeol Ltd., Tokyo, Japan). To study the macro- and microstructure, the specimen surfaces were subjected to mechanical grinding with diamond pastes to a roughness <1 µm, followed by polishing in an 8%HClO4 + 9%H2O + 10%C6H14O2 + 73%C2H5OH solution. The microstructure was revealed by etching in a glycerin-based solution (1%HF + 1.5%HCl + 2.5%HNO3 + 95% glycerine); the macrostructure, by etching in a 40%HNO3 + 40%HCl + 20%HF solution. The mean grain sizes (d) and the volume fraction of the recrystallized microstructure (fR) were determined using GoodGrains software (UNN, Nizhny Novgorod, Russia). The mean sizes of the Al grains and Al3Sc particles were determined by the chord method. The microhardness (Hv) measurements were performed using an HVS1000 hardness tester (INNOVATEST Europe BV, Maastricht, The Netherlands). The areas of the microstructure and microhardness investigations are marked by yellow dashed lines in Figure 1.

Figure 1. General view of a specimen of UFG alloy Al-0.5%Mg-0.5%Sc after the superplasticity testing (500 °C, 10⁻² s⁻¹). The areas of investigation of the grain microstructure and measurement of the microhardness in the non-deformed and deformed parts are marked as (1) and (2), respectively. In the non-deformed part, the dendrite boundaries are visible, inside which a uniform UFG microstructure was formed.
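The chord (linear intercept) method mentioned above amounts to laying test lines across a micrograph and averaging the chord lengths cut off by successive grain-boundary intersections. A minimal sketch of that averaging step follows; in practice the intersection coordinates come from image analysis, and the positions below are made up.

```python
def mean_chord_length(intersections_um):
    """Mean chord length (um) from boundary-intersection positions
    measured along a single test line across the micrograph."""
    chords = [b - a for a, b in zip(intersections_um, intersections_um[1:])]
    return sum(chords) / len(chords)

# Hypothetical intersection positions (um) along one test line:
positions = [0.0, 0.45, 1.10, 1.50, 2.20, 2.75, 3.30]
print(f"mean grain size ~ {mean_chord_length(positions):.2f} um")
# -> 0.55 um, i.e. the order of the 0.4-0.6 um sizes reported in the Results
```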
The fractographic analysis of the fractures was carried out using the Jeol® JSM-6490 SEM. The specimen fractures were classified according to the scheme described in [43].
The microstructure and microhardness measurements on the specimens after the superplasticity testing were performed in two areas: the non-deformed part of the specimen (Zone I) and the deformed part closest to the fracture site (Zone II). For these investigations, the specimens were embedded in a WEM REM mixture (Cloeren Technology GmbH, Berlin, Germany) and subjected to mechanical grinding, electrochemical polishing, and wet chemical etching according to the procedure described above. When the fracture was not at the center of the tested specimen, the part of the specimen that had been subjected to the highest tensile strain during testing was selected for the microstructure and fractographic analysis.
The annealing of the specimens was performed in an EKPS-10 air furnace (Smolensk SKTB SPU JSC, Smolensk, Russia). The uncertainty of the temperature maintenance in the furnace was ±10 °C. After annealing, the specimens were cooled in air.
Microstructure Investigation
The Al-0.5%Mg-Sc cast alloys had a dendritic coarse-grained macrostructure: columnar crystals in the rapid cool-down zone at the specimen edges and equiaxial grains in the central parts of the bulks (Figure 2). The polished area occupied by the equiaxial grains increased with increasing Sc content. In the alloy with 0.5%Sc, a uniform macrostructure was formed (Figure 2c), consisting almost completely of grains with nearly equiaxial shapes. At the sides of the Al-0.5%Mg-0.5%Sc bulk, there were large equiaxial grains ~0.5 mm in size. The residual dendrite macrostructure in the alloy with 0.5%Sc was observed in the upper part of the bulk only. The average grain sizes in the central parts of the bulks decreased from 1.0-1.[…].

The microstructure investigations revealed large, light-colored, micron-sized particles consisting of Sc and Al only in the cast alloys with 0.4 and 0.5%Sc (Figure 3). According to [4,13-17], these are probably Al3Sc particles. The primary Al3Sc particles contain Fe and Ni in their composition (Figure 3). The particles are distributed fairly uniformly inside the bulk; a slight increase of the volume fraction of the primary particles in the central parts of the bulks of the alloys with 0.4% and 0.5%Sc was observed.

In the metallographic investigations, the macrostructure of the UFG alloys after four cycles of ECAP (T_ECAP = 225 °C) comprised crossing microbands of localized plastic deformation (Figure 4a). The presence of the localized deformation bands affects the grain morphology in the areas where the bands cross (Figure 4b). After ECAP, a uniform UFG structure formed; the average grain sizes were 0.4-0.6 µm and almost did not depend on the Sc concentration (see Figure 5a,b). There were no abnormally large grains in the UFG alloys after ECAP (Figures 4b and 5a,b).

In the UFG alloys, there were few submicron Al3Sc particles (Figure 5c,d). A slight increase of the volume fraction of the primary Al3Sc particles with increasing Sc content was observed (Figure 5c,d); the particle sizes almost did not change. The parameters (sizes, quantity, positions inside the workpiece) and the composition of the particles in the cast and UFG alloys were close to each other (Figure 3). This suggests that the submicron Al3Sc particles observed in the UFG alloys form during the crystallization of the bulk. The Al3Sc particles were coherent with the Al crystal lattice: elastic strain fields were observed near the particles, while the interphase boundaries between the Al3Sc particles and the Al crystal lattice were diffuse (Figure 5e,f; see also [38]). No large elongated Al3Sc particles, whose presence would evidence an intermittent mechanism of particle nucleation (see [39-49]), were found.

According to [30,38], the recrystallization temperature for 30-min annealing of the UFG Al-0.5%Mg-Sc alloys is ~350-375 °C. The high thermal stability of the UFG structure of the alloys originates from the early nucleation of the Al3Sc particles. In [38], the dependence of the specific electrical resistivity (SER) on the annealing temperature was investigated. From the analysis of those results, the solid solution decomposition in the UFG Al-0.5%Mg-Sc alloys was shown to begin at 200-225 °C and to be almost complete after 30-min annealing at 325-375 °C.
Heating up to temperatures over 425 °C results in an increase of the SER, evidently due to the dissolution of the previously nucleated Al3Sc particles.
According to the TEM data, several types of Al3Sc particles form in the UFG Al-0.5%Mg-Sc alloys in the course of long-time annealing (up to 300 h) at 275-300 °C. In the annealed UFG alloy, coherent nanoparticles 10-30 nm in size, nucleated inside the grains and at the grain boundaries, were observed, as well as large, dash-line-like Al3Sc particles 50-200 nm in size nucleated near the grain boundaries via the discontinuous precipitation mechanism (Figure 6). It should be stressed that the mean grain sizes almost did not change during annealing, and the annealed alloys preserved the UFG microstructure (Figure 7). The electron microscopy results presented in Figure 6 support the conclusion made in [38] on the two-stage character of Al3Sc particle nucleation during annealing of the UFG Al-0.5%Mg-Sc alloys. In [38], on the basis of the dependence of the specific electrical resistivity on the annealing time, the two-stage character of the Al3Sc particle nucleation was shown to originate from the competition between grain boundary diffusion (short holding times) and diffusion along the lattice dislocation cores (long isothermal holding times and/or elevated annealing temperatures).

As noted above, the microstructure of the UFG alloys comprises sites of crossing localized strain bands (Figure 4a). The metallographic investigations have shown that the deformation "banding" formed during ECAP is reproduced clearly in the microstructure of the UFG alloys after long-time holding at 300 °C. As one can see in Figure 8, the localized strain bands were seen clearly after preliminary low-temperature annealing and electrochemical polishing. It is interesting to note that the low-temperature pre-recrystallization annealing did not affect the mean spacing between the bands, which was 20-30 µm, but revealed them more clearly than in the UFG state directly after ECAP. As mentioned above, recrystallization in the UFG Al-0.5%Mg-Sc alloys begins after heating up to ~350 °C (30 min). No deformation bands were found in the completely recrystallized alloys after annealing at 450-500 °C (see [38]); a uniform fine-grained structure was formed in the alloys. The mean sizes of the recrystallized grains (d_R) and the volume fraction of the recrystallized material (f_R) decreased with increasing Sc content. After annealing at 500 °C (30 min), increasing the Sc content from 0.2% up to 0.5% resulted in a decrease of f_R from ~100% down to ~60-70% and a decrease of d_R from ~250 µm down to ~5 µm (see [38]). No abnormally large recrystallized grains were observed in the UFG alloys after long-time annealing at 300 °C (Figure 8).
For the superplasticity tests, the specimens of the cast and UFG Al-0.5%Mg-Sc alloys were annealed at 300 °C for various times. The annealing was aimed at forming a uniform fine-grained structure with the maximum volume fraction of nucleated Al3Sc particles in the UFG alloys. The annealing time for each alloy was selected on the basis of the investigations of the solid solution decomposition presented in [38]. The annealed specimens of the cast and UFG Al-0.5%Mg-Sc alloys were then tested for superplasticity. The results of the testing were compared to the data for the non-annealed alloys presented in [30].
Cast Alloys
As an example, the tension curves σ(ε) for specimens of some cast alloys in the initial state and after preliminary annealing at 300 °C are presented in Figure 9. As one can see in Figure 9a, the tension curves σ(ε) of the coarse-grained Al-0.5%Mg-Sc alloys had the classical three-stage character typical of highly plastic alloys: a short stage of strain hardening transforming into a long stage of stable plastic flow and, finally, a stage of localized plastic deformation finishing with the fracture of the specimen. The values of the yield stress (σ_b) and of the relative elongation to failure (δ) are presented in Table 1. As one can see in Figure 9a and from Table 1, the test temperature does not affect the shapes of the σ(ε) curves considerably. The yield stress and elongation to failure decrease with increasing test temperature from 300 up to 500 °C: the values of σ_b decreased from 70 MPa down to 29-30 MPa, and δ decreased from ~62-64% down to ~43% (Table 1). In our opinion, the reduction of δ in the cast alloys is related to the blocking of lattice dislocation motion by the Al3Sc particles. At the same time, the previously nucleated Al3Sc particles partly dissolve with increasing temperature, which leads to an increase of the specific electrical resistivity of the alloys (see [38]). The decrease of the volume fraction and the coalescence of the Al3Sc particles may reduce their effect on the tensile behavior of the cast Al-0.5%Mg-Sc alloys at elevated temperatures. This leads to a slight increase of the elongation of the cast Al-0.5%Mg-Sc alloys at 500 °C (as compared to that at 450 °C).
Long-time annealing of the cast Al-0.5%Mg-Sc alloys at 300 °C resulted in changes in the shapes of the σ(ε) curves. As one can see in Figure 9b, the stable plastic flow stages in the σ(ε) curves were absent; the strain-hardening stage transformed directly into the plastic-strain-localization stage.
The results of the metallographic investigations evidenced the formation of large pores, several tens of microns in size, in the fracture region (Figure 10a). The large pores were located along the dendrite boundaries, whereas the small ones lay both along the grain boundaries and inside the grains. The sizes and the volume fraction of the pores decreased with increasing distance from the fracture site. At the testing temperature of 500 °C, the zone of intensive pore formation extended ~1.5-2 mm from the fracture region (Figure 10b). No pore formation was observed in the non-deformed region.

The results of the fractographic analysis of the fractures of the cast Al-0.5%Mg-Sc alloy specimens after the tension testing at elevated temperatures are presented in Figure 11a,c. At the macroscopic level, the specimen fractures were of the same type and comprised large shear elements, the directions of which coincided with those of the dendrite grains (see Figure 2 in [38]). At the vertices of the shear elements, pits of various geometries were observed, evidencing the ductile nature of the fracture of the cast alloys (Figure 11). Variations of the deformation temperature and rate did not affect the general fracture pattern of the cast Al-0.5%Mg-Sc alloy specimens (Figure 11a,c). From the comparison of Figure 11b,d, one can see that an increase of the deformation temperature results in an increase of the pit sizes in the fracture zone of the cast alloy specimens; after testing at elevated temperatures, the pits had strongly elongated shapes (Figure 11d).

UFG Alloys

Figure 12 presents the σ(ε) curves for the specimens of UFG alloys with different Sc contents. Table 2 presents the values of σ_b and δ for the alloys investigated at various temperatures and strain rates. The σ(ε) curves acquired at 300 and 350 °C were typical of severely deformed metals: short stages of intensive strain hardening followed by rapid softening were observed. At higher test temperatures (400-500 °C), an increase in the duration of the uniform elongation stage was observed, which reached ~200% for the UFG Al-0.5%Mg-0.2%Sc alloy at 500 °C. The degree of uniform strain decreased slightly with increasing Sc content and did not exceed 80% for the UFG Al-0.5%Mg-0.5%Sc alloy. However, a considerable overall increase in the elongation to failure was observed (see Table 1). As in the case of the cast alloys, the increase of the test temperature resulted in a decrease in the yield stress, but the plasticity of the UFG alloys increased essentially (Figure 12, Table 2). The dependencies δ(T) and δ(ε̇) were non-monotonic, with maxima, which is typical of the superplastic behavior of fine-grained alloys (see [24-29,47-49]). At the strain rate ε̇ = 10^-2 s^-1, the maximum values δ_max for the majority of the UFG Al-0.5%Mg-Sc alloys were achieved at 450 °C.
As one can see from Table 2, the preliminary annealing resulted in a decrease of the yield stress of the UFG alloys regardless of the Sc content, test temperature, and strain rate. The largest decrease of the yield stress was observed for deformation at 300 and 350 °C. For deformation in the temperature range 450-500 °C, the differences in σ_b between the non-annealed and annealed specimens did not exceed 2-3 MPa.
The effect of preliminary annealing on the plasticity of the UFG alloys had a more complex character. As one can see from Table 2, the preliminary annealing did not affect the elongation to failure considerably when testing at 300-350 °C. However, it resulted in some decrease of the plasticity of the UFG Al-0.5%Mg-Sc alloys at elevated test temperatures (450 and 500 °C) and at increased strain rates (from 10^-1 s^-1 and higher). The values of the strain rate sensitivity coefficient m = ∂ln(σ_b)/∂ln(ε̇) were calculated from the slopes of the dependencies σ_b(ε̇), which can be interpolated by straight lines in logarithmic axes with good accuracy (Figure 13a). Figure 13b presents the dependencies of the strain rate sensitivity coefficient m on the Sc concentration: test temperature 400 °C (squares) and 500 °C (circles); empty symbols, initial state [30]; full symbols, annealed state.
The metallographic investigations of the fractured specimen surfaces evidenced intensive pore formation during the superplasticity testing of the UFG alloys (Figure 14d). The largest pores form in the fracture region as well as within the localized deformation areas. Note also that the mean pore sizes in the deformed parts of the UFG alloy specimens were smaller than those in the fractured parts of the cast alloy specimens. In the metallographic studies at the same magnification, this appeared visually in several specimens as a reduced volume fraction of pores. The fractographic analysis has shown that the fractures of all UFG alloy specimens after the superplasticity testing have a ductile character. They can be described as a set of pits of various sizes (Figure 11). Variations of test temperature and strain rate did not affect the fracture character considerably.
Dynamic Grain Growth
The metallographic and electron microscopy investigations showed no essential changes in the microstructure of the cast Al-0.5%Mg-Sc alloys during the superplastic deformation. The dependence of the microhardness on the heating temperature had a two-stage character with a maximum, qualitatively similar to the dependence of the microhardness on the 30-min annealing temperature (see [30,38]). This suggests that the character of the microhardness changes in the cast alloys with increasing test temperature is determined by the nucleation and growth of the Al3Sc particles. The microhardness values for the deformed and non-deformed areas differed by no more than 50-60 MPa. The maximum values of the microhardness in the alloy with the maximum Sc content (0.5%) were 600-610 MPa, which is ~1.5 times higher than that of the cast Al-0.5%Mg-0.5%Sc alloy in the initial state (400 MPa, see [38]). However, these values appeared to be lower than the maximum microhardness values (~900 MPa, see [38]) in the cast Al-0.5%Mg-0.5%Sc alloy after annealing at 350-375 °C for 30 min. The lower microhardness of the cast specimens after the tensile testing is, in our opinion, caused by differences in the heating times: in the case of tensile testing at 400 °C at a rate of 10^-2 s^-1, the heating time was <15 min (taking into account the 10-min holding in the furnace prior to the start of testing).
Prior to describing the results of the investigations of the dynamic grain growth in the UFG alloys, it is worth noting that in the specimens deformed under conditions close to the optimal ones for superplasticity, clearly expressed plastic deformation localization areas (so-called "neckings") were observed. A typical view of a UFG alloy specimen with such deformation localization areas is presented in Figure 14. Electron microscopy and metallographic investigations have shown that the recrystallization processes differ inside and outside the areas of plastic deformation localization. The volume fraction of the recrystallized structure (f_R) inside the areas of plastic deformation localization was very high, whereas outside them f_R did not exceed 10-15%. Additionally, it is interesting to note that increased porosity was observed in the areas of plastic deformation localization in some specimens despite the quite large distance from the fracture areas (Figure 14c). It should be stressed that the effect of plastic deformation localization was also observed in the non-annealed specimens of the UFG Al-0.5%Mg-Sc alloys (see Figure 7a in [30]); however, the degree of deformation localization (the magnitude of specimen thinning) was considerably lower.
Next, we studied the microstructure parameters (the volume fraction of the recrystallized structure f_R and the mean sizes of the recrystallized grains d) and the microhardness in regions extending no more than 0.5-1 mm from the fracture points.
It should be stressed here that the preliminary annealing at 300 °C resulted in the stabilization of the microstructure of the UFG Al-0.5%Mg-Sc alloys. The volume fraction of the recrystallized structure in the non-annealed specimens exceeded 80% after the superplasticity testing at 300 and 350 °C. At elevated test temperatures (450, 500 °C), the whole deformed parts of the specimens were recrystallized almost completely. As one can see from Table 3, considerably smaller volume fractions of the recrystallized structure after the superplasticity testing were observed in the pre-annealed specimens as compared to the non-annealed ones. In the non-deformed areas of the UFG Al-0.5%Mg-0.2%Sc alloy specimens, the volume fraction of the recrystallized structure did not exceed 10% even after testing at 500 °C. The maximum volume fraction of the recrystallized microstructure (f_R ~ 80%) in the deformed part of the UFG Al-0.5%Mg-0.2%Sc alloy specimen was observed after testing at 500 °C with the strain rate 10^-2 s^-1 (Table 3).
The increase of the strain rate resulted in a decrease of the volume fraction of the recrystallized structure and of the mean grain sizes in the deformed region d_2 (Table 3). The dependence of the volume fraction of the recrystallized structure (f_R) on the heating time can be described using the Avrami equation f_R = 1 - exp[-(t/τ)^n], where n is a numerical coefficient and τ is the characteristic time of the diffusion-controlled process, which, in the first approximation, can be described by the equation τ = τ_0·exp(Q/kT). A similar equation is often used to describe the dependence of f_R on the strain degree (ε): f_R = 1 - exp(-B·ε^m), where m and B are numerical parameters. In most cases, an increase of the strain rate leads to a decrease of the tensile testing time (t) and to a decrease of the elongation to failure (δ) (see Table 2). In this connection, one can consider the dependence of the volume fraction of the recrystallized structure on the strain rate, f_R(ε̇), to be described well (in the first approximation) by the Avrami equation.

Figure 15 presents images of the recrystallized grain microstructure of the UFG Al-0.5%Mg-Sc alloys in the deformed and non-deformed regions after tension testing. As one can see in Figure 15, intensive dynamic grain growth takes place during superplastic deformation. The mean grain sizes in the deformed regions (d_2) exceeded those in the non-deformed regions (d_1). The values of d_1 and d_2 are presented in Table 3. In Figure 16, the dependencies of d_1 and d_2 on the test temperature are presented. As one can see from Table 3 and Figure 16, an increase of the mean grain size and of the volume fraction of the recrystallized microstructure with increasing test temperature was observed. The increase of the Sc content resulted in a decrease of the volume fraction of the recrystallized structure and of the mean grain sizes in both the deformed and non-deformed parts of the specimens. As one can see from Table 3, in the UFG alloys with 0.3-0.5%Sc, the volume fraction of the recrystallized microstructure in the non-deformed parts of the specimens was 1% or less. The mean grain sizes there were close to the initial ones.
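To make the Avrami description above concrete, the sketch below fits f_R = 1 - exp[-(t/τ)^n] to a set of recrystallized fractions. The times, fractions, and starting guesses are illustrative placeholders, not data from this study.

```python
# Sketch: fitting the Avrami form to measured recrystallized fractions.
import numpy as np
from scipy.optimize import curve_fit

def avrami(t, tau, n):
    return 1.0 - np.exp(-(t / tau) ** n)

t_h = np.array([1.0, 3.0, 10.0, 30.0, 100.0, 300.0])   # annealing time, h
f_R = np.array([0.05, 0.15, 0.40, 0.70, 0.92, 0.99])   # recrystallized fraction

(tau, n), _ = curve_fit(avrami, t_h, f_R, p0=(20.0, 1.0))
print(f"tau = {tau:.1f} h, n = {n:.2f}")
```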
The analysis of the dependencies of the microhardness on the test temperature presented in Figure 16 shows the microhardness in the deformed parts of the specimens to be considerably lower than in the non-deformed parts. In our opinion, this is related to the intensive dynamic grain growth in the deformed parts of the UFG alloy specimens, which makes the mean grain sizes d_2 considerably larger than those in the non-deformed parts (d_1).
The dependencies of the microhardness on the mean grain size for the deformed parts of the UFG Al-0.5%Mg-Sc alloy specimens can be described with good accuracy using the Hall-Petch equation H_v = H_v0 + K·d^(-1/2), where H_v0 is the microhardness of the crystal lattice and K is the grain boundary hardening coefficient (Hall-Petch coefficient). One can see in Figure 17 that the dependencies H_v(d) in the H_v - d^(-1/2) axes can be interpolated by straight lines with satisfactory accuracy (for the majority of the alloys, the reliability of the linear approximation is R^2 > 0.8). The magnitude of the coefficient K for the annealed alloys increased with increasing Sc concentration in the UFG alloys and was close to the values of the parameter K in the non-annealed UFG alloys (see [38]). It is interesting to note that higher values of the coefficient K were observed in the alloys with increased Sc content. The formation of second-phase (Al3Sc) particles at the grain boundaries would hinder the transmission of dislocation pile-ups across the grain boundaries and impede the operation of Frank-Read sources in the adjacent grains. To explain this effect, the model described in [50] can also be used. According to this model, dislocation loops may form around the non-coherent second-phase particles (see also [51-55]). At the same time, it is worth noting that the negative values of the coefficient H_v0 for the UFG alloys with increased Sc content (see Figure 17) were unexpected, since the nucleation of the Al3Sc particles was expected to result in an increase of H_v0 (see [50,55]). The analysis of the nature of this effect in the dynamic grain growth under superplasticity conditions will be continued in our further studies.
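The Hall-Petch fit described above reduces to a linear regression in the H_v - d^(-1/2) axes. The sketch below shows one way to recover H_v0, K, and R^2; the hardness and grain-size values are invented for illustration.

```python
# Sketch: Hall-Petch fit H_v = H_v0 + K * d**-0.5 (illustrative data only).
import numpy as np

d_um = np.array([0.6, 1.0, 2.0, 5.0, 10.0])          # mean grain size, um
H_v = np.array([950.0, 860.0, 780.0, 700.0, 660.0])  # microhardness, MPa

x = d_um ** -0.5                       # Hall-Petch abscissa, um^-1/2
K, H_v0 = np.polyfit(x, H_v, 1)        # slope K, intercept H_v0

resid = H_v - (H_v0 + K * x)           # reliability of the linear fit
R2 = 1.0 - resid.var() / H_v.var()
print(f"H_v0 = {H_v0:.0f} MPa, K = {K:.0f} MPa*um^0.5, R^2 = {R2:.2f}")
```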
Discussion
The mechanisms of superplastic deformation of the UFG Al-0.5%Mg-Sc alloys were described in [30]. It should be stressed only that the high values of the strain rate sensitivity coefficient (m = 0.40-0.47 at the test temperature of 450 °C, see Figure 13b) indicate that grain boundary sliding is the primary mechanism of the high-temperature deformation of the UFG alloys. The equiaxial shapes of the grains in the fracture zone (Figure 11) are also indirect evidence in favor of this suggestion. In the coarse-grained Al alloys, the primary mechanism of high-temperature plastic deformation is power-law creep [47,48], the strain rate of which is much lower than that of grain boundary sliding in the UFG alloys. The difference in the deformation mechanisms of the cast and UFG alloys resulted in the differences in the values of the relative elongation to failure for the cast and UFG alloys (see Tables 1 and 2). The different characters of the effect of the test temperature on the elongation of the cast and UFG Al-0.5%Mg-Sc alloys also indirectly support this suggestion.
Let us analyze the effect of preliminary annealing on the ultimate characteristics of the superplastic deformation and on the kinetics of the dynamic grain growth in the UFG Al-0.5%Mg-Sc alloys.
Note that the goal of the preliminary annealing at 300 °C was the nucleation of Al3Sc particles providing the stabilization of the nonequilibrium microstructure in the UFG Al-0.5%Mg-Sc alloys. Therefore, the preliminary annealing was expected to allow forming smaller grains in the UFG alloys, decreasing the intensity of the dynamic grain growth and, as a consequence, improving the plasticity of the alloys at elevated test temperatures. As one can see from Table 2, this effect was not achieved: the elongation to failure of the annealed UFG alloys differed only insignificantly from the magnitudes of δ for the non-annealed alloys.
In our opinion, there are at least two reasons why the increased plasticity of the UFG Al-0.5%Mg-Sc alloys was not achieved.
The main reason is that the ultimate elongation to failure in the UFG Al-0.5%Mg-Sc alloys is likely controlled by pore formation at the large Al3Sc particles (see [30] as well as the Introduction). The preliminary annealing leads to the nucleation of Al3Sc particles, in particular to the formation of large, elongated particles near the grain boundaries via the discontinuous precipitation mechanism (Figure 6). These large particles are the sites of formation and growth of pores and, as a consequence, promote the cavitation fracture of the UFG Al-0.5%Mg-Sc alloy specimens. According to the model of [41,42], for the initiation of a micropore, the power of the disclination loop forming at an Al3Sc particle during the superplastic deformation should reach a critical value ω*. The magnitude of the critical power of a disclination loop can be calculated easily by equating the disclination energy to the free-surface energy of a pore or a crack of a given size. Accordingly, one can expect the disclination loops forming at the large Al3Sc particles to reach the critical value ω* faster. Thus, the nucleation of the large Al3Sc particles in the course of preliminary annealing creates the conditions limiting the maximum plasticity of the UFG alloys. To ensure higher ultimate characteristics of superplasticity, one should minimize the volume fraction of the Al3Sc particles forming via the discontinuous precipitation mechanism.
The second factor that prevents improved superplastic characteristics of the UFG alloys is the specific character of the dynamic grain growth in the annealed UFG Al-0.5%Mg-Sc alloys. As shown above, the annealed UFG alloys exhibit an increased tendency toward plastic deformation localization at the macroscopic level (Figures 1 and 14).
The origin (or origins) of the effect of the preliminary annealing on the character of the plastic deformation localization in the UFG Al-0.5%Mg-Sc alloys is not clear at the moment, and additional investigations are necessary. One possible origin is the presence of the residual dendrite macrostructure, which manifested itself in the metallographic investigations of the non-deformed parts of the specimens (Figure 1). The nucleation of Al3Sc particles at the dendrite boundaries, hindering the motion of dislocations, may, in our opinion, promote the enhanced tendency of the Al-0.5%Mg-Sc alloys toward plastic deformation localization. We suppose that the disappearance of the uniform plastic flow stage in the annealed cast Al-0.5%Mg-Sc alloy specimens is also indirect evidence in favor of this assumption. Note also that the characteristic distance between the macro-neckings of the plastic deformation localization (Figure 14) was close to that between the dendrite boundaries, which was 0.3-1 mm (Figure 1). In our opinion, this also evidences an important role of the dendrite macrostructure in the manifestation of the plastic deformation localization in the UFG Al-0.5%Mg-Sc alloys. In order to increase the uniformity of the plastic flow of the UFG Al-0.5%Mg-Sc alloys at the macroscopic level, it is necessary to apply preliminary hot deformation processing technologies that remove the dendritic macrostructural nonuniformity completely.
As shown above, the macrolocalization of the plastic deformation leads to nonuniform recrystallization inside the specimens. The presence of regions with different grain sizes in the structure of the material and, as a consequence, with different values of hardness (Figure 18) suppresses the possibility of uniform plastic flow of the material. This is a negative factor, which should be taken into account when selecting the optimal regimes for fabricating small-sized wires by hot deformation methods (drawing, rolling, extrusion, etc.).

Let us analyze the kinetics of the dynamic grain growth during the superplastic deformation of the UFG Al-0.5%Mg-Sc alloys. To describe the dynamic grain growth, we will use the approach developed earlier within the framework of the theory of structural superplasticity [56,57] and of the theory of nonequilibrium grain boundaries in fine-grained metals [58]. Within this approach, the grain growth rate in UFG materials is governed by the defects at the grain boundaries, which generate long-range internal stress fields σ_i. According to [56,58], the interactions of the defects distributed inside the grain boundary with the external stress field (σ) and the internal stress field (σ_i) give rise to additional driving forces for the grain boundary migration. The internal stress field σ_i generated by the defects distributed inside the grain boundaries and in their triple junctions depends on the shear modulus G, the stationary density ρ_b^st of the orientation mismatch dislocations (OMDs) in the nonequilibrium grain boundaries of the UFG metal, the Burgers vector Δb of the OMDs, and the numerical coefficients α_1 and α_2 (the explicit expressions are given in [56,58]).
Besides, the defects affect the diffusion mobility M of the grain boundaries. At high power of the disclination dipoles, these can limit the mobility [58]. The effective mobility is determined by the contributions M_b, the mobility coefficient of a defect-free grain boundary; M_ρ, the mobility coefficient of the OMDs distributed inside the grain boundary; and M_ω, the mobility coefficient of the junction disclinations [58]. The values of the contributions M_b, M_ρ, and M_ω can be calculated using the formulas of [58], which involve the experimentally measured grain growth rate during annealing, the grain boundary width δ = 2b, the Burgers vector b, the grain boundary diffusion coefficient D_b, the Boltzmann constant k, and the grain boundary energy γ_b.
The grain growth rate V_m can be related to the effective migration mobility M and to the driving force P by the usual relation V_m = M·P [58]. At low power of the junction disclinations, the mobility of the grain boundaries is determined by the mobility of the OMDs, and the driving force is related to the interaction of the OMDs with the external stress field [56,58]. In the case of high ω, the mobility of the grain boundaries in the superplasticity regime is governed by the mobility of the disclination dipoles M_ω, and the driving force is related to the interaction of the disclination dipoles with each other [56,57]. In the intermediate case, the dynamic grain growth rate can be written in a more general form (the explicit expression, with the prefactor A ≅ C_b·b·(σ/G)^(2-x), is given in [56,57]), where the exponent x takes values from 1 to 2 depending on the junction disclination power ω; at low ω, x = 1. The efficiency of the theoretical models described in [56-58] has been demonstrated earlier in describing the dynamic grain growth in the non-annealed UFG Al-0.5%Mg-Sc alloys (see [30]).
According to [47,56,57], the experimentally measured dependence of the grain growth rate on the strain rate is usually expressed in the form ḋ ∝ ε̇^k, where the parameter k can be determined from the slope of the lg(ḋ) - lg(ε̇) curve at fixed values of the strain degree. As one can see in Figure 19, the measured magnitude of the coefficient k_exp for the UFG Al-0.5%Mg-Sc alloys at 400 °C and 500 °C varied from ~0.9 to ~1.2. The values of k_exp agree well with those for the non-annealed UFG Al-0.5%Mg-Sc alloys (see Figure 16 in [30]). Comparing the values of k_exp with the theoretical values k_th presented in [30] shows that the kinetics of the dynamic grain growth in the annealed UFG alloys is governed by the mobility of the OMDs. (Figure 19 caption: Dependence of the dynamic grain growth rate on the strain rate in logarithmic axes. UFG alloys with 0.2%Sc (circles) and 0.3%Sc (squares). Test temperature 400 °C (empty markers) and 500 °C (full markers).)
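Determining k_exp amounts to a linear fit in logarithmic axes. A minimal sketch follows; the strain rates and grain growth rates are placeholders, not measurements from this work.

```python
# Sketch: k_exp as the slope of lg(d_dot) vs lg(eps_dot) at fixed strain.
import numpy as np

eps_dot = np.array([1e-3, 3.3e-3, 1e-2, 3.3e-2, 1e-1])    # strain rate, 1/s
d_dot   = np.array([2e-4, 7e-4, 2.1e-3, 6.5e-3, 2.2e-2])  # growth rate, um/s

k_exp, _ = np.polyfit(np.log10(eps_dot), np.log10(d_dot), 1)
print(f"k_exp = {k_exp:.2f}")   # values ~0.9-1.2 were found in this work
```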
Conclusions

1. The superplasticity of the cast and ultrafine-grained (UFG) Al-0.5%Mg-Sc alloys with Sc contents from 0.2 to 0.5 wt.% has been studied. The cast structure in the alloys was formed by induction casting without subsequent homogenization. The UFG structure was formed by ECAP. The stabilization of the nonequilibrium UFG structure was provided by preliminary annealing at 300 °C, a temperature not exceeding the recrystallization temperature of the investigated alloys. In the course of the preliminary annealing, Al3Sc particles of two types nucleated: coherent Al3Sc nanoparticles inside the grains, and relatively large (50-200 nm) elongated fan-shaped Al3Sc particles formed via the discontinuous decay mechanism.
2. The UFG alloys have good superplastic characteristics: in the annealed UFG Al-0.5%Mg-0.5%Sc alloy, the relative elongation to failure reached 900% (test temperature 500 °C, strain rate 3.3 × 10^-2 s^-1). The magnitude of the strain rate sensitivity coefficient m was 0.4-0.47. At reduced test temperatures (300-350 °C), not exceeding the recrystallization temperature, the elongation to failure in the annealed UFG alloys varied from 170% to 320%.
3. The values of the elongation to failure for the annealed UFG Al-0.5%Mg-Sc alloys are comparable to those for the non-annealed alloys tested under the same temperature and strain rate conditions. The close values of the elongation in the UFG alloys with different grain sizes are likely caused by the following factors: (a) The formation of pores at large Al3Sc particles forming via the discontinuous decay mechanism during the preliminary low-temperature annealing. The generation and growth of the pores at the large Al3Sc particles leads to accelerated cavitation fracture of the UFG Al-0.5%Mg-Sc alloys.
(b) Nonuniformity of the plastic deformation at the macroscopic level and the formation of macro-neckings of localized plastic deformation. The low-temperature annealing leads to an increase of the macro-localization scale during the superplastic deformation of the UFG Al-0.5%Mg-Sc alloys. (c) Accelerated dynamic grain growth, the kinetics of which is determined by the mobility of the orientation mismatch dislocations in the nonequilibrium grain boundaries of the UFG alloys. | 2021-12-30T16:08:47.239Z | 2021-12-27T00:00:00.000 | {
"year": 2021,
"sha1": "e7e4332980f3a96d15a749b712abec5eef389a43",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "fa0f3edb06f83714a55788727f406d79772c996b",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
243115245 | pes2o/s2orc | v3-fos-license | Radiolabeling efficiency and stability study on Lutetium-177 labeled bombesin peptide
Bombesin is a 14-amino-acid peptide with the ability to specifically bind gastrin-releasing peptide receptors (GRPRs), which are over-expressed in many types of cancer cells. Therefore, bombesin analogs have been complexed with radionuclides and reported as radiopharmaceuticals for cancer diagnosis and therapy. Lutetium-177 (Lu-177) is a beta-emitting radionuclide that decays with a half-life of 6.65 days. The medium beta energy and the relatively long half-life of Lu-177 make it one of the ideal radionuclides for targeted radionuclide therapy. As the oxidation state of this radioisotope is 3+, it requires multidentate chelators such as DOTA to form a stable complex. In this work, the commercially available conjugated peptide DOTA-[Pro1,Tyr4]-bombesin was labeled with Lu-177 for preliminary formulation as a therapeutic radiopharmaceutical. The aim was to evaluate the radiolabeling efficiency using various amounts of the peptide and the stability in human serum over 7 days. The radiolabeling was performed in sterile water for injection with 5 mCi of Lu-177, adjusted to pH 5.5 to 6.0 with 0.5 M sodium acetate, and incubated at 100°C for 30 min. It was found that the radiochemical yield was more than 99% when using 20 µg of the peptide, and the complex was stable for a week. Moreover, human serum was used to simulate in vivo conditions. The results showed high complex stability, with more than 98% remaining intact after 7 days.
Introduction
Peptide-based radiopharmaceuticals have been successfully used for the localization and staging of diseases by molecular imaging techniques for over two decades [1]. The higher density of peptide receptors on tumor cells than in normal tissues benefits peptide-based radionuclide targeting. The radiolabeled peptides specifically bind the receptors, accumulate in the tumors and reveal themselves as hot spots on images [2]. Moreover, the unbound radiolabeled peptide is rapidly cleared from the blood pool and non-target tissues, resulting in a high target-to-background ratio. Therefore, superior-quality images can be constructed, and tumors can be identified.
Bombesin is a tetradecapeptide initially isolated from frog skin by Erspamer and coworkers [3]. It binds bombesin receptors, which are often expressed in several tumors such as breast, ovarian, prostate, lung, colon and skin tumors [4]. It has been reported that bombesin analogs can be radiolabeled with both beta- and gamma-emitting radionuclides [5].
For radionuclide therapy, 177Lu has been of interest due to its suitable energies (beta energy 498 keV and gamma energy 208 keV) and half-life (6.65 days), as well as its ease of production in high yield. 177Lu can be produced in a nuclear reactor by direct irradiation of enriched 176Lu or indirect irradiation of 176Yb. It has been developed into radiopharmaceuticals by complexation with peptide-conjugated multidentate chelators such as DTPA and DOTA.
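For orientation, the decay law implied by the 6.65-day half-life is A(t) = A_0·2^(-t/T_half). The sketch below evaluates it for an assumed 5 mCi starting activity; the numbers are illustrative only.

```python
# Sketch: remaining 177Lu activity from the 6.65-day half-life.
T_HALF_DAYS = 6.65

def activity(a0_mci, t_days):
    return a0_mci * 2.0 ** (-t_days / T_HALF_DAYS)

for day in (0, 1, 3, 7):
    print(f"day {day}: {activity(5.0, day):.2f} mCi")  # 5 mCi assumed at labeling
```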
In this study, the labeling conditions of DOTA-[Pro1,Tyr4]-bombesin with 177Lu are reported. The stability of the labeled peptide has been investigated, and its stability in human serum has been evaluated so as to predict its degradation behavior under biological conditions.
Chemicals and quality control technique
DOTA-[Pro1,Tyr4]-bombesin as a TFA salt was purchased from ABX. Lu-177, as a 177Lu-lutetium chloride solution in 0.05 M HCl, was purchased from IDB Holland BV. All other chemicals and materials were of analytical grade. Thin layer chromatography was used for the chemical quality control of the labeled compound. Radiochemical purity was determined on Instant Thin Layer Chromatography silica-gel-impregnated glass fiber strips (ITLC-SG) using a mixture of NH4OH/EtOH/H2O (2:10:20) as the mobile phase. In this system, the labeled compound moves with the mobile phase to the solvent front, while free 177Lu and impurities remain at the origin [5]. The chromatograms were analyzed by a radio-ITLC scanner equipped with a NaI detector. All experiments were performed in triplicate.
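As an illustration of this quality-control step, the sketch below integrates a simulated radio-ITLC-SG trace and reports the fraction of counts at the solvent front. The trace shape and the Rf cut at 0.5 are our own assumptions, not parameters from this study.

```python
# Sketch: radiochemical purity from a (simulated) radio-ITLC-SG trace.
import numpy as np

rf = np.linspace(0.0, 1.0, 101)           # position along the strip
# baseline counts near the origin plus a labeled-compound peak at the front
counts = np.where(rf < 0.5, 50.0, 0.0) \
         + 5000.0 * np.exp(-((rf - 0.9) / 0.05) ** 2)

labeled = counts[rf >= 0.5].sum()         # solvent-front region
total = counts.sum()
print(f"radiochemical purity = {100.0 * labeled / total:.1f} %")
```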
Investigation of labeling conditions
Labeling conditions were investigated with various amounts of DOTA-[Pro1,Tyr4]-bombesin peptide (1 to 30 µg). To the solution of 1 µg/µL peptide in water, 10 µL of 10% (w/v) ascorbic acid as a stabilizer and 5 mCi of 177Lu were added; the pH was then adjusted to 5.5 to 6.0 with 0.5 M sodium acetate in a total volume of 300 µL. The reaction mixture was incubated at 100 °C for 30 min and then cooled to room temperature. The radiolabeling efficiency was evaluated by radio-ITLC-SG. The optimal condition was identified and used for further study.
Serum stability.
The serum stability test was performed in human serum. To 50 µL of the labeled compound, prepared under the same labeling conditions as in section 2.3.1, 450 µL of human serum was added, and the mixture was incubated at 37 °C. Radio-ITLC-SG was used to determine the % remaining intact at 1, 3, 6, 24, 48, 72, 96, 120, 144 and 168 h.
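A minimal sketch of the bookkeeping for this test is given below: the percent remaining intact is the ratio of solvent-front counts to total strip counts at each time point. All count values are invented placeholders.

```python
# Sketch: tabulating % intact from ITLC counts of serum-incubated aliquots.
time_h = [1, 3, 6, 24, 48, 72, 96, 120, 144, 168]
intact = [9950, 9940, 9930, 9900, 9880, 9860, 9840, 9830, 9820, 9810]  # cpm, front
total  = [10000] * len(time_h)                                          # cpm, strip

for t, i, n in zip(time_h, intact, total):
    print(f"{t:4d} h: {100.0 * i / n:5.1f} % intact")
```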
Radio-ITLC-SG chromatogram
The analysis of the labeled compound was conducted on the basis that the labeled compound moves with the mobile phase to the solvent front, whilst the unreacted 177Lu and other impurities remain at the baseline, as shown in figure 1. Other mobile phases have been reported, such as 0.1 M citrate/citric acid buffer at pH 5.0 to determine free 177Lu [7], and 10% (w/v) ammonium hydroxide solution : methanol (1:1) to determine colloidal impurities [8]. However, the method used in this experiment is recommended as it can separate all impurities from the labeled compound in one system.
DOTA-[Pro1,Tyr4]-bombesin labeled with 177Lu
The labeling of DOTA-[Pro1,Tyr4]-bombesin with 177Lu was performed without purification. The labeling yield was greater than 99% when using no less than 20 µg of the peptide, as shown in figure 2. Therefore, this amount of the peptide was used for labeling with 177Lu in the stability studies. The stability of 177Lu-labeled DOTA-[Pro1,Tyr4]-bombesin was investigated over a period of 7 days. The labeled compound was kept at room temperature and analyzed by radio-ITLC-SG as described above. The results revealed high stability of the 177Lu-labeled peptide, with more than 95% remaining intact after a week, see figure 3. It has also been reported that the stability of the labeled compound was checked while stored at 2-8 °C [9]; however, the results showed no significant differences.

3.3.2. Serum stability. The serum stability experiment was performed in order to forecast the in vivo stability of the labeled compound. After incubation in human serum, the 177Lu-labeled compound was measured at several time intervals, as shown in figure 4. Although metabolic degradation was observed as the % intact decreased, the remaining labeled compound was higher than 95% over 7 days, which demonstrates a high stability.
Conclusion
This study presented the optimal radiolabeling conditions for a bombesin peptide with a therapeutic radionuclide. The labeling of DOTA-[Pro1,Tyr4]-bombesin with 177Lu was successful, and the product was stable for 7 days. The in vitro stability study showed very promising data. For future work, other in vitro studies such as cell binding and cytotoxicity assays would be considered before moving on to biodistribution studies in animals. | 2019-12-19T09:15:46.803Z | 2019-11-01T00:00:00.000 | {
"year": 2019,
"sha1": "3b1e45c26c162c4be272fa0d03a2e17625dd3d98",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1380/1/012020",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "bbcc912dfe928237b078111e98f997bbe6448460",
"s2fieldsofstudy": [
"Medicine",
"Chemistry"
],
"extfieldsofstudy": [
"Physics"
]
} |
118358348 | pes2o/s2orc | v3-fos-license | Separating intrinsic alignment and galaxy-galaxy lensing
The coherent physical alignment of galaxies is an important systematic for gravitational lensing studies as well as a probe of the physical mechanisms involved in galaxy formation and evolution. We develop a formalism for treating this intrinsic alignment (IA) in the context of galaxy-galaxy lensing and present an improved method for measuring IA contamination, which can arise when sources physically associated with the lens are placed behind the lens due to photometric redshift scatter. We apply the technique to recent Sloan Digital Sky Survey (SDSS) measurements of Luminous Red Galaxy lenses and typical (L*) source galaxies with photometric redshifts selected from the SDSS imaging data. Compared to previous measurements, this method has the advantage of being fully self-consistent in its treatment of the IA and lensing signals, solving for the two simultaneously. We find an IA signal consistent with zero, placing tight constraints on both the magnitude of the IA effect and its potential contamination of the lensing signal. While these constraints depend on source selection and redshift quality, the method can be applied to any measurement that uses photometric redshifts. We obtain a model-independent upper limit of roughly 10% IA contamination for projected separations of approximately 0.1-100 Mpc/h. With more stringent photo-z cuts and reasonable assumptions about the physics of intrinsic alignments, this upper limit is reduced to 1-2%. These limits are well below the statistical error of the current lensing measurements. Our results suggest that IA will not present intractable challenges to the next generation of galaxy-galaxy lensing experiments, and the methods presented here should continue to aid in our understanding of alignment processes and in the removal of IA from the lensing signal.
Introduction
Observations of gravitational lensing, the deflection of light by matter between the source and observer, have become an important and widely-used tool in cosmology and astrophysics (for a review, see [1-4]). Lensing measurements are equally sensitive to all types of matter, making them a particularly useful probe of dark matter and theories of gravitation [5]. Galaxy-galaxy lensing measures the gravitational influence of massive foreground ("lens") objects on the measured shapes of background ("source") galaxies. In the weak lensing regime, such effects must be studied statistically, since the lensing deflections are typically factors of several tens to hundreds smaller than the intrinsic galaxy ellipticities. Correlating the observed source shapes with lens positions measures the galaxy-mass cross-correlation, which can in turn be used to examine the density profile of halos and probe the standard ΛCDM paradigm as well as theories of modified gravity [6,7]. Combining such lensing results with clustering measurements of similar lenses may provide an especially robust probe of density fluctuations [8].
We apply this method to LRG lenses and typical photometric galaxy lenses in the Sloan Digital Sky Survey (SDSS). The main distinctions between this study and [13] are the lower redshift range for this work, and the fact that [13] directly measured the intrinsic alignments of LRGs whereas we use the galaxy-galaxy lensing signal to measure the intrinsic alignments of typical source galaxies that dominate lensing measurements. Using this technique with upcoming large imaging surveys will allow us to probe IA across a range of scales and as a function of both lens and source properties. A potentially valuable application of this method is in the galaxy-galaxy lensing mass determination of galaxy clusters. The use of clusters to constrain cosmological parameters relies on understanding and removal of effects which could bias the inferred masses. Better characterization of alignment in different galaxy populations will improve our understanding of systematic errors for weak lensing studies in general, and will provide a probe of the complex astrophysical processes involved in galaxy formation and evolution.
The paper is organized as follows. Section 2 describes the observations involved in galaxy-galaxy lensing and the data utilized in this study. Section 3 contains a brief overview of the formalism of galaxy-galaxy weak lensing, including IA. Section 4 discusses our method of extracting IA from the lensing signal, and section 5 shows our results. We conclude with a discussion of the implications of our work in section 6. An appendix includes a more technical discussion of issues relevant to our measurement method. Calculating the lensing signal and calibration factors requires a cosmological model, although this assumption has a minimal impact on the IA results. Where relevant, we adopt a flat ΛCDM universe with Ω_m = 0.25 and present results in units with h = 1.
Data
For measurement of galaxy-galaxy lensing and of intrinsic alignments, we use data from the Sloan Digital Sky Survey. A requirement for this study is to have reasonably accurate redshift information for the galaxies that we use as lenses (tracing the density field) and those that we use to measure shapes (tracing either the intrinsic alignments or the lensing shears, depending on the line-of-sight position with respect to the lens). In addition, we must measure galaxy shapes at sufficient resolution. In this section, we describe the data used for the lens and shape samples.
The SDSS [37] imaged roughly π steradians of the sky and spectroscopically followed-up approximately one million of the detected galaxies [38][39][40]. The imaging was carried out by drift-scanning the sky in photometric conditions [41,42], in five bands (ugriz) [43,44] using a specially-designed wide-field camera [45]. These imaging data were used to create the source catalog used in this paper. We also use SDSS spectroscopy for the lens galaxies. All of the data were processed by completely automated pipelines that detect and measure photometric properties of objects, and astrometrically calibrate the data [46][47][48]. The SDSS I/II imaging surveys were completed with a seventh data release [DR7,49], though this work also relies on an improved data reduction pipeline that was part of the eighth data release, from SDSS III [50]; and an improved photometric calibration ['ubercalibration', 51].
Lens sample
The lens sample considered here consists of LRG lenses from DR7 as selected by [52]. To match cuts in the shape sample on imaging and PSF quality, positions with respect to bright stars, and extinction, we remove 8% of the area, yielding a final area of 7,131 deg^2. The lens sample consists of 62,081 galaxies with spectroscopic redshifts between 0.16 ≤ z < 0.36 and g-band absolute magnitudes M_g in the range [-23.2, -21.2], with a roughly constant comoving density of 10^-4 (h^-1 Mpc)^-3. For all calculations, we include a weight for each lens designed to mitigate the effects of fiber collisions, completeness, and large-scale structure fluctuations [52].
Shape sample
The sample of galaxies with shape measurements, used in this paper to study both the gravitational shear and the intrinsic alignment field, is described in [53]. This catalog is based on the SDSS DR8 photometric data, processed using re-Gaussianization [54] to correct for the effect of the point-spread function (PSF). It contains ~30 million galaxies (1.2 arcmin^-2) with r-band apparent magnitude r < 21.8. There are also cuts on the image quality, quality of the PSF estimation, galaxy size compared to the PSF, and Galactic extinction. The photometric redshifts assigned to each galaxy based on the five-band photometry are from the Zurich Extragalactic Bayesian Redshift Analyzer [ZEBRA, 55]. The photo-z uncertainty is σ_z/(1+z) ~ 0.11, due primarily to the low S/N for the majority of the photometric galaxies. The impact of this uncertainty on lensing measurements with this catalog is characterized by [29]. For this analysis, we use the entire source sample, as well as "red" and "blue" color sub-samples, where the split is done using the ZEBRA template type (see appendix A for more details). For comparison with other IA results, note that the typical luminosity of these sources in the redshift range of interest is ≈ L*, making them comparable to the SDSS L4 sample. Where relevant, we refer to the SDSS luminosity classification scheme described in [56], based on r-band absolute magnitude: the labels {L1, L2, L3, L4} correspond to M_r in the ranges [-17, -16], [-18, -17], [-19, -18], and [-20, -19].
Lensing formalism
As light travels from a distant galaxy, the presence of intervening matter alters its path, acting as a "gravitational lens" and altering the observed shape of the galaxy. This gravitational shear is determined by the projected mass density along the line-of-sight. In this section we summarize the relevant aspects of lensing formalism and develop a consistent treatment of intrinsic alignment. For a more detailed treatment of lensing, see, e.g., [1] or [53,57] for systematic issues.
Galaxy-galaxy lensing
Observed galaxy shapes are a combination of the intrinsic shape and the gravitational shear. The observed ellipticity of an object, e, is related to the observed shear by the responsivity factor R, which measures the average response of an ensemble of source shapes to a given shear: ⟨γ⟩ = ⟨e⟩/2R [58]. In the weak lensing limit, one can write the shear as a sum of gravitational (G) and intrinsic (I) contributions: γ = γ^G + γ^I. For an individual object, the intrinsic shape dominates, although if the intrinsic components of galaxy shapes are not correlated with each other, they will average to zero and simply contribute a "shape noise" to the measurement. Thus, it is often assumed that the observed shear components provide an unbiased estimator of the true gravitational shear: over a large number of sources, ⟨γ̃⟩ = ⟨γ^G⟩, where a tilde denotes an observed quantity. In the case of galaxy-galaxy lensing, the lensing of background ("source") galaxies is dominated by a single massive foreground ("lens") object, which imparts a tangential shear to the observed shapes, yielding a correlation with the lens position. We define the lens surface density contrast, ∆Σ, as the difference between the average (projected) surface mass density within radius r_p and the surface mass density at r_p: ∆Σ(r_p) = Σ̄(< r_p) − Σ(r_p). The tangential gravitational shear, γ_t^G, is related to the surface density contrast:

γ_t^G(r_p) = ∆Σ(r_p) / Σ_c,    (3.1)

where Σ_c is the critical surface density, expressed in comoving coordinates as

Σ_c = c^2 D_s / [4 π G (1 + z_l)^2 D_l D_ls].    (3.2)

The angular diameter distances D_l, D_s, and D_ls are between the observer and lens, observer and source, and lens and source, respectively, while z_l is the lens redshift.
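As an illustration of eq. (3.2), the sketch below evaluates the comoving critical surface density for a single lens-source pair with astropy. The cosmology matches the Ω_m = 0.25, h = 1 convention adopted above; the redshifts are arbitrary examples, and this is our own sketch rather than the pipeline used in this work.

```python
# Sketch of eq. (3.2): comoving critical surface density for one pair.
import numpy as np
import astropy.units as u
from astropy.constants import c, G
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=100.0, Om0=0.25)  # H0 = 100 km/s/Mpc, i.e. h = 1 units

def sigma_crit_comoving(z_l, z_s):
    D_l = cosmo.angular_diameter_distance(z_l)
    D_s = cosmo.angular_diameter_distance(z_s)
    D_ls = cosmo.angular_diameter_distance_z1z2(z_l, z_s)
    sig = c**2 / (4.0 * np.pi * G) * D_s / ((1.0 + z_l)**2 * D_l * D_ls)
    return sig.to(u.Msun / u.pc**2)

print(sigma_crit_comoving(0.25, 0.50))  # example lens/source redshifts
```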
If the intrinsic shapes of nearby galaxies are correlated with each other, an intrinsic alignment shear contribution, which we denote γ^IA, will be present:

γ̃_t = γ_t^G + γ_t^IA = ∆Σ/Σ_c + γ_t^IA.    (3.3)

For notational simplicity, in the remainder of this work we drop the "t" subscript; all shears are assumed to refer to the tangential component. A galaxy-galaxy lensing measurement consists of averaging over many lens-source pairs to estimate the average lens density contrast:

∆Σ̃(r_p) = B(r_p) [∑_lens w̃_j γ̃_j Σ̃_c,j] / [∑_lens w̃_j],    (3.4)

where w̃_j is the weight given to each lens-source pair, and B(r_p) is the boost factor, discussed below, which accounts for sources that are physically associated with the lens. In the summation expressions, "lens" denotes a sum over all real lens-source pairs with projected separation r_p in the desired bin, while "rand" denotes a sum over random lens-source pairs (where the random lenses are distributed with the same angular and radial selection function as the real lenses). The optimal weight [28] for each pair is a combination of the geometric factor Σ̃_c,j, the shape noise from the variance in the source ensemble (e_rms), and the individual object measurement noise (σ_e,j):

w̃_j = [Σ̃_c,j^2 (e_rms^2 + σ_e,j^2)]^(-1).    (3.5)
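The optimal weight of eq. (3.5) is straightforward to evaluate per pair; a sketch follows. The numerical values, including the per-component shape noise e_rms, are placeholders rather than the values used in this work.

```python
# Sketch of eq. (3.5): optimal weight for one lens-source pair.
def pair_weight(sigma_crit_est, sigma_e, e_rms=0.36):
    """sigma_crit_est: Sigma_c estimated from the source photo-z;
    sigma_e: per-object measurement noise; e_rms: ensemble shape noise.
    All values here are illustrative."""
    return 1.0 / (sigma_crit_est**2 * (e_rms**2 + sigma_e**2))

print(pair_weight(sigma_crit_est=4000.0, sigma_e=0.25))
```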
3.2 Accounting for physically associated galaxies
When looking along the line-of-sight at or near a lens galaxy, the measured number density of sources, as a function of redshift, is given by the sum of two components: a smooth background dn/dz, determined by observational parameters and selection cuts, and a sharp peak located at z_l due to galaxies that are physically associated with the lens and thus strongly clustered with it. The boost factor, B, measures the contribution of this excess peak in the photometrically defined source sample. Since these excess galaxies are not lensed, they will dilute the measurement unless accounted for explicitly, as done in eq. 3.4. The boost is observationally determined by comparing the weighted number of lens-source pairs for a given source sample with the number of random-source pairs for the same sample:

B(r_p) = [Σ_lens w̃_j] / [Σ_rand w̃_j].   (3.6)

With accurate redshift information, it would be easy to exclude physically associated sources, and the boost would approach 1. The measured boosts for the samples used in this study are shown in figure 1. The quantity (B − 1)/B gives the fraction of galaxies in the source sample that are physically associated with the lens. Note that we are assuming that the number of physically associated galaxies is the same as the number of "excess" galaxies due to clustering. This assumption need not hold, a subtlety we will explore below.
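A minimal sketch of the boost measurement (eq. 3.6), again with hypothetical array names: w_lens and w_rand are the per-pair weights for real and random lens-source pairs in the same r_p bin.

```python
import numpy as np

def boost_factor(w_lens, w_rand):
    # Eq. 3.6: ratio of summed pair weights around real lenses to those
    # around random points with the same selection function.
    return np.sum(w_lens) / np.sum(w_rand)

def associated_fraction(B):
    # (B - 1)/B: fraction of the weighted source sample that is physically
    # associated with the lens (under the excess = associated assumption).
    return (B - 1.0) / B
```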
3.3 Correcting for photometric redshift bias
If IA contributes no signal, γ_j^IA = 0, and the expression in eq. 3.4 can be rearranged:

∆Σ̃(r_p) = (1 + b_z) ∆Σ(r_p),   (3.7)

where we have defined b_z, the bias in the lensing signal due to the uncertainty in photometric redshift, and used the fact that excess galaxies are at the lens redshift and thus have Σ_{c,j}^{-1} = 0. Because the lensing efficiency, expressed by Σ_c, is a nonlinear function of distance, even an unbiased scatter in redshift can result in a biased lensing measurement, and a correction for this photometric bias must be applied when photometric redshifts are used. The value of b_z is a function of the lens redshift, but for the purposes of this study, we only need the average value ⟨b_z⟩, determined by integrating over the lens redshift distribution (see, e.g., eq. 22 of [29]). Combined with the definition of the boost, we can express b_z as a sum over sources around random points rather than lenses:

1 + b_z = [Σ_rand w̃_j Σ̃_{c,j} Σ_{c,j}^{-1}] / [Σ_rand w̃_j],   (3.8)

allowing the use of calibration sets from fixed areas of the sky unassociated with the lenses. To calculate b_z, we follow the methods of [29], utilizing the same calibration sets. These calibration sets consist of galaxies with both spectroscopic and photometric redshifts, allowing the determination of both Σ̃_{c,j} and Σ_{c,j}. Object selection in these calibration samples matches that for the source samples considered here.

Figure 1. Measured boost factors for the source samples used in this study. As expected, boosts are larger for red galaxies (which cluster more strongly) and for samples defined with photo-z's closer to the lens redshift. In all figures, error bars indicate 68% confidence regions and small horizontal offsets are added for clarity.
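The sketch below illustrates how b_z can be evaluated from a calibration sample following eq. 3.8. It is a simplified stand-in for the procedure of [29]: the flat ΛCDM parameters, shape-noise values, and the single fixed lens redshift are illustrative assumptions, not values taken from the text.

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def chi(z, Om=0.25, H0=70.0, n=2048):
    # Comoving distance in flat LCDM (Om, H0 are illustrative values).
    zz = np.linspace(0.0, z, n)
    integrand = 1.0 / np.sqrt(Om * (1.0 + zz) ** 3 + (1.0 - Om))
    return (C_KMS / H0) * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(zz))

def sigma_c(z_l, z_s):
    # Eq. 3.2 up to the constant c^2/(4 pi G); infinite (no lensing) for
    # sources not behind the lens.
    if z_s <= z_l:
        return np.inf
    D_l = chi(z_l) / (1.0 + z_l)
    D_s = chi(z_s) / (1.0 + z_s)
    D_ls = (chi(z_s) - chi(z_l)) / (1.0 + z_s)  # valid in a flat universe
    return D_s / ((1.0 + z_l) ** 2 * D_l * D_ls)

def one_plus_bz(z_l, z_spec, z_phot, e_rms=0.36, sigma_e=0.2):
    # Eq. 3.8 at a single lens redshift, using a calibration sample with both
    # spectroscopic (true) and photometric (estimated) redshifts per object.
    sc_true = np.array([sigma_c(z_l, z) for z in z_spec])
    sc_est = np.array([sigma_c(z_l, z) for z in z_phot])
    in_sample = np.isfinite(sc_est)  # only sources placed behind the lens
    w = 1.0 / (sc_est[in_sample] ** 2 * (e_rms ** 2 + sigma_e ** 2))
    ratio = sc_est[in_sample] / sc_true[in_sample]  # -> 0 when true Sigma_c is infinite
    return np.sum(w * ratio) / np.sum(w)
```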
With these definitions, we can rearrange eq. 3.4 to write the true ∆Σ in terms of the observed ∆Σ̃ and the IA contamination (no longer assumed to be zero):

∆Σ(r_p) = c_z [ ∆Σ̃(r_p) − B(r_p) (Σ_lens w̃_j γ_j^IA Σ̃_{c,j}) / (Σ_lens w̃_j) ],   (3.9)

where c_z ≡ (b_z + 1)^{-1}. In the limiting case of no IA, physically associated galaxies contribute nothing to the measured signal, and the boost acts to cancel this "dilution."

4 Methodology for measuring intrinsic alignment
4.1 Solving for IA
Galaxies that are physically associated with the lens but mistakenly placed behind the lens due to scatter in photometric redshift can lead to IA contamination. At greater line-of-sight separation from the lens, the number of associated galaxies should decrease, assuming a well-behaved photo-z distribution. We exploit this fact to separate IA from the lensing signal. Our IA method remains valid in the presence of catastrophic photo-z failures; however, the rate and distribution of these failures must be understood using a representative redshift calibration sample. In principle, catastrophic failures could induce a non-physical correlation between IA and observed line-of-sight separation, which will bias results if not taken into account. For these SDSS measurements, the failure rate is both low and well understood and can be safely ignored.
With a minor simplification to the second term in brackets in eq. 3.9, we can explicitly solve for the IA contamination. As discussed above, lens-source pairs can be divided statistically into "random" and "excess" pairs:

Σ_lens w̃_j γ_j^IA Σ̃_{c,j} = Σ_rand w̃_j γ_j^IA Σ̃_{c,j} + Σ_ex w̃_j γ_j^IA Σ̃_{c,j}.   (4.1)

We take the sum over "random" pairs to be zero, since non-associated pairs should not be aligned with the lens, and it is assumed that the number of excess pairs is a good approximation for the number of physically associated ones. On small scales, where clustering is strong, this assumption is a good one. On larger scales, this assumption may break down, a possibility we address below. We further simplify the expression by replacing the values of γ_j^IA and Σ̃_{c,j} for each excess pair with the product of their averages over excess pairs, γ̃^IA(r_p) and Σ̃_c^ex(r_p). Eq. 3.9 can then be rewritten:

∆Σ(r_p) = c_z [ ∆Σ̃(r_p) − (B(r_p) − 1) γ̃^IA(r_p) Σ̃_c^ex(r_p) ],   (4.2)

where Σ̃_c^ex(r_p) can be directly measured:

Σ̃_c^ex(r_p) = [Σ_lens w̃_j Σ̃_{c,j} − Σ_rand w̃_j Σ̃_{c,j}] / [Σ_lens w̃_j − Σ_rand w̃_j].   (4.3)

This simplification of the IA sum is reasonable because the large photometric uncertainty effectively removes any correlation between the observed Σ̃_c and γ^IA for each physically associated object. In other words, the intrinsic alignment shear should depend on the line-of-sight separation between lens and source, but our photo-z errors are far larger than the scales relevant for IA variation.
We consider two groups of lens-source pairs, denoted a and b, defined in terms of the separation between lens (located at z_l) and source (with photometric redshift z_p). We take sample a to consist of all pairs with z_l < z_p < z_l + ∆z, while sample b has z_p > z_l + ∆z, where ∆z = 0.17. Other splitting schemes are feasible, and this technique can be generalized to involve more than two source subsamples, although given the limitations in the statistical power of current measurements, using two subsamples is preferred. The potential advantage of using additional subsamples is discussed in section 6. We assume here that the lens redshift, z_l, is determined precisely with spectroscopic measurements, but it is straightforward to include uncertainty in z_l. We also refer to the src and assoc samples, defined by z_p > z_l (i.e. the combined a and b samples) and |z_p − z_l| < σ_z, respectively, where σ_z is the uncertainty in each galaxy photo-z. Samples a and b probe the average surface density profile of the same set of lenses, and thus measurements of both come from the same underlying ∆Σ. However, because these samples have different source redshift distributions with respect to the lens positions, they will have different levels of IA contamination: sample b consists of lens-source pairs with larger z_p − z_l and should thus have a lower fraction of physically associated galaxies. We further assume that the physically associated galaxies in both samples have the same average value of γ̃^IA and that the two have different levels of IA contamination only due to the different numbers of physically associated galaxies they contain. We discuss this assumption in appendix A. However, as shown in section 5.2, potential violations of it produce a sub-dominant bias given current levels of measurement uncertainty, and splitting the source sample by color should largely remove this bias. There are now only two unknown quantities in the lensing measurement: the true ∆Σ and the level of IA contamination per associated object, γ̃^IA. With two sets of lens-source pairs, we are able to solve for both γ̃^IA and the true ∆Σ with the IA contamination removed:

∆Σ = c_{z,a} [∆Σ̃_a − (B_a − 1) γ̃^IA Σ̃_{c,a}^ex] = c_{z,b} [∆Σ̃_b − (B_b − 1) γ̃^IA Σ̃_{c,b}^ex],   (4.4)

γ̃^IA = [c_{z,a} ∆Σ̃_a − c_{z,b} ∆Σ̃_b] / [c_{z,a} (B_a − 1) Σ̃_{c,a}^ex − c_{z,b} (B_b − 1) Σ̃_{c,b}^ex],   (4.5)

where we have suppressed explicit dependence on r_p. From eq. 4.3 and the measured γ̃^IA, we can calculate the IA contribution to ∆Σ, which we denote ∆Σ^IA,

∆Σ_s^IA(r_p) = c_{z,s} (B_s(r_p) − 1) γ̃^IA(r_p) Σ̃_{c,s}^ex(r_p),   (4.6)

and thus a fractional contamination to the observed lensing signal for a given source sample s:

∆Σ_s^IA(r_p) / ∆Σ(r_p).   (4.7)
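The two-sample solution of eqs. 4.4-4.5 amounts to a small linear system per r_p bin. A minimal sketch follows; the argument names are ours, and the inputs are assumed to be numpy arrays over r_p bins.

```python
import numpy as np

def solve_ia(dSt_a, dSt_b, cz_a, cz_b, B_a, B_b, scex_a, scex_b):
    # Eqs. 4.4-4.5: each sample obeys
    #   DeltaSigma = cz_s * (dSt_s - (B_s - 1) * gamma_IA * scex_s),
    # so subtracting the two conditions isolates gamma_IA.
    denom = cz_a * (B_a - 1.0) * scex_a - cz_b * (B_b - 1.0) * scex_b
    gamma_ia = (cz_a * dSt_a - cz_b * dSt_b) / denom
    delta_sigma = cz_a * (dSt_a - (B_a - 1.0) * gamma_ia * scex_a)
    return gamma_ia, delta_sigma
```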
The weighting applied here for lens-source pairs is the standard scheme used for galaxy-galaxy lensing because it corresponds to weighting by the noise in ∆Σ. These are not the optimal weights for measuring the IA signal, since they give less weight to close pairs, which are more likely to be physically associated. However, we need to solve for both γ̃^IA and ∆Σ. One can solve for the optimal weights to measure each of these quantities, assuming the other is known, and then proceed to solve for both iteratively. However, since the uncertainty in our signal is dominated by the ∆Σ measurement, the ideal weighting scheme should approach the standard lensing weights. To test a weighting scheme that does not favor pairs at large line-of-sight separation, we repeated our measurement using pure shape-noise weighting (removing the factor of Σ̃_c^{-2}) and found a significantly higher uncertainty in γ̃^IA. However, as the precision of future measurements improves, it may be beneficial to use weights optimized for IA measurement with an iterative scheme.
Note that the general technique applied here, comparing the lensing signal observed in two or more samples with different lens-source separations, is similar to a shear-ratio test to probe the geometry of the universe [59]. For the source and lens redshift distributions considered here, we find that varying the cosmological parameters across a reasonable range contributes a highly sub-dominant signal: our choice of cosmology has a negligible impact on IA results. However, for future studies with improved precision, the effects of assuming an incorrect cosmology should be considered. Conversely, if not properly treated, IA has the potential to contaminate shear-ratio or similar measurements and bias the resulting constraints on cosmological parameters.
4.2 Estimating and reducing uncertainties
To estimate the uncertainties in this measurement, we construct 1000 bootstrap realizations of all measured quantities by random sampling, with replacement, from 100 contiguous regions of the survey footprint. The number of bootstrap regions and realizations is chosen to ensure that the errors are a reasonably smooth function of scale. The quantities γ̃^IA and ∆Σ^IA/∆Σ are then calculated for each realization, allowing us to directly determine confidence intervals without assuming a Gaussian distribution. Unless otherwise stated, 68% confidence intervals are shown. We also use these bootstrap realizations to determine model-dependent confidence intervals for IA. See section 5 for further discussion.
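Schematically, the bootstrap proceeds as below. This sketch resamples per-region estimates and averages them, which is a simplification: the actual analysis re-accumulates the weighted pair sums for each realization before forming γ̃^IA and ∆Σ^IA/∆Σ.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_ci(region_stats, n_boot=1000, level=68.0):
    # region_stats: array of shape (n_regions, n_rp_bins).
    # Resample regions with replacement, recompute the signal per realization,
    # and take percentile confidence intervals (no Gaussianity assumed).
    n_reg = region_stats.shape[0]
    draws = np.empty((n_boot,) + region_stats.shape[1:])
    for i in range(n_boot):
        idx = rng.integers(0, n_reg, size=n_reg)
        draws[i] = region_stats[idx].mean(axis=0)
    lo, hi = np.percentile(draws, [50.0 - level / 2.0, 50.0 + level / 2.0], axis=0)
    return draws, lo, hi
```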
On large scales, where B → 1, the fractional error in B − 1 can become quite large. For source subsamples closer to the lens, where the boosts are larger, this fractional error is smaller. If the excess galaxies in different samples have the same composition, the boosts in those samples arise from the same underlying clustering. The ratio of B − 1 between these samples should then be constant as a function of scale, with the ratio reflecting the photo-z scatter into each sample. As discussed in appendix A, such homogeneity between subsamples is present for red galaxies, allowing an improvement in the boost uncertainties. We measure the ratio between B − 1 for different samples on smaller scales (r_p = 0.2-4.8 h^{-1} Mpc), where the fractional errors are small. We then take the boosts for the background samples (a and b) to be the rescaled boosts from the assoc sample, thus obtaining smaller fractional errors. However, as shown in fig. 3, this provides only a modest improvement to the overall uncertainty in γ̃^IA.
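A sketch of this "extended boosts" rescaling under the stated constant-ratio assumption; the fit range matches the scales quoted above, while the function name is ours.

```python
import numpy as np

def extended_boosts(B_assoc, B_target, rp, fit_range=(0.2, 4.8)):
    # Fit the (assumed scale-independent) ratio of B - 1 between a background
    # sample and the well-measured assoc sample on small scales, then apply
    # the rescaled assoc boosts at all separations.
    m = (rp >= fit_range[0]) & (rp <= fit_range[1])
    ratio = np.mean((B_target[m] - 1.0) / (B_assoc[m] - 1.0))
    return 1.0 + ratio * (B_assoc - 1.0)
```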
Furthermore, there is a systematic uncertainty in the determination of B of roughly 3%, due to large-scale structure fluctuations and low-level variations in the lens density with observing conditions, which are difficult to model and are not faithfully reproduced in the random catalogs (see Mandelbaum et al. 2012, in prep., for details related to this lens catalog, and [60] for a related example in SDSS). On scales above ∼10 h^{-1} Mpc, B − 1 is only a few percent and thus cannot be meaningfully measured. On scales of ∼1-10 h^{-1} Mpc, this systematic uncertainty dominates the errors, but the boost values are large enough for accurate measurement. However, this systematic uncertainty leads to some bootstrap realizations having a negative number of "excess" lens-source pairs, which is non-physical at these relatively small transverse separations. This artifact could impact the measured uncertainty in γ̃^IA and the fractional contamination from IA. To test the impact of this uncertainty, we repeat our calculations using the median boost value rather than that for each individual bootstrap realization, effectively removing the systematic uncertainty, and find little effect on the quantities of interest. We thus conclude that the boost systematic does not significantly influence results on these scales.
4.3 Identifying physically associated galaxies as "excess" galaxies

Ideally, eq. 4.1 would be split into sums over physically associated pairs (which can have IA) and non-associated pairs. However, we are not able to directly measure the fraction of physically associated galaxies in a given sample. Instead, the boost factors determine the fraction of "excess" pairs. This excess is similarly described by the two-point cross-correlation between lens and source objects, denoted ξ_ls. At small separations, where ξ_ls ≫ 1, essentially all associated galaxies are also excess galaxies. On larger scales, however, ξ_ls ≲ 1: the number of galaxies that would be present near the lens given a random distribution becomes comparable to, and eventually larger than, the number of excess galaxies present due to clustering. This complication does not reduce the applicability of the method: the physical mechanism of IA does not distinguish between an object that is excess above random and one that is not. As long as the ratio of excess galaxies to total physically associated galaxies is the same between the two samples, we may simply attribute the entire IA signal to the excess object fraction. The photometric redshift scatter should be, on average, the same for all physically associated galaxies, and thus this ratio should be fixed at a given r_p. While this approach allows for accurate IA extraction, it can also lead to seemingly counterintuitive results. For instance, even though alignment should be weaker at larger scales, the value of γ̃^IA may increase above a certain scale, as the number of excess galaxies drops more quickly than the total IA signal (see, e.g., [61]). The integral constraint requires that the average clustering across all scales be zero, while no such constraint exists for the shear correlation signal. This effect should be considered whenever IA is examined in the context of clustering.
4.4 Combining correlations and photometric redshift uncertainty
For effective comparison with both theoretical models and previous IA measurements, we wish to express the observables of interest in terms of underlying cosmological quantities. The theoretical values of both the boost factors and γ̃^IA can be written as the convolution of the photometric redshift scatter with the lens-source and the lens-IA cross-correlation functions, ξ_ls = ⟨δ_l δ_s⟩ and ξ_{l+} = ⟨δ_l γ_+⟩. Here, γ_+ = −γ_t, matching the standard convention used for shear correlation functions.
Consider lenses at z_l and sources at true redshift z_s, with photometric redshift z_p. For lens and source redshift distributions p_l(z_l) and p_s(z_s), we can calculate the boost for an individual lens at z_l:

B(r_p, z_l) = 1 + [∫ dΠ p_s(z_s(Π)) P̃(z_s(Π), z_l) ξ_ls(r_p, Π)] / [∫ dz_s p_s(z_s) P̃(z_s, z_l)],   (4.8)

where Π is the line-of-sight separation between the lens and source, and z_s(Π) is the source redshift corresponding to that separation. P̃(z_s, z_l) is the weighted probability of an object located at z_s being placed in the source sample due to its assigned z_p. This quantity reflects the photometric redshift scatter into the source sample and will depend on the survey properties, the sources under consideration, and the definition of the sample:

P̃(z_s, z_l) = ∫_{z_min}^{z_max} dz_p w̃(z_l, z_p) P(z_p|z_s),   (4.9)

for a source sample with z_min < z_p < z_max, the boundaries of which may be defined relative to the lens redshift. P(z_p|z_s) is the probability that an object with true redshift z_s is photometrically assigned z_p. Here we have assumed that P(z_p|z_s) does not change as a function of distance from the lens due to varying average source properties or observational effects. The total boost factor is found by integrating over the lens distribution p_l(z_l):

B(r_p) = 1 + [∫ dz_l p_l(z_l) ∫ dΠ p_s(z_s(Π)) P̃(z_s(Π), z_l) ξ_ls(r_p, Π)] / [∫ dz_l p_l(z_l) ∫ dz_s p_s(z_s) P̃(z_s, z_l)].   (4.10)

In an analogous fashion, γ̃^IA can be written in terms of ξ_{l+} and ξ_ls:

γ̃^IA(r_p) = − [∫ dz_l p_l(z_l) ∫ dΠ p_s P̃ ξ_{l+}(r_p, Π)] / [∫ dz_l p_l(z_l) ∫ dΠ p_s P̃ ξ_ls(r_p, Π)] ≈ − w_{l+}(r_p)/w_{ls}(r_p),   (4.11)

where the minus sign accounts for the convention that positive "+" shear corresponds to negative tangential shear, and w_X is the projected correlation function corresponding to ξ_X. The quantity ξ_{l+}(r)/(1 + ξ_ls(r)) gives the IA contribution to γ_t per lens-source pair (both "excess" and "random") at r. Different IA models provide predictions for ξ_{l+}, which can be tested against observation. As discussed in sec. 5.1, eq. 4.11 also allows for comparison between the results in this work and previous IA studies.
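For intuition, the narrow-kernel limit of eqs. 4.10-4.11 reduces to line-of-sight projections of the correlation functions. The toy sketch below projects power-law correlation functions numerically; the amplitudes and slopes are illustrative choices, not measured values, and p_s(Π) is taken as uniform.

```python
import numpy as np

def w_proj(rp, xi, Pi_max=60.0, n=4001):
    # w(r_p) = \int dPi xi(sqrt(r_p^2 + Pi^2)), with uniform weighting in Pi.
    Pi = np.linspace(-Pi_max, Pi_max, n)
    f = xi(np.sqrt(rp**2 + Pi**2))
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(Pi))

# Toy power laws for the lens-source and lens-IA correlations.
xi_ls = lambda r: (r / 5.0) ** (-1.8)
xi_lp = lambda r: 0.02 * (r / 5.0) ** (-1.2)

rp = 1.0  # h^-1 Mpc
gamma_ia_model = -w_proj(rp, xi_lp) / w_proj(rp, xi_ls)  # eq. 4.11 limit
```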
5 Results
We now present the results of the IA measurement technique developed above. We define the a and b samples using ∆z = 0.17 to yield a roughly equal number of lens-source pairs in the two samples and to minimize measurement uncertainty. See appendix B for a discussion of this split. Figure 2 shows measurements of ∆Σ for the a, b, and entire src samples. These measurements include the photo-z calibration correction factor c_z for each sample and thus would provide an unbiased estimate of the true ∆Σ in the absence of IA. Any statistically significant deviation between measurements for the a and b samples would represent either an error in calibration or the presence of IA. It is apparent that the measurements are consistent within the confidence intervals (68%), so we do not expect a statistically significant detection of IA. Instead, we seek to place tight constraints on the potential IA contribution. We probe projected separations of r_p = 0.9-10.1 h^{-1} Mpc. On larger scales, the systematic uncertainties in the measured boosts, discussed above, greatly degrade the power of our method. On smaller scales, lensing magnification, non-weak shear, and sky systematics in the SDSS software pipeline limit measurement accuracy [62].
5.1 Previous IA measurement methods
The work of [32] attempts to constrain IA by applying similar source photo-z cuts and then assuming that the background sample is effectively free of IA contamination. With this assumption, it is straightforward to estimate the lensing signal in the sample more closely associated with the lens and solve for the IA in that sample. For a background sample, b, with no IA:

∆Σ = c_{z,b} ∆Σ̃_b.   (5.1)

Thus IA in an associated sample, a, can be expressed as

γ̃_a^IA = [c_{z,a} ∆Σ̃_a − c_{z,b} ∆Σ̃_b] / [c_{z,a} (B_a − 1) Σ̃_{c,a}^ex].   (5.2)

Results using this technique are shown below in figure 3. Note, however, that direct comparison with [32] is difficult. In that work, shear measurements are done directly (rather than using the quantity ∆Σ), and the lensing weights and photo-z bias factor are not taken into account. Furthermore, that work uses flux-limited SDSS Main spectroscopic sample galaxies [40] as lenses, rather than LRGs, which will have different IA and clustering properties.
Other studies (e.g. [10,11,13]) directly measure IA using w_g+ of low-redshift SDSS Main spectroscopic sample galaxies as well as higher redshift LRGs (which display a much stronger signal). The use of spectroscopic redshifts greatly reduces the contamination from lensing and the dilution from including non-associated pairs. Some relevant results are shown for comparison. It is necessary to convert such measurements to the quantity relevant for galaxy-galaxy lensing using eq. 4.11. This conversion is non-trivial, and thus any comparison is only approximate. Similarly, [63] directly measures γ̃^IA for a low-redshift sample of satellites around massive lens galaxies. They find a significant signal below r_p = 0.1 h^{-1} Mpc, smaller than the scales considered here. On larger scales, their results are consistent with zero IA. The authors of [14] constrain IA for higher-z blue galaxies (z ∼ 0.6) using shapes from SDSS and spectra from the WiggleZ survey [64]. Their results are consistent with the limits we find here. Figure 3 shows the results of our IA measurement technique across a range of projected separations. For comparison, results are shown with and without the "extended boosts" technique discussed in section 4.2. Using extended boosts lowers the statistical uncertainty at large scales, although the difference is not significant. Also shown are the results assuming no IA in the background source sample (b). This measurement is more analogous to that made in [32], although as noted above, direct comparison with these earlier measurements is challenging. As expected, neglecting the IA contribution in the background sample biases the resulting IA measurement to a lower magnitude: the disparity between ∆Σ measurements in the two samples is effectively increased when allowing for IA in the background sample. The induced bias is smaller than the level of uncertainty in the measurement for blue, red, and all sources. Because this assumption also reduces the statistical uncertainty in the measurement, the resulting confidence intervals are fully contained within those of our more general and self-consistent measurement. Although ignoring IA in the background sample does not bias the IA measurement in a statistically significant way, it results in overly optimistic confidence regions. In effect, neglecting IA in the background sample converts a statistical error into a systematic error. For the remainder of this work, we consider results using our fully consistent method. For red sources, where the assumption of a uniform excess source population between the two samples is justified, we use the "extended boosts," while for all and blue sources, we use the boosts as measured.
5.2 Model-independent measurement
Comparing measurements with and without accounting for background IA tests the impact of the assumption that γ̃^IA is the same in the two samples. As discussed in appendix A, the a and b samples can have somewhat different average galaxy alignment properties. However, even if γ̃^IA differs between them, it cannot do so more strongly than in the case of completely neglecting IA in the background sample. Since the bias induced in this limiting case is minimal, we can be confident in applying our measurement method to all, red, and blue sources.
In all cases, IA contamination is consistent with zero at the scales considered. The uncertainty in IA for red galaxies is smaller than for blue galaxies since the former cluster more strongly and have better photo-z precision, leading to a larger difference in the number of excess galaxies between the a and b samples. Appendix B discusses the sources of uncertainty and the effects of splitting sources by color and photo-z.
5.3 Including minimal model dependence
The results shown in the previous section place constraints on IA as a function of projected separation r_p, utilizing only the lensing measurements themselves. Treating the IA measurements at different separations as independent is the maximally conservative approach. Including information about the underlying model of IA, which relates the signal strength on different transverse scales, will yield tighter constraints. Determining an accurate model for IA is an ongoing theoretical challenge, made more complicated by the heterogeneous nature of source samples used in lensing studies. For this reason, we aim to make fairly minimal assumptions and show the resulting constraints for two cases: a generalized power-law model and a model motivated from IA measurements of LRGs.

Figure 3. Results of the IA measurement across a range of projected separations. Blue circles indicate the use of "extended boosts." Green triangles indicate that the IA contamination in the background sample was assumed to be zero. All three methods yield results consistent within the statistical uncertainty, although assuming that the background sample has no IA slightly biases the magnitude of γ̃^IA to lower values.

The 1000 bootstrap realizations are combined to construct a full covariance matrix for the IA measurement:

Ĉ_ij = ⟨ (γ̃_i^IA − ⟨γ̃_i^IA⟩)(γ̃_j^IA − ⟨γ̃_j^IA⟩) ⟩,

where angled brackets indicate an average over the realizations, and i and j index bins in r_p. We then use this covariance matrix to find best-fit parameters for a particular model for each bootstrap realization. At every value of r_p, a confidence interval is constructed using the N_boot predictions from the model fits. Thus, in the case of a model with multiple parameters, or one that does not monotonically change with its parameter, the resulting confidence region envelopes may have a shape different from the model itself. Because it is calculated from bootstrap realizations, the covariance matrix estimated here, Ĉ, is only an estimate of the true covariance matrix and will not result in a χ²-distribution (see [32]). However, since we directly construct confidence intervals, we only minimize χ² and need not worry about its distribution. After calculating Ĉ, we make the further assumption that off-diagonal terms are zero. As discussed in Mandelbaum et al. (2012, in prep.), shape noise dominates the covariance of the lensing signal on the scales used here, and thus correlations between bins become appreciable only when the same sources are correlated with multiple lenses, i.e., at r_p roughly twice the typical separation between lenses, or ≈ 20 h^{-1} Mpc.

Figure 4. Model-dependent confidence intervals for the IA signal for all, blue, and red sources are shown with the model-independent measurements (black data points). Our results are consistent with zero IA signal. Solid black (dashed green) lines denote the power-law (LRG observational) model. Inner lines bound the 68% confidence region while outer lines correspond to 95%. Previous observational results for L3 and L4 red galaxies from [10] and for WiggleZ galaxies from [14] are shown for comparison. These previous results have been converted to the γ̃^IA quantity, as discussed in section 5.2.

Figure 4 shows the resulting confidence intervals. We first consider a generalized power-law model of the form γ̃^IA = A r_p^β, where both A and β are free parameters fit to each realization. This model is well motivated in the regime where both w_g+ and w_gg can be approximated by power laws (see, e.g., [10]). As discussed above, the resulting envelope of the confidence region does not follow a power law. Instead, constraints are tight on the intermediate scales probed by these lensing measurements (where the IA signal is consistent with zero), and rapidly increase on large and small scales where there is little IA information.
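A compact sketch of the model-fitting step: build the (diagonalized) bootstrap covariance and minimize χ² for the power-law model per realization. scipy's Nelder-Mead is our choice here; the paper does not specify an optimizer.

```python
import numpy as np
from scipy.optimize import minimize

def fit_power_law_envelope(rp, draws):
    # draws: bootstrap realizations of gamma_IA(r_p), shape (n_boot, n_bins).
    var = draws.var(axis=0)  # diagonal of C-hat; off-diagonals set to zero
    best_fits = []
    for g in draws:
        chi2 = lambda p: np.sum((g - p[0] * rp ** p[1]) ** 2 / var)
        res = minimize(chi2, x0=[0.0, -1.0], method="Nelder-Mead")
        best_fits.append(res.x)  # (A, beta) for this realization
    # The per-r_p envelope of the best-fit curves gives the confidence bands.
    curves = np.array([A * rp ** beta for A, beta in best_fits])
    return np.percentile(curves, [16, 84], axis=0)
```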
We also consider a model motivated by the IA observations of LRGs. Following eq. 4.11, γ̃^IA can be estimated as the ratio w_{l+}/w_{ls}. Using the LRG observations of [10], we calculate this quantity, smoothing with a Savitzky-Golay filter and interpolating to obtain a continuous scale dependence. We fit the result to our IA measurements with a single parameter for the amplitude, which allows for differences in IA strength and galaxy biasing between the LRGs and the fainter source galaxies used in this work. We use LRG observations because they exhibit a significantly stronger IA signal, providing sufficient S/N (above r_p = 0.9 h^{-1} Mpc) to determine a well-defined scale dependence. Non-linear effects and environmental dependence of IA mean that LRG measurements on smaller scales are unlikely to accurately reflect the behavior of the relevant sources. We thus use a spline to extend to smaller scales, and note that constraints in this region are contingent on the smooth continuation of IA behavior. We fit this model using measurements on scales above r_p = 0.44 h^{-1} Mpc, unlike in the case of the power-law shape, where all measurements are used.
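The template construction can be sketched with standard scipy tools; the window and polynomial order below are illustrative, and the input arrays stand in for the measured w_{l+} and w_{ls} of [10].

```python
import numpy as np
from scipy.signal import savgol_filter
from scipy.interpolate import interp1d

def ia_template(rp, w_lp, w_ls, window=5, poly=2):
    # Smooth the ratio w_{l+}/w_{ls} (the gamma_IA scale dependence) in log r_p
    # with a Savitzky-Golay filter, then interpolate to a continuous template.
    # Requires len(rp) >= window, window odd, and window > poly.
    ratio = w_lp / w_ls
    smooth = savgol_filter(ratio, window_length=window, polyorder=poly)
    return interp1d(np.log(rp), smooth, kind="cubic", fill_value="extrapolate")
```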
This observational model assumes that the dimmer source galaxies considered here have the same IA scale dependence as the LRGs, allowing for changes in amplitude due to differences in clustering bias, alignment bias, and object ellipticity. The model is thus of limited use on scales where IA and clustering are qualitatively different (e.g. where a different physical mechanism for alignment is dominant). Unlike the power-law model, this observational model provides well-behaved confidence regions on larger scales because LRG observations extend beyond the region where we obtain constraints. As noted above, however, constraints from the observational model on scales below r_p = 0.9 h^{-1} Mpc are less robust.
On larger scales, where clustering is weak, the statistical distinction between "excess" and "associated" pairs becomes significant, and the number of associated pairs will no longer be well approximated by w_gg. In this regime, the validity of both models discussed here is diminished.
5.4 Contamination to lensing signal
As seen in eq. 4.7, the level of IA contamination in the lensing signal depends on both the IA signal per excess object, γ̃^IA, and the fraction of excess galaxies in the particular sample (directly measured by the boosts). Figure 5 shows the fractional contamination resulting from the IA constraints, both with and without assumptions on IA scale dependence. Results are shown both for all background sources and for a subset of sources with photo-z at least ∆z = 0.17 behind the lens. Using this background sample with a photo-z margin removes a large fraction of excess galaxies and thus greatly reduces the potential contamination from IA. Due to the lensing weights used, this cut can be applied without significantly reducing the statistical power of the measurement. With the current level of binning in projected separation, the statistical uncertainty in the ∆Σ measurement is ≈ 5-15%, depending on the sample and projected separation. For instance, at r_p = 1 h^{-1} Mpc the uncertainty in ∆Σ is ≈ 7% for all and blue sources and ≈ 10% for red sources. In most cases, the limits on fractional contamination from IA for the scales we measure (≈ 0.1-10 h^{-1} Mpc) are below this uncertainty, significantly so when model dependence is included.
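In code, the contamination estimate of eq. 4.7 is a one-liner given the measured ingredients (the argument names are ours):

```python
def frac_contamination(c_z, B, gamma_ia, sc_ex, delta_sigma):
    # Eq. 4.7: IA contribution relative to the true lensing signal for sample s.
    return c_z * (B - 1.0) * gamma_ia * sc_ex / delta_sigma
```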
The limits on fractional contamination in figure 5 for the all and blue samples are conservative. As discussed in section 5.2 and appendix B, the uncertainty in γ̃^IA is larger for blue than for red galaxies, even though both observational results and theoretical predictions (e.g. [10,23,61]) indicate that IA for blue objects is minimal. This consideration suggests that more realistic constraints can be obtained by taking γ̃^IA for blue galaxies to be less than that for red galaxies.

Figure 5. Fractional contamination to the lensing signal is shown for all, blue, and red galaxies. Data points and lines follow the same convention as figure 4. The left column shows the contamination when the entire src sample (z_p > z_l) is used, while the right column shows contamination when only the background sources at greater separation (z_p > z_l + ∆z) are used. As expected, including scale-dependent assumptions and applying the photo-z cut both provide tighter constraints.

Figure 6 shows the resulting fractional contamination for two limiting cases: γ̃^IA = 0 for blue galaxies, and γ̃^IA the same for blue and red galaxies. With the photo-z cut, the more optimistic assumption (γ̃^IA = 0 for blue galaxies) yields a constraint of ∼1% on IA contamination for all source galaxies on the scales we measure. The more pessimistic assumption of γ̃^IA being the same for red and blue galaxies results in IA constraints of a few percent on these scales.
6 Discussion
In this work we have developed a method for measuring the intrinsic alignment of source galaxies in a galaxy-galaxy lensing measurement, as well as the resulting contamination to the lensing signal. Unlike previous IA measurement techniques, the method is fully self-consistent, yielding an unbiased measurement of both IA and ∆Σ, and can be used to directly infer contamination to the galaxy-galaxy lensing signal. Applying this method to SDSS LRG lenses and photometric sources, we find a signal consistent with zero IA for all, blue, and red source galaxies at transverse separations of ≈ 0.1-10 h^{-1} Mpc. To obtain tighter constraints on IA, we have assumed a functional form for its scale dependence, using two different physically reasonable models: a general power law and a model motivated by LRG observations. These two models yield similar results on intermediate scales, diverging on large scales where there are no measurements to provide information. We note that while our results are only directly applicable to galaxy-galaxy lensing, the alignment mechanisms being probed are the same as those that can contaminate shear-shear correlations. Given an IA model or parametrization, one could use galaxy-galaxy lensing results to determine constraints on the corresponding GI contamination in shear-shear correlations. The GI term is believed to be the primary IA systematic, since the intrinsic-intrinsic (II) term can be removed with redshift information. Our current work examines a redshift range somewhat below that most relevant for upcoming lensing surveys. Similar analysis with higher redshift lenses will allow us to place constraints on the GI term in future measurements of both galaxy-galaxy lensing and shear-shear correlations.

Figure 6. Fractional contamination to the lensing signal is shown under two different assumptions on γ̃^IA. The top row shows the fractional contamination to all sources if γ̃^IA = 0 for blue galaxies. The middle and bottom rows show the fractional contamination to all and blue sources, respectively, if γ̃^IA for all galaxies is the same as for red galaxies. These reasonable assumptions significantly tighten the constraints on contamination from IA. Columns and line conventions are the same as in figure 5.
IA for blue galaxies is less well constrained due to their weaker clustering and poorer photo-z precision. However, previous studies indicate that red galaxies should have a stronger IA signal. Thus, although a conservative approach allows for larger IA in blue galaxies, it is reasonable to take the results for red galaxies as an overall upper limit.
Compared to previous IA measurements, our analysis most closely probes the redshifts and galaxy luminosities relevant to current and future lensing measurements. Our results are broadly consistent with previous studies (e.g. [10,13,14,32,63]), which find an insignificant IA signal for similar galaxies at lower redshifts (see figure 4). One possible exception is the red L3 sample of [10,13], where there is a weak detection of IA at r_p ≈ 10 h^{-1} Mpc. However, there are notable differences between our current study and previous ones. We measure IA for typical sources at higher redshift than [10,13,32], and probe alignment in cross-correlation with LRGs rather than in auto-correlation. We also note that recent measurements [10-13] detect a significant autocorrelation IA signal for an LRG sample similar to our lens sample. This signal is not inconsistent with our findings, which constrain the IA of significantly less luminous galaxies around LRGs. However, this disparity indicates that different types of galaxies display different alignment in the same underlying density field (e.g. LRGs are significantly more aligned). Such a result is not surprising given the likelihood of environmental and nonlinear effects: alignment on these scales depends on more than simply the density field. Comparison with previous IA measurements is difficult since it requires assumptions on how IA and clustering behaviors change on nonlinear scales with object type, redshift, and environment. In addition to the effects discussed above, if the primary alignment mechanism acts during periods of formation or accretion, followed by decreasing alignment due to stochastic nonlinear astrophysics, we would also expect an IA signal with significant redshift evolution (see, e.g., [26]). Furthermore, measuring IA at low redshift can be done using spectroscopic galaxies directly, without having to account for the lensing signal, as is done in [10]. Any tension between the constraints found here and previous low-z measurements is therefore relevant to IA modeling but not worrisome from a consistency standpoint. We expect future measurements with higher S/N to detect a non-zero IA signal for galaxies typically used as lensing sources, although it will be weaker than that for LRGs.
We also compare our constraints with those found in [14], who use spectroscopic redshifts to directly constrain the IA of galaxies in the WiggleZ sample. These galaxies have a similar luminosity and roughly comparable redshift distribution to those in this study, although the WiggleZ galaxies peak at larger z. The biggest difference between the samples is color: WiggleZ selects emission-line galaxies with UV observations, resulting in a sample that corresponds to the 10-20% bluest galaxies of the "blue" galaxies examined here. Nevertheless, comparison between the results is informative. As seen in figure 4, despite the fact that our method is less efficient with blue galaxies than with red galaxies, the constraints we find for this sub-sample are tighter than the limits from [14]. Even allowing for uncertainty in converting between the different types of measurements and different object properties, our photometric approach to IA is competitive with spectroscopic methods, which necessarily use smaller galaxy samples. Given the wealth of upcoming photometric surveys, the disparity in size between photometric and spectroscopic samples with reliable shape measurements will only increase.
We have placed constraints on the potential contamination to the lensing signal from IA: including minimal model assumptions on scale dependence, contamination is constrained to be below ∼5% at the 95% confidence level on the scales we probe. IA thus remains a subdominant source of uncertainty when compared with the current statistical error on the galaxy-galaxy lensing signal of ≈ 5-15%, given the binning in transverse separation. These results apply to the source selection and redshift quality of this particular study: contamination will be worse if less redshift information is used. Constraints on contamination depend not only on source properties such as color, but also on the photo-z selection cut used. As seen in figure 5, constraints are significantly tighter when sources are selected with a photo-z cut that removes a large fraction of galaxies physically associated with the lens. In this work, we apply the cut z_p > z_l + ∆z, for ∆z = 0.17, although the optimal selection will depend on the specifics of a particular survey and its science goals. Applying such a cut can greatly reduce the possible IA contamination without sacrificing much signal, since distant lens-source pairs dominate the signal due to the weights. Statistical uncertainties in the ∆Σ measurement with and without the photo-z cut agree to within 10%. Similarly, the relative importance of IA is greater for lens galaxies at higher redshifts: the distribution of background source galaxies will be concentrated closer to the lens positions, increasing the number of physically associated galaxies that receive appreciable weight.
Our findings do not suggest a clear method to cut sources by color to reduce potential IA contamination. Although the constraints found here are tighter for red galaxies, this is because our method performs better for sources that have stronger clustering and better photo-z precision. It is likely that blue galaxies display weaker alignment, and it is unclear to what extent this trend is offset by poorer photo-z precision. Future lensing measurements with lower statistical uncertainty should resolve this issue. However, it is likely beneficial to remove sources with particularly high photo-z uncertainty in order to decrease both potential IA contamination and systematic uncertainty from imperfect photo-z bias correction. In this work, we split the source galaxies into only two sub-samples. With improved statistics, it will be possible to use additional sub-samples in order to break a possible degeneracy between IA strength and correlation between the IA properties and photometric redshift uncertainties of source galaxies. This issue is discussed in appendix A.
High-precision photometric lensing science will be a primary tool in the future of observational cosmology. As such, upcoming surveys such as KIDS, DES, HSC, and LSST are all designed to obtain photometry and high-resolution imaging for a large number of sources. While the data sets are being built and understood, early focus will likely be on galaxy-galaxy lensing, utilizing the large number of massive foreground galaxies with well-determined redshifts, such as galaxies from current spectroscopic surveys or galaxy clusters. In the near term, galaxy-galaxy lensing analysis is currently ongoing with the BOSS CMASS sample [65,66] as lenses, and we hope to apply our method as part of that analysis. This sample is at higher redshift (0.4 ≲ z ≲ 0.7), allowing a probe of IA in a new redshift regime relevant for the next generation of lensing surveys, where lens-source overlap may increase the impact of IA. Similarly, we intend to use our method with cluster lenses to study the IA of galaxies around cluster centers. Since clusters are even more highly biased than LRGs, we would expect larger numbers of physically associated galaxies in the source sample as well as a higher amplitude for alignment effects. IA contamination for cluster lenses could bias the measured lensing masses and the resulting cosmological results, particularly given that cluster lensing masses are typically determined using lensing signals on even smaller scales than were considered here, where IA effects will be more important.
In the longer term, future studies should obtain galaxy-galaxy lensing signals with uncertainty at the ∼ 1% level. Thus, although IA may become a significant systematic as the statistical errors decrease, applying the method outlined here will yield correspondingly stronger limits on IA (roughly equal to the precision of the lensing measurement) and will allow the removal of this contamination. When IA constraints are improved by assuming a specific radial scaling of the IA signal, they should reach the sub-percent level. Indeed, given the detection of IA in previous spectroscopic studies, these upcoming photometric surveys should achieve a significant detection of IA with the joint lensing and IA measurement method introduced in this paper. These measurements will further reduce systematic uncertainties in lensing studies. Moreover, they will probe the physical mechanisms of intrinsic alignment and thus improve our understanding of galaxy formation and evolution.
A Equating IA in different samples
In principle, solving simultaneously for ∆Σ and γ̃^IA requires that the average value of γ̃^IA in the two source sub-samples, split by photo-z in relation to the lens position, be the same. As discussed above, we find that potential violations of this assumption lead to sub-dominant bias effects given the current level of measurement uncertainty. However, upcoming studies will provide greatly improved precision, and thus a discussion of this issue is warranted.
The above assumption need not hold if some set of galaxy properties, such as color or luminosity, correlates with both the photo-z uncertainty and the level of IA. Source samples defined by different lens-source line-of-sight separation could then have different average IA properties. Since an important step in the determination of an object's photometric redshift is fitting to an expected spectral template based on multi-band photometry, different morphological types can have different photo-z precision (see [29]). Both observational and theoretical studies have suggested a divide in IA properties along morphological lines: late-type spirals (typically blue and star-forming) and early-type ellipticals (typically red) are likely subject to differing physical processes that affect alignment, e.g. [23]. In the case of spirals, angular momentum provides the major source of object support and can thus significantly influence orientation via tidal torquing. Elliptical galaxies are pressure-supported through velocity dispersion and are thus expected to align more closely with the surrounding halo and underlying tidal field. Indeed, several observations of IA have found a much stronger signal for red galaxies ([10, 61]).
To mitigate this issue, one can split the sample by photometric template type. Ideally, multiple splits would be used to isolate individual template types. However, dividing the source sample reduces statistical power, which quickly becomes the limiting factor. Previous IA studies have relied on a simple division between red and blue galaxies, and future lensing measurements are unlikely to consider more complicated template separation. Thus, to both maximize the signal and make the results directly relevant, in this work we have applied a single split. The ZEBRA pipeline interpolates between six primary template types, allowing five subdivisions between each. We define all galaxies assigned to the first five interpolated templates, between the elliptical (Ell) and Sbc primary templates, to be red. All other galaxies are blue (see [29] for further information on the templates).
The sums over weights that determine the boosts can also be used to calculate the effective fraction of red (or blue) galaxies in each source sample as a function of r_p, automatically accounting for the lensing weights. Figure 7 shows the effective red fraction of galaxies around random lenses (f_r), total galaxies around real lenses (f_t), and excess galaxies around real lenses (f_e). This fraction is defined as Σ_red w̃_j / Σ_all w̃_j, where the sums are taken over the set of random, total, or excess lens-source pairs (determined statistically). Because it reflects the underlying distribution and photo-z scatter of sources, with no influence from clustering, f_r should be constant as a function of r_p and is included as a reference and check on systematics. The quantity f_e shows why splitting by color may be necessary. Since γ̃^IA in the two different source samples should be the same, the excess galaxies in the two samples should have the same properties. The excess object populations in samples a and b are quite different and thus will have different IA signals unless all galaxies, regardless of color, have the same IA behavior.
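A sketch of these weighted fractions; is_red would come from the ZEBRA template split, and the excess fraction is obtained by differencing lens and random pair sums (all names are ours).

```python
import numpy as np

def weighted_red_fraction(w, is_red):
    # f_r or f_t: lensing-weighted red fraction of a set of lens-source pairs.
    return np.sum(w[is_red]) / np.sum(w)

def excess_red_fraction(w_lens, red_lens, w_rand, red_rand):
    # f_e: red fraction of the statistically determined excess population.
    num = np.sum(w_lens[red_lens]) - np.sum(w_rand[red_rand])
    den = np.sum(w_lens) - np.sum(w_rand)
    return num / den
```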
The behavior of f_t and f_e seen in figure 7 is easily understood in terms of basic source clustering properties and photo-z uncertainties. As with many photo-z estimation procedures, ZEBRA provides more accurate redshifts for red galaxies. Poorer photo-z precision for blue objects results in a larger fraction of excess galaxies (actually located at z_l) being scattered into the b sample. This effect increases the values of f_r and f_t for the a sample, since blue galaxies have been preferentially removed. Furthermore, the fact that red galaxies are more highly biased than blue galaxies yields the observed scale dependence: f_t and f_e increase on small scales where clustering is more significant. The effects of this photo-z trend can also be seen in figure 1. The separation between B_a and B_b is larger for red galaxies, indicating a more accurate physical distinction between the two sub-samples. Similarly, B_b for blue galaxies is larger than for red galaxies, despite their weaker clustering, because there are more physically associated galaxies with large photo-z errors.
To examine the effectiveness of the split, we perform an additional test using the ratio of boost factors between the two samples: R(r_p) ≡ (B_a(r_p) − 1)/(B_b(r_p) − 1). The quantity (B − 1) measures the fractional contamination from excess sources and is given by a projection of the lens-source cross-correlation (eq. 4.8). Comparison of the observed boost with this prediction could indicate whether the "excess" sources in samples a and b are statistically equivalent. In principle, knowledge of the source photometric and spectroscopic redshift distributions (e.g. from a calibration sample) allows modeling of the projection weighting (P̃(z_s, z_l) in eq. 4.8), from which it is possible to de-project the measured boost into a cross-correlation function. However, it is impractical to obtain P̃(z_s, z_l) with adequate resolution to probe the correlation function on relevant scales (r_p ≲ 10 h^{-1} Mpc). Furthermore, it is challenging to construct a set of galaxies from the calibration set that reliably measures the photo-z behavior of sources near lens galaxies. Another way to probe the sufficiency of the object split is to look at the ratio of measured B − 1 values for different samples. Because of large photo-z uncertainties, P̃(z_s, z_l) will be broad compared with ξ_ls and is thus effectively constant in the numerator of eq. 4.10. If the r_p and z_l dependencies of ξ_ls separate, we find:

R(r_p) = [B_a(r_p) − 1] / [B_b(r_p) − 1] ∝ w_ls^{s_a}(r_p) / w_ls^{s_b}(r_p),   (A.1)

where w_ls(r_p) ≡ ∫ dΠ p_s(Π) ξ_ls(r_p, Π) is the projected cross-correlation function, and s_a and s_b denote the sources found in samples a and b. The constant of proportionality involves the ratio of integrals over lens and source distributions as well as the photo-z projection weighting P̃.
If samples a and b contain a statistically similar collection of sources, this ratio will be roughly constant across the range of scales we wish to probe. Scale dependence in the ratio could indicate that the samples have a different composition and thus different scale-dependent biases in the cross-correlation.
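A minimal version of this constancy diagnostic, assuming bootstrap draws of the boosts are available (shapes and names are ours):

```python
import numpy as np

def ratio_constancy_test(Ba_draws, Bb_draws):
    # Draws have shape (n_boot, n_rp_bins). Form R(r_p) = (B_a-1)/(B_b-1),
    # fit an inverse-variance-weighted constant, and report chi^2 as a
    # homogeneity diagnostic (large chi^2 suggests a composite sample).
    R = (Ba_draws - 1.0) / (Bb_draws - 1.0)
    R_mean, R_err = R.mean(axis=0), R.std(axis=0)
    Rbar = np.sum(R_mean / R_err**2) / np.sum(1.0 / R_err**2)
    chi2 = np.sum((R_mean - Rbar) ** 2 / R_err**2)
    return Rbar, chi2
```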
In figure 8, the quantity R(r_p) is plotted for all, red, and blue source galaxies. Previous work indicates that red galaxies have reasonably well-understood patterns of small-scale clustering, photo-z error, and IA [13,67,68]. The roughly constant value of R(r_p) measured for red galaxies supports the conclusion that we have selected a source sample with uniform properties across the photo-z split. The IA properties of "blue" galaxies are poorly understood, allowing for more significant variation within this broad sample. The measurement of R(r_p) for blue sources in figure 8 indicates that the sample exhibits significant heterogeneity in clustering (and likely IA) properties. Results for blue sources are thus more subject to systematic uncertainties, although as discussed above, these uncertainties are currently sub-dominant to the relatively large statistical uncertainty on intrinsic alignment contamination. However, the R(r_p) diagnostic is not conclusive. For instance, the entire source sample with no color split contains multiple galaxy populations, each with different clustering and IA properties. The value of R(r_p) depends on the relative abundance as well as the scale-dependent bias of each. As seen in figure 8, these factors result in a roughly constant value of R(r_p) for the entire source sample, despite its composite nature. Similarly, if assumptions on the relevant scale dependencies in the measured boost are violated, a uniform sample could exhibit a non-constant R(r_p). We reiterate that for this work, even if the assumption of sample homogeneity is violated, as we expect it to be for the all and blue samples, our method still provides a more self-consistent and unbiased IA measurement than earlier techniques. The bias potentially introduced by this assumption is shown in section 5.2 to be small. In future studies with higher measurement precision, the implications of this assumption should be considered.
More generally, with sufficient statistical power, it will be possible to determine the effect of correlations between IA properties and photometric redshift uncertainties. We expect such correlations to exist, even within a single morphological category, since both IA and photo-z quality are affected by galaxy luminosity (e.g. [10,55]). Splitting the source galaxies into more than two sub-samples will allow a measurement of how γ̃^IA varies. Any two sub-samples can be used to solve for γ̃^IA, and the results from different sub-samples can then be compared.
B Sources of uncertainty
The sources of uncertainty in the IA measurement are seen in eq. 4.5. ∆Σ is subject to shape and measurement noise. Scatter in the number of random-source pairs leads to uncertainty in both the boost factors and Σ̃_c^ex. All of these quantities are affected by photo-z scatter, the bias from which is corrected with the c_z factor. The uncertainty in c_z, as measured from calibration sets, is at the ∼2-3% level [29], including both statistical and systematic effects, and we thus do not include it. We note, however, that γ̃^IA depends only on the ratio of c_z between the source samples. Thus, for deeper surveys for which it can be difficult to obtain sufficiently representative photo-z calibration samples, the uncertainty in c_z can be mitigated by measuring the ratio of the lensing signal for the two samples on large scales (≈ 50 h^{-1} Mpc), where the only difference should be due to photo-z calibration. With the exception of c_z, all the sources of uncertainty mentioned here are automatically included in the bootstrap realizations. On the range of scales considered here, the most significant source of error is the shape and measurement noise in ∆Σ.
As seen in figure 2, ∆Σ is measured at similar levels of precision for red and blue sources, since they have roughly the same number of lens-source pairs and similar measurement noise per pair. However, the uncertainty in the IA signal for blue galaxies is higher than for red galaxies. This difference is due to the stronger clustering and more precise ZEBRA photo-z measurements for red source galaxies. The denominator of eq. 4.5 converts the difference between ∆Σ measurements into the IA signal. When B_a ≫ B_b, the denominator scales as B_a − 1, and uncertainty is reduced when strong clustering yields a large value of B_a. If the difference between B_a and B_b is smaller, as would be the case for large photo-z uncertainties, the denominator is further reduced. These two factors combine to degrade the precision of the IA measurement for blue galaxies.
Examining these sources of uncertainty also allows us to understand the effects of different source splittings on the measurement precision. If the boost factors were held fixed and perfectly known, the error in ∆Σ_a − ∆Σ_b would be minimized when the a and b samples had equal lens-source pair numbers, weighted by (c_z B Σ̃_c)^{-1} for each sample. However, changing the location of the photo-z split will affect both the number of lens-source pairs and the overall boost in each sub-sample. Increasing the value of B_a will decrease the overall error for a given measurement uncertainty. Moreover, uncertainty in the boost factors themselves contributes a non-negligible error, which decreases with larger B_a in the relevant regime. Finally, having (B_a − 1) ≫ (B_b − 1) both decreases the uncertainty and avoids a singular IA estimator, which would yield a large, non-Gaussian variance. Our choice of ∆z = 0.17 was motivated by these considerations, although it is not rigorously shown to be the optimal value. We tried other, similar splitting schemes and found that our choice yields smaller uncertainties in γ̃^IA.
"year": 2012,
"sha1": "8370f8532016c364f7dceacb0dbe3610f23e4b0f",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1204.2264",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "8370f8532016c364f7dceacb0dbe3610f23e4b0f",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
The Slender Esophagus: Unrecognized Esophageal Narrowing in Eosinophilic Esophagitis
INTRODUCTION: Inflammation in eosinophilic esophagitis (EoE) often leads to esophageal strictures. Evaluating esophageal narrowing is clinically challenging. We evaluated esophageal distensibility as related to disease activity, fibrosis, and dysphagia. METHODS: Adult patients with and without EoE underwent endoscopy and distensibility measurements. Histology, distensibility, and symptoms were analyzed. RESULTS: Patients with EoE had significantly lower distensibilities than controls. We found a cohort with esophageal diameter under 15 mm despite a lack of dysphagia. DISCUSSION: This study raises concern that current assessments of fibrostenosis are suboptimal. We describe a cohort with unrecognized slender esophagus that was identified through impedance planimetry measurements. This tool provides additional information beyond symptomatic, histologic, and endoscopic assessments.
INTRODUCTION
Eosinophilic esophagitis (EoE) is an allergic inflammatory condition characterized by esophageal infiltration of eosinophils. Natural history studies suggest that unchecked inflammation ultimately leads to fibrostenotic disease (1). However, before the onset of frank strictures, esophageal narrowing is challenging to assess symptomatically due to lifestyle changes such as food avoidance and prolonged eating. Determining the degree of fibrostenosis is challenging with esophagogastroduodenoscopy, radiography, and biopsies alone, which allow for limited sampling and assessment of esophageal diameter (2). Understanding esophageal stiffness, narrowing, and distensibility requires further modalities.
The endolumenal functional lumen imaging probe (FLIP) provides novel information on esophageal distensibility. Thus far, limited studies of adult patients reveal that patients with EoE have lower esophageal distensibility than control patients (3)(4)(5). However, the relationship between distensibility and disease activity may vary by age. We therefore sought to evaluate the relationship between histologic, endoscopic, and symptomatic findings and esophageal distensibility in adult patients with EoE and to determine the utility of EndoFLIP in distinguishing "slender" esophagus missed on routine endoscopy.
METHODS
Adult patients with EoE were prospectively recruited at the Hospital of the University of Pennsylvania. Patients were excluded if they had any anatomic esophageal abnormality unrelated to EoE, a history of chest radiation, esophageal surgery, motility disorder, or inflammatory bowel disease. Symptom assessment was performed on the day of the endoscopy. Control patients with normal esophageal biopsies were included; most of these patients underwent endoscopy for reflux, dyspepsia, nausea, and vomiting. This study was approved by our center's institutional review board. All subjects provided informed consent.
The FLIP EF-322 catheter (Medtronic, Fridley, MN) was used in this study. The probe was placed transorally and passed to the esophagogastric junction, as described by Nicodeme et al (4). Distensibility plateau was defined by the minimal esophageal body diameter at maximum esophageal distension at an intrabag pressure of 40 mm Hg, using methods described by Menard-Katcher et al (5). Standard clinical practice biopsies were obtained after FLIP measurements, and eosinophil counts were assessed by pathologists. Esophageal biopsies were analyzed using the lamina propria (LP) scores from the EoE histology scoring system (6). A score of "not applicable/evaluable" was reported for samples containing <35 μm of LP thickness or samples where technical artifact impaired scoring.
Data are presented as mean values ± SEM or mean values ± SD and were analyzed by using the 2-tailed Student t test, ANOVA, or the χ² test, where applicable. A P value of less than 0.05 was considered significant. Data were analyzed with the software package Prism (GraphPad Software, La Jolla, CA). All authors had access to the study data and reviewed and approved the final manuscript.
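As an illustration only, the group comparisons described above map onto standard SciPy calls; the arrays and counts below are placeholders, not the study data:

    import numpy as np
    from scipy import stats

    # Hypothetical distensibility-plateau diameters (mm) per group.
    control = np.array([18.2, 19.5, 20.1, 17.8])
    active_eoe = np.array([14.1, 15.3, 13.9, 16.0])
    inactive_eoe = np.array([15.8, 14.7, 16.2, 15.1])

    # Two-tailed Student t test: active EoE vs control.
    t_stat, p_ttest = stats.ttest_ind(active_eoe, control)

    # One-way ANOVA across the three groups.
    f_stat, p_anova = stats.f_oneway(control, active_eoe, inactive_eoe)

    # Chi-squared test on a 2x2 contingency table, e.g. dysphagia (yes/no)
    # versus diameter (<15 mm / >=15 mm); the counts are invented.
    table = np.array([[6, 7], [20, 15]])
    chi2, p_chi2, dof, expected = stats.chi2_contingency(table)

    print(f"t test p={p_ttest:.3f}, ANOVA p={p_anova:.3f}, chi2 p={p_chi2:.3f}")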
RESULTS
Forty-eight adult patients with EoE and 17 control patients were enrolled in this study. Patients were predominantly White (92%), male (56%), and younger than 50 years (88%). Both active and inactive patients with EoE had a significantly lower distensibility index compared with control patients (P < 0.05 for active vs control and inactive vs control) (Figure 1a). Similar to the findings of Nicodeme et al, patients with active EoE (defined as greater than 15 eosinophils per high-power field) had similar distensibility measurements as inactive patients (4). Distensibility index did not correlate with eosinophil counts in patients with EoE (R² = −0.06, P = 0.0502) (Figure 1b and c). Patients with a history of stricture requiring dilation did not have significantly different distensibility compared with those without, although there was an overall small population with previous dilation (Figure 2). Strikingly, patients with a history of food impaction requiring endoscopic removal or symptoms of dysphagia in the preceding 30 days did not have differences in their distensibility compared with those without these factors. We eliminated patients with any critical narrowing (<10 mm) requiring dilation during the procedure.
In total, 13 of the 48 patients with EoE had an esophageal diameter less than 15 mm. The diameter of active patients with EoE ranged from 11.43 to 21.2 mm while the diameter of inactive patients with EoE ranged from 13.77 to 19.98 mm (Figure 1a). Of the patients with a diameter <15 mm, 6 had no dysphagia, 6 had no prior food impaction, and 11 had no prior stricture (Table 1). Comparison of the populations did not show any significant differences between the population with a >15 mm esophagus and those with <15 mm, save for disease activity, although the presence of rings trended toward significance. Taken together, these results demonstrate a population without known complications, symptoms, or endoscopic findings that has a narrowed esophagus.
DISCUSSION
In EoE, dysphagia symptoms, histology, and endoscopic appearance do not necessarily shed light on true diameter. Our data suggest a group of patients with a slender esophagus (diameter ranging from 10 to 15 mm) with no histologic signs of fibrosis and no frank dysphagia. Using FLIP, we identified patients with a previously unrecognized slender esophagus and targeted these patients for more aggressive management and dilation.
This study confirms prior findings in both adult and pediatric populations showing that esophageal distensibility is decreased in EoE. Furthermore, it demonstrates that the absence of disease activity does not necessarily improve distensibility in the adult population, a finding that stands in striking contrast to the pediatric population (5,7,8). A recent EoE disease severity index has been published, which focuses on symptoms, eosinophil count, endoscopic findings, the presence of LP fibrosis, and the ability to pass a standard adult upper endoscope (9). Furthermore, it takes complications including emergent food impactions and a history of dilation into account. Our data reveal an EoE subgroup with an abnormal esophageal diameter that lacks obvious dysphagia, narrowing, inflammation, or complications. In addition, patients with a slender esophagus may be clinically indistinguishable from patients in deep remission due to careful food selection and behavioral adjustments. Thus, it may be challenging to assign a true disease severity score in these cases without the use of advanced technology.
One new finding from our study was that the degree of LP fibrosis, as scored by extent and grade, showed no relationship with distensibility. While previous reports determined that adequate LP sampling occurs in approximately 50% of endoscopies with biopsy, LP fibrosis, when present, was believed to be a reliable marker of remodeling in the subepithelium (2,5). However, our results highlight that there is little difference in distensibility based on the severity of LP fibrosis. Therefore, relying solely on LP fibrosis, even when adequately sampled, may not be sufficient to evaluate subepithelial remodeling or adequately characterize the EoE phenotype.
This study highlights the dichotomy between dysphagia assessment and esophageal diameter. Simple symptom assessment does not capture the status of the esophagus. An in-depth understanding of symptoms with interrogation of eating habits is required, and measurements through impedance planimetry may provide a more complete assessment. In this study, we used FLIP to identify a novel cohort of patients with EoE with a slender esophagus that may otherwise be overlooked. These patients may benefit from dilation and optimization of medical management, both to improve quality of life and to alter remodeling.
CONFLICTS OF INTEREST
Guarantor of the article: Kristle L. Lynch, MD. Specific author contributions: K.L.L. was involved in study concept and design, acquisition of data, analysis and interpretation of data, drafting of the manuscript, and critical revision of the manuscript. A.J.B. was involved in the study concept and design, drafting the manuscript, and critical revision of the manuscript. B.G. was involved in study concept and design, analysis and interpretation of data, and critical revision of the manuscript. J.K. was involved in acquisition and interpretation of data, and critical revision of the manuscript. D.S. was involved in acquisition of data, and critical revision of the manuscript. B.W. was involved in acquisition of data, and critical revision of the manuscript. C.M.-K. was involved in study concept and design, analysis and interpretation of data, and critical revision of the manuscript. C.G. was involved in acquisition and interpretation of data, and critical revision of the manuscript. G.W.F. was involved in study concept and design and critical revision of the manuscript. A.B. was involved in study concept and design, acquisition of data, analysis and interpretation of data, drafting of the manuscript, and critical revision of the manuscript. No external companies had any part in the study design, data interpretation, data analysis, or the decision to submit the article for publication. To the best of our knowledge, no conflict of interest, financial or other, exists.
"year": 2023,
"sha1": "f7259952501c45a60eed32fcec5f7e1be5bd5db2",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.14309/ctg.0000000000000564",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6fcd8c19f9defbecc28feb7801d905df62a08b07",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
In situ analysis of intermediate structures in 2D materials growth: h-BN on Ru(0001)
Intermediate structures during the growth of hexagonal boron nitride (h-BN) are revealed through helium diffraction and first-principles calculations. We find that prior to the formation of h-BN from borazine molecules, a metastable (3 × 3) structure is formed, and that excess deposition on the resulting 2D h-BN leads to the emergence of a (3 × 4) structure. We attribute these findings to partial dehydrogenation and polymerisation of the borazine molecules upon adsorption. These steps are largely unexplored during the synthesis of 2D materials and the reported novel h-BN growth scheme means that different routes are likely to exist for other 2D materials as well. Our findings have implications for the wider class of chemical vapour deposition processes, with potential applications based on exploiting these intermediate structures for the synthesis of covalent self-assembled 2D networks.
Introduction
Two-dimensional (2D) materials such as graphene and hexagonal boron nitride (h-BN) offer technological promise 1,2 but their properties are highly dependent on the perfection of the 2D layers. For this reason, intense efforts have been devoted to study and improve the growth of defect-free 2D materials. 3,4 A promising method of synthesising large-area 2D layers is chemical vapour deposition (CVD) and the CVD synthesis of atomically thin h-BN on metal substrates is described in several review articles. 5,6 The process, which is illustrated in Figure 1, involves a gas-phase precursor deposited on a solid substrate at elevated temperatures. By diffusion and dehydrogenation or fragmentation of the precursor, the adsorbates are attached to growing clusters and eventually form the 2D layer. A complete dehydrogenation of the precursor requires overcoming multiple energy barriers. As a result, it might be expected that at intermediate temperatures, dehydrogenation would not be complete, which in-turn can result in metastable or intermediate structures. For the synthesis of bulk h-BN it is known that the process involves several steps of borazine-polymerisation. [7][8][9][10] There are several routes, but even in the bulk the process has not been studied in great detail. Here, we follow a series of structural changes to identify intermediate structures in 2D growth.
While it is crucial to understand the growth process, mechanistic and kinetic studies are rare and mostly focus on the growth of nanocarbons. [11][12][13] Dehydrogenation and intermediate structures during CVD of 2D materials have been proposed, [14][15][16][17][18][19] but to the best of our knowledge have not been studied experimentally. In general, the kinetics and the thermochemistry of intermediate products may lead to metastable structures. However, phase diagrams due to partially dehydrogenated precursors have not been reported. Most studies report completed overlayer structures, while the complexity of the individual steps, as illustrated in Figure 1, is often ignored. In particular, previous h-BN studies using real-space methods 20-23 concentrate on local order in completed h-BN structures, while reciprocal-space studies [24][25][26] have provided information about long-range order. 27 In this paper we present a systematic analysis at various temperatures beyond the ones reported for best growth conditions (1050 K-1100 K 20,22,25) and at various dosing rates. By following h-BN growth in situ using helium atom scattering (HAS) we demonstrate the existence of metastable structures during the formation of h-BN from borazine (B3N3H6). HAS is a well-established technique for monitoring thin-film growth modes [28][29][30] and has been used to study the quality of CVD-grown 2D materials [31][32][33][34] and interlayer interaction, 35 yet investigations of intermediate structures have not been performed. In particular, we find that there is one precursor structure with a well-defined (3 × 3) periodicity, meaning a well-defined route for the polymerisation reaction which leads to h-BN. We further find that by dosing excess borazine, a (3 × 4) structure forms, which could be attributed to a partially polymerised second layer on top of the formed h-BN.
Figure 1: Schematic illustrating the epitaxial growth of h-BN by chemical vapour deposition: a gaseous precursor (e.g. borazine, B3N3H6) is brought into contact with a (hot) catalyst surface (Ru), triggering chemical reactions such as breaking of the borazine rings and dehydrogenation, followed by the assembly of the epitaxial overlayer.
Our experimental results are complemented by van der Waals (vdW) corrected density functional theory (DFT) calculations which confirm the nature of the system, helping us to determine which self-assembled structures are compatible with the experimental results.
Results
The adsorption of the precursor gas (borazine) on the Ru substrate has been investigated in several other studies using Auger electron spectroscopy, X-ray photoelectron spectroscopy, electron energy loss spectroscopy and low energy electron diffraction. [36][37][38] There is general consensus in the literature that borazine only adsorbs molecularly at low (<140 K) temperatures, [37][38][39] with dehydrogenation setting in at temperatures of 150-250 K, depending on the substrate. 37,38,40 Starting from about 600 K, again depending on the metallic substrate, the B-N ring is reported to break down into its atomic constituents. 39,40 According to Paffett et al., 1000 K is necessary for h-BN formation on Ru(0001), 36,37 while hydrogen desorption occurs over a wide temperature range 38 and hydrogen may even intercalate in the h-BN layer. 41,42 Helium diffraction allows in situ measurements even at growth temperature, and is known for its unique sensitivity to adsorbates, including hydrogen atoms. [43][44][45][46][47][48][49][50][51] Furthermore, unlike other established techniques, 52 HAS is completely inert and does not modify the process under investigation. 53 While the specular reflection gives an estimate of the adsorbate coverage on the clean surface, the angular distribution provides insight into the time evolution of periodic structures being formed on the surface. 27,32 In the present work, CVD growth was performed at a set crystal temperature while monitoring the surface using repeated one-dimensional angular diffraction scans, where we observe the emergence and disappearance of additional superstructures followed by the formation of h-BN.
For ease of comparison, the diffraction scans for the different structures are plotted as a function of the parallel momentum transfer, ΔK, (see Experimental section) relative to the G01 peak of the Ru(0001) substrate, |ΔK/G01|. By converting the abscissa in this way, the position of the observed diffraction peaks directly reflects their periodicity with respect to the substrate lattice spacing.
Figure 2: The position (shown by the arrows at the top) and spacing of the additional peaks reveal a (3 × 3) / (3 × 4) superstructure, plotted in blue / green. Low exposure at lower temperatures reveals a (3 × 3) structure (blue curve, grown at 950 K), while at higher exposures and higher temperatures an intermediate (3 × 4) pattern emerges (green curve, grown at 1020 K). To improve the signal-to-noise ratio, the sample was subsequently cooled down for the duration of both scans and the blue curve was scaled by a factor of 3 to facilitate comparison.
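To make this abscissa normalisation concrete, a small sketch (our own illustration, with a made-up tolerance) that indexes normalised peak positions |ΔK/G01| to a superstructure period:

    import numpy as np

    def superstructure_period(peak_positions, max_n=8, tol=0.02):
        """Smallest integer n such that all normalised peak positions
        |dK/G01| lie close to integer multiples of 1/n."""
        peaks = np.asarray(peak_positions)
        for n in range(2, max_n + 1):
            if np.all(np.abs(peaks * n - np.round(peaks * n)) < tol * n):
                return n
        return None

    # Peaks observed between the specular and first-order Ru positions:
    print(superstructure_period([0.33, 0.66]))        # -> 3, i.e. (3 x 3)
    print(superstructure_period([0.25, 0.50, 0.75]))  # -> 4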
A precursor structure to h-BN growth
First, we describe how borazine exposure at low temperature (T < 880 K) reveals a precursor structure on Ru(0001), which by further annealing at T = 880 K can be converted to h-BN.
The purple line in Figure 2(a) shows a diffraction scan of the clean Ru(0001) substrate. Exposing Ru to 7 Langmuir (L) of borazine at a surface temperature of 600 K results in decreased diffraction intensities and helium reflectivity. Moreover, the lack of any additional diffraction peaks is typical of a disordered structure. 51 The behaviour is consistent with earlier studies showing that the B-N ring starts to break down into its constituents only above about 600 K. 36,39,40 Upon increasing the temperature to 750 K while maintaining the borazine overpressure, additional peaks start to appear between the specular and first-order Ru diffraction peaks. Figure 2(b) (blue curve) shows the characteristic diffraction pattern that emerges. Equidistant peaks at |ΔK/G01| = 0.33 and 0.66 indicate a (3 × 3) periodic structure on the surface, which we label BN_I. If dosing is performed at even higher temperatures (T ≥ 880 K), in addition to the observed BN_I structure, a shoulder appears on the right-hand side of the first-order Ru diffraction peak, indicating the formation of a h-BN structure on the surface (vertical red dotted line in Figure 2). The peak, which occurs at |ΔK/G01| = 1.08, is a result of the commensurate Moiré pattern on Ru 25 (see also h-BN periodicity and reconstruction, below).
Since a (3 × 3) superstructure composed of intact borazine molecules shows only weak binding to the substrate, the observed BN_I structure, as shown later in our DFT calculations, must be composed of partly dehydrogenated borazine molecules, in line with the reported low experimental dehydrogenation temperatures on other substrates. 40 Figure 3 illustrates the in situ monitoring of the integrated peak intensities, which demonstrates that the BN_I structure precedes the growth of h-BN. The exposure-dependent intensities are obtained from repeated angular diffraction scans. Immediately after dosing begins, the (3 × 3) peaks start to rise rapidly (blue line), while only after a short delay the h-BN diffraction peak increases (red line), although less quickly than the BN_I structure. The h-BN peak intensity reaches its maximum at the same point where the BN_I structure disappears. We conclude that the BN_I structure is converted into h-BN and acts as a precursor structure to the complete h-BN overlayer. Since the intensity of the BN_I structure drops to almost zero at ≈9 L, it indicates that virtually all of the BN_I structure has been consumed in the conversion. The temperature region where the BN_I structure evolves is in excellent agreement with the desorption temperature of hydrogen reported by Paffett et al., 36 which helps confirm the dehydrogenation process. Further, as mentioned earlier, bulk h-BN is known to form by a sequence of dehydrogenation processes, in which borazine polymerises to polyborazylene, which is then cross-linked in one or more steps. 8,9 Our results suggest that a similar process happens at the ruthenium surface, but that in the 2D case there is one clear intermediate step, i.e. the BN_I structure, before the formation of h-BN. There are several possible real-space structures which are in line with the observed periodicity and composition. The precursor imposes a B/N ratio of 1:1, which we do not expect to change. To chemically characterise the structure, X-ray photoemission spectroscopy (XPS) would typically be used, as for other CVD-grown h-BN. 37 However, it can be difficult to obtain chemical information regarding the hydrogenation state from XPS. We have therefore focussed on supporting our structural characterisation through detailed DFT modelling of appropriate candidate structures.
Figure 3: In situ monitoring of the integrated peak intensities reveals that the BN_I structure acts as a precursor to the h-BN overlayer. The characteristic diffraction peaks for the BN_I structure and the h-BN peak are plotted versus borazine exposure at a substrate temperature of 880 K. The BN_I structure increases prior to the h-BN intensity and has already disappeared when the h-BN intensity exhibits its maximum.
DFT structural modelling
We understand the BN_I structure to consist of partially dehydrogenated, polymerised borazine, analogous to the synthesis of bulk h-BN. Thus it is apparent that the 2D growth occurs in a step-by-step process and that not all hydrogen atoms are expected to be removed at the same time, due to the different bonding strengths to N and B atoms, as well as the bond formation between the N and the Ru atoms. Based on this attribution, we have made a vdW-corrected DFT investigation into the energetics of the BN_I structure, from adsorption of the precursor gas to the complete h-BN overlayer. We start by considering a single borazine molecule in a (3 × 3) supercell (see Computational methods), and move on to partially dehydrogenated borazine polymers. From the adsorption energies of isolated borazine molecules we observe that the bonding becomes much stronger with dehydrogenation, but the calculations cannot provide a definitive answer in terms of the dehydrogenation sequence (dehydrogenation of the B atoms is slightly more favourable than of the N atoms by ≈15 meV). However, as shown later for two borazine molecules per supercell, the N atoms can dehydrogenate more easily than the B atoms and the candidate structures for our observations can be clearly distinguished in terms of the adsorption energies.
Table 1: DFT results for the adsorption of borazine on Ru(0001), with the initial adsorption sites as in Figure 4(a). The adsorption energies E_ads are given for the final optimised adsorption sites and ΔE is the difference with respect to the minimum energy configuration of the system with the same dehydrogenation state.
In Table 1 we compare the binding energies from vdW-corrected DFT for an intact (B3N3H6) and a partially dehydrogenated (B3N3H3) borazine molecule, with one molecule per (3 × 3) supercell, confirming a much stronger bonding of B3N3H3. We consider various initial adsorption sites (Figure 4(a)) with respect to the C3 rotational axis through the centre of the molecule and a rotation of 60°. Adsorption occurs in a flat face-to-face configuration, while bonding of the same adsorbates with a rotation of 0° is slightly weaker; the results are shown in the Supplementary Information (see Supplementary DFT calculations).
In Table 1 the energy differences ΔE are given with respect to the minimum energy of the same dehydrogenation state, in addition to the respective adsorption energies E_ads. For both stoichiometric configurations the most favourable position is the fcc site, and if the borazine molecule is initially placed on a bridge site it undergoes a transition to this position. The fcc configuration for partially dehydrogenated borazine yields an adsorption energy of E_ads = −8.95 eV and is shown in Figure 4(a), with the (3 × 3) supercell highlighted by the black dashed rhombus. The results for the intact borazine molecule (B3N3H6) are very similar with respect to the adsorption site; however, we obtain significantly weaker bonding strengths compared to the dehydrogenated molecule.
Based on bulk h-BN studies we conclude that it is more likely that polymerised networks are formed. 8,9 Starting from the minimum energy configuration of a single borazine molecule on the fcc site, we continue by adding a second borazine molecule to the supercell. By considering various initial rotations of the additional molecule, the energetically most favourable configurations were then identified. In contrast to the case of an isolated borazine molecule, with two borazine molecules the dehydrogenation sequence becomes clearly discernible in terms of the adsorption energies. The N atoms in the ring adsorb on top of the Ru atoms and the borazine molecules lose all hydrogen atoms associated with the N atoms upon bond formation, in line with experimental results on the completed h-BN overlayer, where inter-layer bonding is facilitated via the N atoms. 5,20 Such a scenario is, however, different to bulk h-BN growth, where ruthenium is not present; interaction with the substrate may thus give rise to an even faster loss of hydrogen compared to bulk studies.
The calculations for two intact borazine molecules per supercell (2B3N3H6, not shown) yield weak binding, since the H atoms start to overlap, resulting in a tilt of the complete molecules with respect to the surface. Moreover, due to desorption of hydrogen atoms from the borazine at low temperatures it is unlikely that intact borazine will remain, and so intact molecules will not be considered further. 36 Therefore, we concentrate on partially and fully dehydrogenated borazine molecules. Figure 4(b,c) shows the final optimised structure for 2B3N3H3 per supercell, illustrating that individual borazine molecules form bonds to each other. The bound B-N rings build up a nanostructured network with nanopores, i.e. in between the B-N rings, vacancies/pores of bare Ru substrate are left behind. The high binding energy of the structure in Figure 4(b) (−6.28 eV, compared to −6.74 eV for the complete h-BN/Ru) may therefore explain the stability of the BN_I structure at temperatures of ≈750 K, as observed in the experiments.
The structure in Figure 4(b) acts as an intermediate prior to complete dehydrogenation, which is expected at elevated temperatures. The calculations show that first the hydrogen atoms detach from the nitrogen and bind to the Ru substrate on the hcp sites, inside the nanopores. The excess hydrogen adatoms inside the nanopores are likely to desorb relatively quickly at the temperature of the experiment. 54 Such an open structure could easily act as a precursor to the complete h-BN overlayer, since each pore only has to be "filled" with an additional dehydrogenated borazine molecule. Finally, the addition of further borazine molecules in the calculations, i.e. three per supercell, essentially leads to the formation of h-BN, which gives rise to the strongest binding energy in the calculations. The route from the precursor BN_I structure to the final h-BN overlayer, with several intermediate steps, is illustrated in Supplementary Figure 3.
In addition to providing us with real-space structures of the observed BN_I precursor, there are several points which we note from the vdW DFT calculations: dehydrogenation of borazine always gives rise to a stronger bonding to the substrate, and the results show that the thermodynamically most stable configuration for three adsorbed borazine molecules is h-BN (Supplementary Figure 2(b)). We also see from the side views in Figure 4 that some buckling always occurs (0.21 to 0.35 Å) and the adlayer is never perfectly flat. The results show that by carefully controlling the substrate temperature, and thus the amount of excess hydrogen, in future experiments several BN nanostructures could be synthesised, as shown for two cases in Figure 4(b,c). Moreover, careful changes of the starting conditions in the DFT calculations may even yield a "local" minimum energy configuration, as in Supplementary Figure 2(c). Thus the system may be an ideal playground for the growth of different nanostructures and further metastable networks beside the ones reported in this work.
Additional structures accompanying the h-BN growth
So far, we have described the formation of a BN_I structure at T ≥ 750 K, which is converted to h-BN at T ≥ 880 K. However, upon complete conversion of the BN_I structure to h-BN, exposing the surface to excess borazine results in the emergence of an additional structure with a (3 × 4) periodicity, which we label BN_II. The green line in Figure 2(b) illustrates the corresponding diffraction pattern, with the h-BN Moiré diffraction peak still present next to the first-order Ru peak. As shown in a two-dimensional diffraction scan in Supplementary Figure 5, the (3 × 4) peaks are not a subset of the h-BN Moiré pattern. In addition, a smaller peak to the left of the first-order Ru peak becomes visible, which can be attributed to a substrate reconstruction peak 25 due to the h-BN growth.
To monitor the growth of the BN_II structure we use a smaller borazine overpressure while holding the sample temperature at 915 K. Figure 5(a) shows the evolution of the BN_II and the BN_I structure as blue and green curves, respectively. Here, the red line is again the integrated peak intensity of the h-BN diffraction peak. Immediately after exposing the surface to borazine, the BN_I structure increases together with the h-BN peak. Further exposure leads to a decay of the BN_I structure, while the h-BN feature still rises, indicating the growth of h-BN islands. At 7 L the h-BN diffraction peak saturates, while at the same time the BN_I structure disappears. At this stage the h-BN overlayer is complete, and after further dosing of borazine the BN_II structure starts to emerge. As discussed later, this may be interpreted as a second layer being formed on top of h-BN.
The measurement was repeated at an even lower dosing pressure, while holding the sample at the lower temperature of 880 K. In Figure 5(b) the same behaviour is reproduced, yielding a h-BN layer with two additional structures, except that the emergence is delayed to higher exposures, thus indicating a kinetically driven conversion.
With continuing borazine exposure to 20 L in Figure 5(b), the BN_II structure reaches its maximum with no further changes in the scattered intensity. Together with the rise of the BN_II structure, the h-BN peak intensity slowly starts to decay, likely due to diffuse scattering from additional adsorbates at the surface or from domain walls of the BN_II structure. Increasing the surface temperature to 1000 K gives rise to a decay of the BN_II structure while the h-BN peak intensity starts to recover to its original value. Further temperature increase accelerates this process, giving rise to a faster transition/conversion until the intermediate peaks disappear, leaving behind only the h-BN layer. Such a behaviour illustrates that ultimately h-BN is the most stable structure. Even though the borazine overpressure was still present, no additional peaks formed and the h-BN overlayer is the only remaining structure at the surface.
Figure 5: Peak areas of the characteristic diffraction peaks representing the different structures versus borazine exposure. After the BN_I structure has disappeared, the h-BN peak saturates, giving rise to a conversion, and further exposure leads to the rise of the BN_II structure. Due to a higher substrate temperature of 915 K in (a), the BN_I structure disappears already after an exposure of ≈7 L, compared to 10 L at 880 K in (b), thus indicating a kinetically driven conversion. Dosing in (b) is then further continued with subsequent changes of the surface temperature as stated above the diagram. After long enough exposure the BN_II structure disappears, leaving a strong h-BN intensity behind.
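For readers who want to reproduce this kind of in situ analysis, a minimal sketch (our own illustration, not the authors' code) of extracting an integrated peak area above a linear background from a 1D diffraction scan:

    import numpy as np

    def integrated_peak_area(x, y, window):
        """Integrate a diffraction peak above a linear background.
        x, y: scan abscissa (e.g. |dK/G01|) and intensity; window: (lo, hi)."""
        lo, hi = window
        mask = (x >= lo) & (x <= hi)
        xs, ys = x[mask], y[mask]
        # Linear background interpolated between the window edges.
        slope = (ys[-1] - ys[0]) / (xs[-1] - xs[0])
        background = ys[0] + slope * (xs - xs[0])
        return np.trapz(ys - background, xs)

    # Synthetic test: Gaussian peak at 0.33 on a sloping background.
    x = np.linspace(0.2, 0.5, 300)
    y = 1.0 + 0.5 * x + 2.0 * np.exp(-((x - 0.33) / 0.01) ** 2)
    print(integrated_peak_area(x, y, (0.28, 0.38)))  # ~0.035, the peak area only

Repeating this for each scan during dosing yields exposure-dependent intensity curves like those shown in Figures 3 and 5.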
From Figure 5(b) it becomes evident that, in contrast to the BN_I structure, the BN_II structure is much more stable at higher temperatures, since the (3 × 4) diffraction peaks are observed up to 1000 K. A further increase of the temperature to ≈1200 K gives rise to the surface migration of bulk-dissolved carbon, leading to the formation of graphene (see Supplementary diffraction scans) and thus eventually destroying the h-BN overlayer.
The latter may open up the possibility to study the growth of h-BN/graphene heterostructures 55-58 but is beyond the scope of the current study.
From our experiments it is likely that the BN_II structure is a second chemisorbed layer on top of the already grown h-BN. Earlier works on an Ir(111) substrate showed the evolution of additional compact reconstructed regions with a (6 × 2) superstructure, which were attributed to reconstructed boron areas. 59 On the other hand, CVD growth on polycrystalline Cu provided evidence for boron dissolution into the bulk together with multilayer h-BN formation via intercalation. 60 However, both systems and studies are significantly different from our approach. For example, the different behaviour in the first study could be due to changes of both the lattice constant and the h-BN-substrate bonding between Ru and Ir.
The BN_II structure, as a second chemisorbed layer, consists of partly or completely dehydrogenated borazine molecules with a desorption temperature slightly above our performed measurements, since we see an adsorption/desorption equilibrium at temperatures ≲1000 K with ultimate desorption at temperatures above this value. The existence of such a structure might be a precursor to multilayer synthesis if the original h-BN layer is of poor quality, providing a high density of growth nuclei and thus explaining the reports of multilayer growth. [60][61][62][63] In a set of additional DFT calculations, summarised in Supplementary Table 2, we also considered the possibility of borazine adsorption on top of h-BN/Ru as well as the formation of bi-layer h-BN. 60-63 However, we can rule out the latter according to our deposition measurements, since we do not detect oscillations of the BN_II structure or any other periodicity that would be indicative of multilayer h-BN growth. In line with the multi-stage process of h-BN bulk formation, it is more likely that the BN_II structure consists of adsorbed molecules or polymerised borazine structures, with a weaker bonding compared to the first h-BN layer and therefore more likely to desorb. Further unlikely scenarios are discussed in the Supplementary discussion.
h-BN growth diagram on Ru(0001)
The combination of measurements and DFT calculations allows us to conclude that the whole system passes through various structural phases, with the outcome depending strongly on substrate temperature, borazine exposure, and the point where one stops. In particular, the surface temperature strongly influences the kinetics and thus the duration and appearance of the additional superstructures. Combining the experimental results, we derive a growth diagram as shown in Figure 6, which describes the phenomenology of the various structures arising during the CVD growth of h-BN on Ru(0001).
Below 750 K no periodic overlayer structure on the Ru(0001) surface is found. Between 750 and 880 K the BN_I structure forms on the surface, which upon further borazine exposure vanishes and leaves a disordered phase behind. The minimum temperature to form a h-BN overlayer on the surface was determined to be 880 K. Above this temperature we observe additional structures, starting with a (3 × 3) structure (BN_I) followed by a (3 × 4) periodic diffraction pattern (BN_II). These structures always appear in addition to the h-BN layer and ultimately vanish, leaving a complete h-BN overlayer behind (see Figure 2(a) for a diffraction scan of a complete h-BN overlayer without any additional structures). We have thus identified two kinetic barriers which need to be overcome in order to form ordered structures on the Ru substrate: a temperature of 750 K is necessary for the precursor structure to form, while at 880 K the h-BN formation sets in. As mentioned above, the BN_I precursor structure is always present; however, with increasing temperature its transformation into h-BN becomes faster.
h-BN periodicity and reconstruction
Finally, we illustrate that the h-BN periodicity and superstructure are strongly dependent on the experimental parameters, in particular the growth temperature. It is well known that h-BN forms a Moiré pattern on the Ru(0001) surface 20,22 due to the small lattice mismatch between a_h-BN = 2.505 Å and a_Ru(0001) = 2.706 Å. 64,65 At room temperature, such a mismatch results in a superstructure where 13 unit cells of h-BN coincide with 12 unit cells of Ru: (13 × 13) on (12 × 12). On the other hand, previous studies on a similar substrate showed that the h-BN overlayer and the substrate lock in at the growth temperature, with the strong interlayer bonding causing the superstructure ratio to remain constant after cooling back down. 66 We show that the same holds for different growth temperatures of h-BN on Ru(0001). Detailed diffraction scans around the h-BN (01) peak in Figure 7 illustrate that for a h-BN synthesis at 1020 K (blue curve), the h-BN peak at |ΔK/G01| = 1.067 fits a superstructure ratio of 16/15 perfectly, as shown by the green vertical dash-dotted line. Upon growing the h-BN overlayer at a lower temperature of 900 K (cyan curve), the h-BN peak appears at a ratio of 13/12. The small peaks to the left of the first-order Ru peak in Figure 7 originate from the surface reconstruction, with a 14/15 and an 11/12 ratio, respectively. These reconstruction peaks can only arise if the system exhibits a true commensurate superstructure. 66,67
Figure 7: Diffraction scans of the h-BN periodicity illustrate that the exact superstructure of the overlayer depends on the growth temperature, with the blue scan for h-BN grown at 1020 K and the dashed cyan curve for h-BN grown at 900 K. The h-BN peaks on the right-hand side of the Ru peak at |ΔK/G01| = 1 show that h-BN adopts a larger superstructure with increasing growth temperature: due to the decreasing lattice mismatch, the overlayer adopts a 16/15 ratio versus a 13/12 ratio at 900 K. The "lock-in" effect (see text) is confirmed by the small (substrate) reconstruction peaks on the left-hand side. For better identification of the peaks, a linear background was subtracted from the untreated data and the sample was subsequently cooled down to room temperature for the duration of the scan.
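As a quick consistency check (our own arithmetic, using the room-temperature lattice constants quoted above), an n-on-(n − 1) superstructure satisfies the coincidence condition and produces a first-order h-BN peak at

    n \, a_{\mathrm{h\text{-}BN}} \approx (n - 1) \, a_{\mathrm{Ru}},
    \qquad
    \left| \Delta K / G_{01} \right| = \frac{n}{n - 1}

For n = 13: 13 × 2.505 Å = 32.57 Å ≈ 12 × 2.706 Å = 32.47 Å, with a peak at 13/12 ≈ 1.083. For n = 16 the room-temperature numbers match less well (16 × 2.505 Å = 40.08 Å vs 15 × 2.706 Å = 40.59 Å), consistent with the statement that the 16/15 structure locks in at the higher growth temperature, where the lattice mismatch is reduced.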
Our HAS measurements show a strong temperature dependence and thus a strong "lock-in" effect, with additional details and calculations for the superstructure ratio given in Supplementary details about the h-BN superstructure. Compared to X-ray diffraction, where a commensurate 14-on-13 superstructure was reported, 25 we see that only h-BN growth at lower temperature (900 K with a borazine exposure of 15 L), followed by a slow subsequent cooling, provides a 13-on-12 superstructure, similar to previous studies. 20 Moreover, compared to the h-BN/Rh(111) system, 68 the bonding strength of the N atoms to the Ru substrate is predicted to increase, and thus one expects a stronger "lock-in" effect on Ru, as observed above. Furthermore, since HAS is strictly surface sensitive, our results can be interpreted as scattering that stems solely from the h-BN nanomesh, while other methods may contain contributions from the substrate structure. For example, a coincidental overlay of a flat h-BN monolayer on a completely flat Ru substrate would not give rise to a diffraction pattern as shown in Figure 2(a) and Figure 7. Together with the additional structures reported above, this confirms the complexity of the whole system and its dependence on minute changes of the growth parameters.
Conclusion
In summary, we investigated the growth of h-BN on a Ru(0001) substrate using helium atom scattering. Employing various growth conditions, characteristic periodic structures are measured during borazine exposure in addition to the h-BN diffraction peak, as outlined in the diagram of Figure 6. Between 750 and 880 K a structure with (3 × 3) periodicity that precedes the growth of h-BN is observed, with the minimum temperature necessary to form a h-BN overlayer being 880 K. Above this temperature, in addition to the emerging h-BN layer, we observe additional structures with a (3 × 3) superstructure followed by a (3 × 4) diffraction pattern, eventually disappearing and leaving a complete h-BN overlayer behind.
It is clearly evident from our observations that a precursor structure precedes the growth of h-BN at lower temperatures and an additional structure co-exists with h-BN at higher temperatures. Both are strongly dependent on the growth conditions, but always transform into a fully h-BN covered substrate at sufficiently high temperatures, thus confirming that the latter is the thermodynamically most stable structure. Our study of the structural evolution during the arrangement of h-BN from the precursor gas illustrates steps in the formation process itself and we hope to encourage future studies linking our structural information with chemical characterisation.
We believe that these intermediate metastable structures may be present in many more systems where 2D materials are grown based on precursor-based CVD, at least at lower temperatures and for higher amounts of excess hydrogen compared to the "ideal" growth conditions. In the case studied here, they ultimately always transform into the complete 2D layer -and thus usually higher temperatures are reported as the "ideal" growth conditions for h-BN in the literature.
These intermediate structures seem to have been largely overlooked so far, possibly because they are difficult to detect: the structural advent of 2D materials is often not investigated during the growth itself, or is only accessible ex situ. More importantly, with increasing growth temperature the transformation to h-BN may occur so fast that they are easily missed. 23 The strong dependence of the emergence of these structures on temperature and exposure suggests that further uncovered "routes" and polymerisation steps are viable, and the system may present an ideal playground for realising different nanostructures. It further suggests that a careful tuning of the growth conditions via temperature and excess hydrogen from the precursor may provide new, broadly applicable strategies for controlling the growth of specific nanostructures. Additional possibilities involve changing the substrate or the precursor gas, and hence tuning the thermochemistry of the surface-adsorbate complex, which may further alter the subsequent reaction pathway. For example, by changing the substrate the metal-N bond strength may be tuned, since one expects the bonding strength to increase as one moves from right to left in the transition metal series. We hope that the wide-ranging implications for a controlled growth of 2D materials and nanostructures will stimulate a broad range of new research, understanding and applications.
Methods
Experimental section
All experimental data were obtained with the Cambridge spin-echo apparatus, which uses a nearly monochromatic atomic beam of 3He. The helium atoms scatter off the sample with a total scattering angle of 44.4° and an incident energy of 8 meV (see also Supplementary experimental details). The parallel momentum transfer ΔK is given by

ΔK = k_i (sin ϑ_f − sin ϑ_i),

with k_i being the incident wavevector and ϑ_i and ϑ_f the incident and final angles with respect to the surface normal, respectively. A more detailed description of the apparatus can be found elsewhere. [69][70][71] Compared to techniques such as scanning tunnelling microscopy (STM), HAS averages over larger surface areas, typically ≈3 mm². Therefore, the advantage of HAS is to give precise information about any long-range periodicity of surface structures.
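A short illustrative script (our own, not part of the original work) evaluating this relation for the fixed-geometry setup described above; the 3He mass is a standard value, and the convention ϑ_i + ϑ_f = 44.4° is an assumption about the fixed source-detector geometry:

    import numpy as np

    HBAR = 1.054571817e-34   # J s
    AMU = 1.66053907e-27     # kg
    MEV = 1.602176634e-22    # J per meV
    M_HE3 = 3.016 * AMU      # mass of a 3He atom

    def incident_wavevector(energy_mev):
        """Incident wavevector k_i in 1/Angstrom for a given beam energy."""
        k = np.sqrt(2.0 * M_HE3 * energy_mev * MEV) / HBAR  # in 1/m
        return k * 1e-10

    def delta_K(theta_i_deg, energy_mev=8.0, total_angle_deg=44.4):
        """Parallel momentum transfer dK = k_i (sin(theta_f) - sin(theta_i))."""
        k_i = incident_wavevector(energy_mev)
        theta_i = np.radians(theta_i_deg)
        theta_f = np.radians(total_angle_deg) - theta_i
        return k_i * (np.sin(theta_f) - np.sin(theta_i))

    # Ru(0001): |G01| = 4*pi / (sqrt(3) * a) for a hexagonal surface lattice.
    a_ru = 2.706  # Angstrom
    G01 = 4.0 * np.pi / (np.sqrt(3.0) * a_ru)

    for th in (22.2, 15.0, 5.0):  # specular, plus two off-specular angles
        print(th, delta_K(th) / G01)

With an 8 meV beam, k_i ≈ 3.4 Å⁻¹, and the printed values of |ΔK/G01| land near 0, 0.29 and 0.70, i.e. in the region where the (3 × 3) peaks appear.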
The Ru(0001) surface was cleaned by Ar sputtering and annealing to 1300 K, with subsequent O2 treatment of not less than 20 L at 700 K. The adsorbed O2 was removed by repeated flashing cycles to 1200 K. The cleanliness of the sample was confirmed by helium reflectivity measurements and diffraction scans showing no features of adsorbed species. After reaching reflectivities of ≈23% the sample was ready for the various dosing conditions. h-BN overlayers were removed by oxygen treatment at a sample temperature of 900 K, followed by the cleaning procedure explained above. Borazine was supplied to the sample by backfilling the chamber through a leak valve, with typical overpressures between 1 × 10⁻⁹ and 5 × 10⁻⁸ mbar.
Computational methods
For the DFT calculations we employed CASTEP, 72 a plane-wave periodic boundary condition code. The plane-wave basis set was truncated at an electron energy cut-off of 400 eV and we employ Vanderbilt ultrasoft pseudopotentials. 73 The Brillouin zone was sampled with a (4 × 4 × 1) Monkhorst-Pack k-point mesh. 74 The Perdew-Burke-Ernzerhof exchange-correlation functional 75 was applied in combination with the Tkatchenko and Scheffler dispersion correction method. 76 The Ru(0001) surface was modelled by a 5-layer slab in a (3 × 3) supercell, with an additional 15 Å vacuum layer separating the periodically repeated supercells in the z-direction. Positions of the atoms in the adsorbate and in the top three layers of the Ru substrate were left fully unconstrained. For the structural optimisations, the force tolerance was set to 0.05 eV/Å.
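For orientation, a hedged sketch of CASTEP input files consistent with the settings above (keyword spellings follow common CASTEP usage and should be checked against the version used; this is not the authors' actual input):

    # seed.param -- illustrative only
    task            : GeometryOptimization
    xc_functional   : PBE
    cut_off_energy  : 400 eV
    sedc_apply      : true
    sedc_scheme     : TS        # Tkatchenko-Scheffler dispersion correction
    geom_force_tol  : 0.05 eV/ang

    # seed.cell -- fragment
    kpoint_mp_grid 4 4 1
    # 5-layer Ru(0001) slab in a (3 x 3) supercell with ~15 Angstrom vacuum;
    # the adsorbate and top three Ru layers are free, the bottom layers fixed
    # via ionic constraints.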
The adsorption energies E_ads are defined to be

E_ads = E_tot(x + n y) − E_tot(x) − n E_tot(y),

where E_tot(x + n y) is the total energy of the system, E_tot(x) is the energy of the substrate, E_tot(y) is the energy of the adsorbate, and n is the number of adsorbed molecules. The more negative E_ads, the more thermodynamically favourable it is for the species to adsorb. In order to compare the intermediate structures with a different number of atoms, we calculate the binding energy E_bin relative to Ru(0001) + 3 borazine molecules (3 borazine molecules are needed to form h-BN on a (3 × 3) cell) by appropriately adding or subtracting the energy of H2 and borazine in the gas phase, to preserve stoichiometry:

E_bin = [E_tot + n_H2 E_tot(H2) + n_BZ E_tot(BZ)] − [E_tot(Ru) + E_tot(3BZ)],

where E_tot is the total energy of the system, E_tot(H2) and E_tot(BZ) are the energies of H2 and borazine which remain in the gas phase, respectively (with n_H2 and n_BZ counting those molecules), and E_tot(Ru) and E_tot(3BZ) are the total energies of pristine Ru(0001) and of 3 borazine molecules in the gas phase. The more negative E_bin, the stronger the binding and the more thermodynamically favourable it becomes for the species to form.
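A minimal sketch (illustrative only; the energies below are placeholders, not the published DFT numbers) of the stoichiometry bookkeeping implied by this definition:

    def binding_energy(e_system, n_h2_gas, n_bz_gas, e_h2, e_bz, e_ru_slab):
        """E_bin relative to Ru(0001) + 3 borazine molecules.
        n_h2_gas / n_bz_gas: H2 and borazine molecules left in the gas phase
        so that B, N and H atom counts balance three borazine units."""
        reference = e_ru_slab + 3.0 * e_bz
        return (e_system + n_h2_gas * e_h2 + n_bz_gas * e_bz) - reference

    # Example: the BN_I precursor (2 x B3N3H3 on Ru) leaves one borazine and
    # three H2 in the gas phase relative to the 3-borazine reference.
    # All total energies are made-up placeholders in eV.
    print(binding_energy(e_system=-1000.0, n_h2_gas=3, n_bz_gas=1,
                         e_h2=-31.7, e_bz=-320.0, e_ru_slab=-330.0))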
Associated Content
The Supporting Information is available free of charge on the ACS Publications website.
In situ analysis of intermediate structures in 2D materials growth: h-BN on Ru(0001).
Ruckhofer et al.
Supplementary experimental details
All measurements were performed with the 3He spin-echo apparatus at the Cambridge Atom Scattering Centre. A schematic of the scattering chamber in the experimental setup is shown in Supplementary Figure 1.
The helium beam is produced by supersonic expansion of 3He gas through a nozzle and enters the scattering chamber through a series of differential pumping stages. The incident helium beam is scattered off the sample, which is, together with a sample holder, mounted on a 6-axis manipulator. 1 Atoms travelling in a particular outgoing direction pass along the second arm of the instrument, at 44.4° total scattering angle, and are then ionised and counted in a high-sensitivity mass-spectrometer detector. The incidence angle, ϑ_i, with respect to the surface normal can be varied to control the momentum transfer on scattering. The Ru sample can be heated radiatively and by electron bombardment from the backside of the crystal, and cooled via a thermal connection to a liquid nitrogen reservoir. The entire beamline is held at high vacuum to avoid any attenuation of the helium beam, and the sample and detector chambers require ultra-high vacuum levels to maintain the cleanliness of the sample and a low 3He background. The dosing was performed by backfilling the scattering chamber with borazine vapour, with borazine as provided by Katchem. The dosing rate was monitored via the chamber pressure, with typical overpressures between 1 × 10⁻⁹ and 5 × 10⁻⁸ mbar. At stages where borazine was not used for dosing, the container was held at temperatures below 0 °C.
Supplementary Figure 1: Schematic of the experimental setup in the helium scattering chamber. The incoming He beam is scattered off the ruthenium (Ru) sample in a fixed source-detector configuration with an angle of 44.4°. The sample is mounted onto a 5-axis manipulator and can be exposed to borazine via dosing through a leak valve.
Supplementary DFT calculations
The energetically most favourable adsorption site for a single intact borazine molecule per (3 × 3) supercell on Ru(0001), according to DFT calculations, is shown in Supplementary Figure 2 with the corresponding interatomic distances. In (c) and (d) the optimised geometry for three partially dehydrogenated borazine molecules is illustrated, which essentially forms a hydrogenated version of h-BN/Ru(0001). Here (c) represents a "local" minimum, as also seen from Supplementary Table 2. The hydrogen atoms stick out of the surface, yielding a high corrugation, and hence the buckling will be different once the structure is dehydrogenated, such as for a complete h-BN layer with its corrugation reflecting the Moiré pattern.
In addition to the calculations for one borazine molecule given in the main text, we show the results for the intact and partly dehydrogenated molecule with an initial rotation of 0° in Supplementary Table 1. When comparing the results we now see that in this case the hcp site is energetically most favourable, with an adsorption energy of E_ads = −8.85 eV. If the borazine molecule is initially placed on a bridge site it undergoes a transition to the hcp position. Still, the results for the 60° rotation are energetically more favourable by ≈0.1 eV.
Supplementary Table 1: DFT calculations for the adsorption structures of the borazine precursor on Ru(0001), based on a (3 × 3) supercell with one molecule per cell. The results are shown for an intact (B3N3H6) and partially dehydrogenated (B3N3H3) adsorbate, considering various initial adsorption sites and a rotation of 0°. The adsorption energies E_ads are given for the final optimised adsorption site and ΔE is the difference with respect to the minimum energy configuration of the system with the same dehydrogenation state.
Supplementary Figure 2(c,d) shows that calculations considering three partially dehydrogenated borazine molecules on Ru(0001) result in a structure similar to h-BN, except for the fact that the H atoms remain attached to the boron/nitrogen atoms. For comparison, Supplementary Figure 2(b) depicts the optimised structure for h-BN/Ru(0001).
Supplementary Table 2 illustrates that hydrogenation of the h-BN overlayer becomes thermodynamically unfavourable due to the correction with respect to molecular hydrogen in the gas phase and the high binding energy of the latter. The result is in line with hydrogenation experiments of metal supported h-BN, where atomic hydrogen exposure is required in order to facilitate the hydrogenation. 2 Interestingly, in contrast to h-BN/Ni(111), 2 H adsorption on top of the N-site is slightly more favourable than on top of the boron site for h-BN/Ru(0001) as can be seen from the adsorption energy per hydrogen atom.
From the side view in Supplementary Figure 2(c,d) it becomes evident that the closest atom to the Ru substrate and the bond length change, depending on whether nitrogen or boron remains hydrogenated. In Supplementary Figure 2(d) the hydrogen atoms appear to "pull" the boron away from the surface by 0.5 Å and the sp² hybridised bonds to nitrogen gain more sp³ character. Therefore the boron atom moves away from the surface to optimise these bonds, forming a tetrahedral geometry (bond angle 106°). Likewise the nitrogen binds to the Ru orbitals, thus moving closer to the surface. If hydrogen desorbs from this structure, pure h-BN is formed, as seen in Supplementary Figure 2(b). The boron-nitrogen bonds become stronger and therefore boron moves 0.5 Å towards the Ru, to be in the same plane as the nitrogen. In addition, the nitrogen orbitals are populated from the boron and the nitrogen-Ru interaction is weakened, resulting in a movement of the nitrogen atoms 0.11 Å away from the Ru surface. For pure h-BN on Ru, the boron atoms are positioned only slightly lower than the nitrogen atoms (0.14 Å). This may reflect the gain in stability from Ru-B bonding when boron is moved slightly into the hole site, compared to maintaining perfect sp² hybridised bonds. As mentioned in the main text, we also considered borazine adsorption on top of h-BN/Ru as well as the formation of bi-layer h-BN. The physisorption energies are shown in the lower part of Supplementary Table 2, illustrating that both are thermodynamically favourable, with a stronger physisorption energy for a second h-BN layer on top of h-BN/Ru. On the other hand, the corresponding binding energy for a single complete h-BN layer is −6.74 eV upon formation from 3 borazine molecules per supercell on Ru(0001). In the following we consider a possible route to the complete h-BN layer starting from the precursor structure described in the main paper; Supplementary Figure 3 shows the route through various steps based on DFT calculations. The first (precursor) structure is strongly bound, with a binding energy E_bin (see Computational methods) of −6.28 eV in relation to the bare Ru surface and the molecules in the gas phase. The next step towards h-BN formation involves dehydrogenation. The calculations show that if the three hydrogen atoms are detached only from the boron atoms, they eventually reattach to the same boron sites. Therefore, initially the hydrogen atoms are detached from the nitrogen atoms and adsorb on the Ru substrate within the nanopores, yielding a binding energy of −3.27 eV. At sufficient surface temperature eventually all hydrogen atoms will desorb from the surface, yielding the third structure with a less favourable energy of 1.26 eV. If the nanopore is now filled with one additional borazine molecule, h-BN is formed, yielding the lowest binding energy (−6.74 eV). Therefore we conclude that the first structure is nearly as stable as h-BN and that on the route to h-BN several energy barriers have to be overcome. It should be mentioned, however, that the calculations were performed at 0 K and that no entropy contributions were considered.
Supplementary Figure 3: Schematic of the possible route from the partially dehydrogenated precursor structure to h-BN via several intermediate structures, based on vdW-corrected DFT calculations. As noted in the text, the precursor structure is already quite close in binding energy to the complete h-BN layer. In contrast to the DFT calculations, where entropy contributions were not considered, additional dehydrogenation and bond breaking may occur due to the high experimental temperatures.
Supplementary diffraction scans
h-BN, sometimes also called "white graphene", typically forms a Moiré pattern on the surfaces of reactive transition metals such as Rh(111) or Ru(0001), as mentioned in the main text. The two-dimensional h-BN layer on such surfaces exhibits periodic nanometric structures, often called a "nanomesh", with areas which are elevated from the surface and areas closer to the surface. In Supplementary Figure 4 the characteristic diffraction pattern of the clean Ru sample (green) is compared to two overlayers on the same substrate. The scans of the single-layer graphene and h-BN covered Ru show additional peaks close to the specular and first-order Ru diffraction peaks. The blue curve depicts the scattering result for a graphene monolayer on Ru, which has been studied extensively in earlier works. 3,4 The graphene layer was grown by heating the Ru crystal to 1250 K for several minutes. Leaving the crystal at such high temperatures brings the carbon out of the bulk, which then forms the honeycomb single-layer graphene sheet. Graphene forms a (12-on-11) superstructure in which a (12 × 12) supercell of graphene coincides with a (11 × 11) supercell of ruthenium, giving rise to additional diffraction peaks at |ΔK/G01| = 1/11 ≡ 0.09 and |ΔK/G01| = 12/11 ≡ 1.09.
The diffraction pattern for h-BN on the Ru substrate is depicted in red in Supplementary Figure 4, where the nanomesh gives rise to additional diffraction peaks. 5 Indeed, the feature originating from the h-BN nanomesh to the right of the Ru diffraction peak shifts to smaller values of |ΔK/G01| with respect to graphene, corresponding to a bigger supercell.
The scans for pure Ru and graphene in Supplementary Figure 4 were performed at a sample temperature of T = 550 K, while the scan of h-BN was taken at 248 K. Due to thermal expansion, the position of the first-order substrate (Ru) peak in the h-BN scan deviates from that in the other two measurements, as indicated by the purple line. In all scans the specular peak (at |ΔK/G01| = 0) was cut off due to its high intensity, and the first-order diffraction peak of the Ru surface corresponds to |ΔK/G01| = 1.
In addition Supplementary Figure 4 clearly shows that the background intensity between the Ru diffraction peaks is much lower for the clean Ru crystal, indicating less inelastic and diffuse scattering. In both diffraction scans of h-BN and graphene the background increases by two orders of magnitude due to the increase of diffuse scattering. In addition, adlayers change the corrugation at the surface which is probed by the He atoms. X-ray studies showed that the peak-to-peak corrugation height of graphene is (0.82 ± 0.15) Å, whereas for the uppermost Ru atomic layer it is (0.19 ± 0.02) Å. 6
Supplementary Figure 5
Performing a two-dimensional (2D) scan confirms that the diffraction peaks in the 1D angular diffraction scan of Figure 2(b) in the main text are correctly assigned to a (3 × 4) periodicity and cannot be explained as a subset of another periodicity or as domains with different rotations. We therefore performed diffraction scans at various azimuthal orientations, since the BN II structure has very distinct diffraction peaks in the high-symmetry direction as well as along other azimuthal orientations. By rotation of the azimuthal angle of the sample a 2D plot in reciprocal space can be created (see Supplementary Figure 5). The green cross marks the Ru diffraction peak while the red circles indicate the calculated positions of the (3 × 4) structure peaks. In the top panel three exemplary diffraction scans at specific azimuthal angles ϕ are depicted. Small angles close to the specular peak are not shown due to their high intensity in all scans. The identification of the peaks verifies that the (3 × 4) structure is present in addition to the h-BN layer on the surface and cannot be explained, e.g., as being part of another superstructure or as rotated domains of a (3 × 3) structure.
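The expected peak pattern in such a 2D reciprocal-space map can be sketched directly from lattice geometry. The following illustration assumes a hexagonal Ru(0001) surface cell and a commensurate (3 × 4) supercell spanned by 3a1 and 4a2 — a simplified assumption for demonstration, not a fit to the measured data:

```python
import numpy as np

# Sketch: reciprocal-space positions (|K| in units of G01, azimuth phi) expected
# for a commensurate (3 x 4) superstructure on a hexagonal surface lattice.

a1 = np.array([1.0, 0.0])
a2 = np.array([0.5, np.sqrt(3) / 2])  # hexagonal surface cell (arbitrary units)

def reciprocal(b1, b2):
    """Rows g1, g2 with g_i . b_j = 2*pi*delta_ij for a 2D lattice."""
    B = 2 * np.pi * np.linalg.inv(np.array([b1, b2]).T)
    return B[0], B[1]

g1, _ = reciprocal(a1, a2)            # substrate reciprocal vector (sets G01)
s1, s2 = reciprocal(3 * a1, 4 * a2)   # assumed (3 x 4) superstructure vectors
G01 = np.linalg.norm(g1)

for h in range(3):
    for k in range(3):
        if h == k == 0:
            continue
        K = h * s1 + k * s2
        phi = np.degrees(np.arctan2(K[1], K[0])) % 360
        print(f"(h,k)=({h},{k}): |K|/G01 = {np.linalg.norm(K)/G01:.3f}, phi = {phi:5.1f} deg")
```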
Supplementary discussion
In the following we discuss further scenarios of the BN II structure. As mentioned in the main text, the surface temperature strongly influences the kinetics and thus the duration and appearance of the additional superstructures. At temperatures above 1000 K the (3 × 4) structure (BN II ) slowly vanishes (see Figure 5 in the main text) which leads to the assumption that either strongly bound atoms/molecules desorb into the gas phase or convert into another structure. As mentioned earlier the dehydrogenation of borazine already starts at lower temperatures 7 leading to the assumption that the adsorbed species on Ru(0001) are at least partly dehydrogenated.
In the following we provide several scenarios for the origin of the (3 × 4) structure and discuss their plausibility. The results could be interpreted as if borazine converts upon adsorption to both h-BN and a (3 × 3) structure (BN I ). However, given the results which are reported in the main paper, it is clear that borazine only adsorbs in a (3 × 3) superstructure, and at 880 K a (relatively fast) conversion to h-BN occurs. The h-BN and BN I structures grow together until the BN I reservoir is depleted, and no more h-BN is created. At this point we can conclude that the (3 × 4) (BN II ) structure is not a precursor to h-BN and is also not converted from the BN I structure. Since the (3 × 3) peaks degrade completely, the rise of the BN II structure does not compete with the conversion of the BN I structure to h-BN.
When looking at Figure 5 in the main paper one might also think that after the BN I structure vanishes and the h-BN peak saturates, the h-BN monolayer is complete and the additional borazine exposure gives rise to a second layer being formed. This layer could consist of partly dehydrogenated borazine forming a periodic structure on top of the existing h-BN layer. According to the literature, the CVD process for h-BN growth is usually considered to be self-terminating after a single layer; some works have shown that multilayers can form, 8 but these typically require different growth approaches. [9][10][11][12] As described in the main paper, from our experimental observations we can rule out multilayer h-BN growth, and we ascribe the BN II structure to a second chemisorbed layer on top of h-BN.
Another possible scenario would be the growth of a superstructure in-between the already grown h-BN islands. As mentioned in the main manuscript, an earlier work investigated the CVD growth of h-BN on Ir(111) and identified a (6 × 2) superstructure in-between the h-BN islands. 13 A similar behaviour could lead to the formation of a (3 × 4) structure in-between the h-BN islands on Ru. Upon further borazine exposure this intermediate structure eventually converts into h-BN, which connects the previously formed h-BN islands.
However, the areas which formed under this condition are less stable, since they convert back to a (3 × 4) structure upon heating of the sample (see the phenomenological cycle equation in the main paper). Upon further annealing of the surface, the structures in-between the stable h-BN islands eventually desorb from the surface, leaving behind some h-BN islands.
Supplementary details about the h-BN superstructure
Looking at the thermal expansion coefficients of bulk h-BN and the Ru(0001) surface gives a rough estimate of the temperature at which the 13/12 superstructure is favourable. The thermal expansion of bulk h-BN was taken from the literature. 14 Here the lattice constant for Ru, a Ru = 2.706 Å, was taken at a surface temperature of 293 K, with 297 K for a h−BN , hence the subtraction of these values. The Ru thermal expansion is depicted in the upper left panel of Supplementary Figure 6, while the slope of bulk h-BN is shown as a blue line in the lower left panel. In addition, the thermal expansion for a single monolayer (ML) of h-BN as calculated by Thomas et al. 16 is drawn in orange. Taking the ratio of the values for h-BN and Ru then yields the expected superstructure at a given surface temperature, as shown in the right panel of Supplementary Figure 6. The expected fraction of 13/12 nicely fits the value of 900 K when using the bulk value of the thermal expansion. | 2022-01-19T02:16:07.668Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "0585bd57d2450c96a26739dbd8bf57232f8642c6",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "0585bd57d2450c96a26739dbd8bf57232f8642c6",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Physics"
]
} |
53086461 | pes2o/s2orc | v3-fos-license | First report of carbapenem-resistant Providencia stuartii in Saudi Arabia
We present the case of a 31-year-old man who developed hospital-acquired pneumonia in the intensive care unit. The pathogens were identified as carbapenem-resistant isolates of Providencia stuartii and Klebsiella pneumoniae. The patient was treated with an extended infusion of double-dose meropenem (targeting the carbapenem-resistant P. stuartii) and colistin (targeting the carbapenem-resistant K. pneumoniae) for 2 weeks. The patient's disease responded well to the prescribed regimen; his chest X-ray became normal, and all other signs of infection subsided. To our knowledge, this is the first description of the emergence of carbapenem-resistant P. stuartii due to AmpC hyperproduction in Saudi Arabia.
Introduction
Providencia species are Gram-negative bacilli that belong to the Enterobacteriaceae family. The genus Providencia contains five species: P. stuartii, P. rettgeri, P. alcalifaciens, P. heimbachae and P. rustigianii [1]. Among the Providencia species, P. stuartii and P. rettgeri are the most common causes of nosocomial infections including urinary tract infections, pneumonia, and wound and bloodstream infections [1,2]. Nosocomial infections with P. stuartii greatly affect patients' outcomes [3].
We present a case of a patient with hospital-acquired pneumonia caused by carbapenem-resistant isolates of P. stuartii and Klebsiella pneumoniae. To our knowledge, this is the first report of carbapenem-resistant P. stuartii due to AmpC hyperproduction in Saudi Arabia.
Case presentation
A 31-year-old man was admitted to our intensive care unit (ICU) from another hospital, status post exploratory laparotomy and right thoracotomy for a gunshot wound to the abdomen and chest, in February 2017. The patient had a left arm injury with a left elbow fracture, for which he underwent open reduction and internal fixation (ORIF). His condition was complicated by septic shock and acute kidney injury. On arrival at our hospital, the patient was found to have a chest infection and an infected laparotomy wound, for which empiric piperacillin/tazobactam therapy was provided. During his prolonged ICU stay (56 days), he received several antibiotics; the patient had continuous fever, leukocytosis and a persistent source of infection (abdominal wound and left-hand ORIF site wound, for which he underwent frequent dressing and debridement). Written informed consent was obtained from the patient's family for publication of this case report. The study was approved by our local institutional review board (H2RI-16-Apr17-01).
P. stuartii isolates were identified using the VITEK 2 system (bioMérieux, Marcy l'Étoile, France). Susceptibility was determined by disc diffusion and interpreted according to the Clinical and Laboratory Standards Institute criteria [4]. A phenotypic assay for the detection of extended-spectrum β-lactamase, AmpC and carbapenemase production was performed as described previously [5].
The first carbapenem-resistant P. stuartii isolate was detected in the sputum on day 22 of ICU admission. The isolate was resistant to ciprofloxacin, trimethoprim/sulfamethoxazole, gentamicin, imipenem and meropenem; it was only sensitive to amikacin. We did not treat the patient according to the results of this culture because the chest X-ray was unremarkable at that time. The patient became highly febrile on day 30, so piperacillin/tazobactam 4.5 g was provided intravenously (iv) every 6 hours. On the third day of piperacillin/tazobactam therapy (day 32 of ICU admission), the fever was persistent and leukocytes were increasing, so the patient underwent septic screening (tracheal aspirate, urine, laparotomy site wound and blood), and piperacillin/tazobactam was changed to meropenem 1 g provided iv every 8 hours. Three days later (day 35), we received the results of the septic screening, which showed growth of P. stuartii and carbapenem-resistant K. pneumoniae in the urine, wound and blood. On day 37 we received the tracheal aspirate culture report, which revealed growth of carbapenem-resistant isolates of P. stuartii and K. pneumoniae. The carbapenem-resistant P. stuartii isolate was resistant to amikacin, ciprofloxacin, trimethoprim/sulfamethoxazole, gentamicin and imipenem, while it was intermediate to meropenem.
Because the patient's condition was not improving while receiving therapy with a conventional dose of meropenem (1 g provided iv every 8 hours), we changed the dosing regimen of meropenem to 2 g delivered iv every 8 hours, with an extended infusion over 3 hours instead of 30 minutes. We also added colistin to treat the carbapenem-resistant K. pneumoniae; colistin was prescribed as a loading dose of 9 million units iv followed by 3 million units iv every 8 hours. A follow-up septic screen was repeated on day 44. On day 47 the septic screen showed growth of multidrug-resistant (MDR) Acinetobacter baumannii in the left-hand ORIF site wound and tracheal aspirate. A chest X-ray was ordered; it revealed nothing abnormal. Meropenem and colistin were discontinued after completing a course of 2 weeks. The patient was transferred to the ward after 56 days of ICU admission. He was stable with no signs or symptoms of infection.
Discussion
Antimicrobial resistance in P. stuartii is uncommon in our ICU. However, the extensive consumption of colistin, tigecycline and carbapenems in our ICU, driven by high rates of MDR A. baumannii, carbapenem-resistant K. pneumoniae and extended-spectrum β-lactamase-producing Enterobacteriaceae, might have played a role in the emergence of carbapenem-resistant P. stuartii.
Our patient received multiple antibiotics before the isolation of the first carbapenem-resistant P. stuartii; he completed prolonged courses of colistin, tigecycline and imipenem. The use of colistin and tigecycline is associated with superinfections with P. stuartii and many MDR Gram-negative bacteria [6,7].
Many carbapenem-resistant P. stuartii cases have been reported [2]. Carbapenemase production (mainly New Delhi metallo-β-lactamase 1) is the main mechanism of carbapenem resistance in P. stuartii. Molecular typing helps in identifying the resistance genes in Providencia species. Unfortunately, our microbiology laboratory does not perform molecular typing. However, a phenotypic assay was performed and revealed AmpC production in the carbapenem-resistant P. stuartii isolates recovered from our patient. Prolonged hospitalization before detection of carbapenem-resistant P. stuartii was also present in one outbreak, with stays ranging from 24 to 106 days [8].
In another outbreak of carbapenem-resistant P. stuartii [9], the median length of ICU stay was 39 days, while acquisition of carbapenem-resistant P. stuartii occurred in a median of 16 days after ICU admission. In our case, the first carbapenem-resistant P. stuartii was recovered on day 22 and the second on day 32. Both isolates were recovered from respiratory sites.
Nosocomial infections caused by carbapenem-resistant P. stuartii strains represent a serious clinical challenge because these strains are intrinsically resistant to last-resort agents, mainly colistin and tigecycline. Because reports of carbapenem-resistant P. stuartii are scarce, its treatment has rarely been described. Our patient received a 2-week course of double-dose meropenem every 8 hours provided as an extended infusion over 3 hours. In addition, colistin was prescribed to treat the carbapenem-resistant K. pneumoniae coinfection. The use of an extended infusion of meropenem for patients with hospital-acquired pneumonia has many advantages compared to a 30-minute infusion regimen: the severity of the disease can be reduced, clinical efficacy can be improved, and organ-failure recovery and long-term prognosis can be improved [10]. | 2018-11-11T01:39:44.741Z | 2018-09-20T00:00:00.000 | {
"year": 2018,
"sha1": "73de9024e04111e9a6b4a60e225ba02399fbc11a",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.nmni.2018.09.007",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "73de9024e04111e9a6b4a60e225ba02399fbc11a",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
9317041 | pes2o/s2orc | v3-fos-license | Non-definability of languages by generalized first-order formulas over (N,+)
We consider first-order logic with monoidal quantifiers over words. We show that all languages with a neutral letter that are definable using the addition numerical predicate are also definable with the order predicate as the only numerical predicate. Let S be a subset of monoids, let LS be the logic closed under quantification over the monoids in S, and let N be the class of neutral letter languages. Then we show that: LS[<,+] ∩ N = LS[<] ∩ N. Our result can be interpreted as the Crane Beach conjecture holding for the logic LS[<,+]. As a corollary of our result we get the result of Roy and Straubing that FO+MOD[<,+] collapses to FO+MOD[<]. For cyclic groups, we answer an open question of Roy and Straubing, proving that MOD[<,+] collapses to MOD[<]. Our result also shows that multiplication is necessary for Barrington's theorem to hold. All these results can be viewed as separation results for very uniform circuit classes. For example we separate FO[<,+]-uniform CC0 from FO[<,+]-uniform ACC0.
Introduction
Consider a language with a "neutral letter", i.e. a letter which can be inserted or deleted from any word in the language without changing its membership. The neutral letter concept has turned out to be useful for showing non-expressibility results. It has been used to establish super-linear lower bounds for bounded-width branching programs [4] and for the number of wires in circuit classes [12]; it also led to results in communication complexity [9]. But mostly the concept is known in the context of the Crane Beach conjecture proposed in [2]. There it was conjectured that first order logic with arbitrary numerical predicates (denoted as arb) collapses to first order logic with only linear ordering in the presence of a neutral letter. The idea is that, in the presence of a neutral letter, formulas cannot rely on the precise location of input letters and hence numerical predicates will be of little use. Let N denote the class of languages with neutral letters. Let S be a set of finite monoids and L S be the logic closed under quantification over the monoids in S. Our main theorem shows that every neutral letter language definable in L S [<, +] is already definable in L S [<]. If S consists of aperiodic monoids, then the theorem is equivalent to the result of Benedikt and Libkin. For solvable monoids Roy and Straubing [22] (using ideas of Benedikt and Libkin) showed that in the presence of neutral letters FO + MOD[<, +] collapses to FO + MOD[<]. In their paper they raised the question: does MOD[<, +] satisfy the Crane Beach conjecture? This can be answered by our main theorem.
Our results can also be viewed from the perspective of descriptive complexity of circuit classes. The books [11, ?] present the close connection between logics with monoid quantifiers and circuit classes. We know that the set of languages accepted by uniform-AC 0 circuits is exactly the set of those definable by first order logic using the order, addition and multiplication relations. Similarly CC 0 (constant depth, polynomial size circuits with MOD-gates) corresponds to MOD[<, +, * ], ACC 0 corresponds to FO + MOD[<, +, * ], TC 0 corresponds to MAJ[<, +, * ], and NC 1 corresponds to GROUP[<, +, * ] (the "group quantifier" evaluates over a finite group). It is a well known result that AC 0 is separated from ACC 0 [10], but the relationships between most other classes are open. For example, we do not know whether CC 0 is different from ACC 0 . In fact we do not know whether MOD 6 [<, +, * ] contains uniform-AC 0 . This explains why the Crane Beach conjecture for prime modulo quantifiers [16], using arbitrary predicates, cannot be easily extended to composite modulo quantifiers.
We look at these separation questions from the descriptive complexity perspective. As a first step, one can ask the question of separating the logics without the multiplication relation. That is, can one separate MOD[<, +] from FO + MOD[<, +]? Is GROUP[<, +] different from FO + MOD[<, +]? Behle and Lange [7] gave a notion of interpreting L S [<, +] as highly uniform circuit classes. Our results therefore can be summarized as: every FO[<, +]-uniform constant depth polynomial size circuit with gates that compute a product in S and that recognizes a language with a neutral letter can be made FO[<]-uniform.
As a consequence of our main theorem we are able to separate these uniform versions of circuit classes; for example, we separate FO[<, +]-uniform CC 0 from FO[<, +]-uniform ACC 0 . The theorem states that MOD[<, +]-definable languages with a neutral letter are also definable in MOD[<]. Since MOD[<] cannot simulate the existential quantifier [26], we have that FO[<, +] and MOD[<, +] are incomparable. In fact we show that no group quantifier can simulate the existential quantifier if only addition is available. This gives an alternate proof of the known result [22] that FO+MOD m [<, +] cannot count modulo a prime p, where p does not divide m. Another consequence is that the majority quantifier cannot be simulated by group quantifiers if multiplication is not available, thus separating MAJ[<, +] from FO+GROUP[<, +]. Barrington's theorem [1] says that word problems over any finite group can be defined by the logic which uses only the S 5 group quantifier (the group of all permutations of 5 elements) if addition and multiplication predicates are available. Our result shows that multiplication is necessary for Barrington's theorem to hold. In other words, S 5 cannot define word problems over S 6 if only addition is available.
Non-expressibility results for various logics which use addition and a variety of quantifiers have been considered earlier. Lynch [19] proved that FO[<, +] cannot count modulo any number. Nurmonen [21] and Niwiński and Stolboushkin [25] looked at logics with counting quantifiers equipped with numerical predicates of the form y = px and a linear ordering. Ruhl [23], Schweikardt [24], Lautemann et al. [15], and Lange [14] all showed the limited expressive power of addition in the presence of majority quantifiers. Behle, Krebs and Reifferscheid [6,5] proved that non-solvable groups are not definable in the two-variable fragment of MAJ[<].
For the purpose of the proof we work over infinite strings which contain a finite number of non-neutral letters. Our general proof strategy is similar to Benedikt and Libkin [8] or Roy and Straubing [22] and consists of three main steps.
1. Given a formula φ ∈ L S [<, +], we give an infinite set D ⊆ N and an "active domain formula" φ' ∈ L S [<, +] such that for all words w whose non-neutral positions belong to D we have w |= φ ⇔ w |= φ'. Active domain formulas quantify only over non-neutral letter positions. Our major contribution (Theorem 17) is showing this step.
2. We give another infinite set T ⊆ D and an active domain formula ψ ∈ L S [<] such that for all words w whose non-neutral positions belong to T we have w |= φ' ⇔ w |= ψ. This step follows from an application of Ramsey theory (Theorem 18).
3. All active domain formulas in L S [<] accept languages with a neutral letter. This is an easy observation given by Lemma 19.
Finally, using these three steps we prove our main theorem. The main step is to build an active domain formula. Hence we need to show how to simulate a quantifier by an active domain formula. In the case of FO[<, +], the quantifiers, considered as Lindström quantifiers, have a commutative and idempotent monoid. Hence neither the order in which the quantifier runs over the positions of the word is important, nor does it matter if positions are queried multiple times. In Roy and Straubing this idea was extended in such a way that in the simulation of the MOD quantifier (again a commutative monoid), every position is taken into account exactly once. In their construction, while replacing a MOD quantifier they need to add additional FO quantifiers, and hence their construction only allows them to replace a MOD[<, +] formula by an active domain FO + MOD[<, +] formula. In this paper, we construct a formula that takes every position into account exactly once and in the correct order. Moreover we do not introduce any new quantifier, but use only the quantifier that is replaced. This enables us to obtain the Crane Beach conjecture for logics whose quantifiers have a non-commutative monoid or are groups, for example MOD[<, +], GROUP[<, +], and FO + GROUP[<, +].
In contrast to previous work, we do not construct an equivalent active domain formula, but only a formula that is equivalent for certain domains. We show that it is in general sufficient to show this for one infinite domain. We also introduce a combinatorial structure called Sorting Tree which can be of interest on its own. Yet another contribution is to use inverse elements of groups to merge two sorted lists of numbers. We present our main theorem and its corollaries in Section 3 followed by a section with the proof of Theorem 17. Our main contribution is Section 5. There we replace group quantifiers by its active domain version.
Preliminaries
An alphabet Σ is a finite set of symbols. The set of all finite words over Σ is denoted by Σ * , the set of all right infinite words is denoted by Σ ω . Let Σ ∞ = Σ * ∪ Σ ω . Consider a language L ⊆ Σ ∞ and a letter λ ∈ Σ. We say that λ is a neutral letter for L if for all u, v ∈ Σ ∞ we have that uλv ∈ L ⇔ uv ∈ L. We denote the set of all languages with a neutral letter by N.
For a word w ∈ Σ ∞ the notation w(i) denotes the i th letter in w, i.e. w = w(0)w(1)w(2) . . . . For a word w in a language L with neutral letter λ, we define the non-neutral positions nnp(w) of w to be the set of all positions which do not have the neutral letter.
A monoid is a set closed under a binary associative operation that has an identity element. All monoids we consider except for Σ* and Σ ∞ will be finite. A monoid M and a subset S ⊆ M define a word problem. Its language is composed of the words w ∈ M* such that when the elements of w are multiplied in order we get an element in S. We say that a monoid M divides a monoid N if there exists a submonoid N' of N and a surjective morphism from N' to M. A monoid M recognizes a language L ⊆ Σ* if there exists a morphism h : Σ* → M and a subset T ⊆ M such that L = h −1 (T). It is known that finite monoids recognize exactly the regular languages [26]. We denote by M the set of all finite monoids, by G ⊂ M the set of all finite groups and by MOD the set of all finite cyclic groups. We denote by U 1 the monoid consisting of the elements {0, 1} under multiplication. For a monoid M, the element 1 ∈ M will denote its identity element. We also use the block product of monoids, whose definition can be found in [26]. For a set S of monoids, bpc(S) denotes the smallest set which contains S and is closed under block products.
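As a concrete illustration of the word problem (our sketch, not from the paper), the monoid U 1 defined above is evaluated below; with accepting set {0} it recognizes exactly the words that contain a 0:

```python
# Sketch: evaluating the word problem of a finite monoid. A word over the monoid
# elements is accepted iff the ordered product of its letters lies in the set S.
# Example: U1 = ({0, 1}, *) with S = {0} recognizes 1*0(0+1)*.

from functools import reduce

def word_problem(mult, identity, word, accepting):
    """Multiply the letters of `word` in order; accept iff the product is in `accepting`."""
    return reduce(mult, word, identity) in accepting

u1 = lambda a, b: a * b  # multiplication in U1

print(word_problem(u1, 1, [1, 1, 0, 1], {0}))  # True: the word contains a 0
print(word_problem(u1, 1, [1, 1, 1], {0}))     # False: the product stays 1
```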
Given a formula φ with free variables x 1 , . . . , x k , we write w, i 1 , . . . , i k |= φ if w is a model for the formula φ when the free variable x j is assigned to i j for j = 1, . . . , k. We abuse notation and let c ∈ Σ also denote the corresponding unary predicate symbol of the logic we consider; that is, w, i |= c(x) iff w(i) = c. Let V be a set of variables, R be a set of numerical predicates and S ⊆ M. We define the logic L S [R] to be built from the unary predicate symbols c, where c ∈ Σ, the binary predicate =, the predicates in R, the variable symbols V, the Boolean connectives {¬, ∨, ∧}, and the monoid quantifiers Q m M , where M ∈ S is a monoid and m ∈ M. We also identify the logic class L S [R] with the set of all languages definable in it.
Our definition of monoid quantifiers is a special case of Lindström quantifiers [18]. The formal definition of a monoid quantifier [3] is as follows. Let M = {m 1 , . . . , m K , 1} be a monoid with K + 1 elements. For an m ∈ M, the quantifier Q m M is applied to K formulas. Let x be a free variable and φ 1 (x), . . . , φ K (x) be K formulas. Then w |= Q m M x φ 1 (x), . . . , φ K (x) iff the word u, when multiplied out, gives the element m, i.e. ∏ i u(i) = m, where the i th letter of u, 0 ≤ i < |w|, is u(i) = m j for the least j with w, i |= φ j , and u(i) = 1 if no such j exists. The following "shorthand" notation is used to avoid clutter. We denote by Q m M x φ α 1 , . . . , α K the formula Q m M x φ ∧ α 1 , . . . , φ ∧ α K . Informally, this relativizes the quantifier to the positions where φ is true, by multiplying the neutral element in all other places.
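A small executable sketch of this semantics (ours; predicates are modelled as Python functions on positions) is given below. It also anticipates the next paragraph, realizing the existential quantifier over U 1 and a modulo-2 quantifier over the cyclic group C 2 :

```python
# Sketch of monoid-quantifier semantics: position i contributes the element m_j
# for the least j with (w, i) |= phi_j, and the identity otherwise; Q_M^m accepts
# iff the ordered product of all contributions equals the target element m.

from functools import reduce

def eval_monoid_quantifier(word, formulas, elements, identity, mult, target):
    """formulas[j] is a predicate on (word, position); elements[j] is m_{j+1}."""
    def contribution(i):
        for phi, m in zip(formulas, elements):
            if phi(word, i):
                return m
        return identity
    return reduce(mult, (contribution(i) for i in range(len(word))), identity) == target

w = "abba"
is_a = lambda word, i: word[i] == "a"

# Existential quantifier via U1 = ({0, 1}, *): "some position carries an a".
print(eval_monoid_quantifier(w, [is_a], [0], 1, lambda x, y: x * y, 0))        # True

# MOD-2 quantifier via C2 = (Z/2, +): "the number of a's is even".
print(eval_monoid_quantifier(w, [is_a], [1], 0, lambda x, y: (x + y) % 2, 0))  # True
```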
Consider the monoid U 1 . It is easy to see that the word problem defined by U 1 and the set {0} is the regular language 1*0(0 + 1)*. Then Q 0 U1 is the same as the existential quantifier ∃, since any formula of the form ∃x φ is equivalent to Q 0 U1 x φ . So the logic L U1 [<] denotes first-order logic, FO[<]. Let C q stand for the cyclic group with q elements. Then the quantifiers Q 1 Cq correspond to modulo quantifiers [28]. Thus L MOD [<] corresponds to all regular languages whose syntactic monoids are solvable groups [26]. For a sentence φ ∈ L S [R] we define L(φ) = {w | w |= φ}. The following result gives an algebraic characterization for the logic L S [<].
Lemma 1. A regular language L is definable in L S [<] iff its syntactic monoid divides a monoid in bpc(S) [26].
Results
Let S ⊆ M be any set of monoids. We show that the Crane Beach conjecture is true for the logic L S [<, +].
Theorem 2. Let S ⊆ M be any set of monoids. Then L S [<, +] ∩ N = L S [<] ∩ N.
The proof of this theorem is given in Section 4.
Non-definability Results
Theorem 2 gives us the following corollaries. Proof. The word problem over G has a neutral letter (the identity element of G). The result now follows from Theorem 2 and Lemma 1.
The majority quantifier Maj x φ(x) is given as follows: w |= Maj x φ(x) iff φ(x) holds for more than half of the positions x in w. MAJ[R] denotes the logic closed under majority quantifiers. It is known that the majority quantifier can be simulated by the non-solvable group S 5 if both multiplication and addition are available [29]. We show that multiplication is necessary to simulate majority quantifiers.
Proof. Consider the language L ⊆ {a, b, c}* consisting of all words with an equal number of a's and b's. L can be proven to be definable in MAJ[<]. Also note that c is a neutral letter for L. By Corollary 3 and the fact that L is nonregular, the claim follows. Barrington's theorem [1] says that the word problem of any finite group can be defined in the logic L S5 [<, +, *]. The following theorem shows that multiplication is necessary for Barrington's theorem to hold. Let L p be the set of all words w ∈ {0, 1}* such that the number of occurrences of 1 in w is equal to 0 (mod p). Then we get the result in [22]. Due to Lemma 1 and [26], this is a contradiction.
It is an open conjecture that the language 1* cannot be accepted by the circuit complexity class CC 0 [26]. It is also known that the languages accepted by CC 0 circuits are exactly those which are definable by L MOD [<, +, * ] formulas [29].
To make progress in this direction, Roy and Straubing [22] posed the question of whether 1* ∉ L MOD [<, +]. Below we show that this is the case.
Indeed, we show the stronger statement that 1* is not definable in L G [<, +]. Proof. The minimal monoid which can accept 1* is U 1 , and clearly the language is in N. If there were a formula in L G [<, +] defining 1*, then by Theorem 2, L G [<] could also define 1*. From Lemma 1 it would follow that the monoid U 1 divides a group. But this is a contradiction [26].
Behle and Lange [7] give a notion of interpreting L S [<, +] as highly uniform circuit classes. As a consequence we can interpret the following results as a separation of the corresponding circuit classes.
Regular languages in L S [<, +]
We now look at regular languages definable by the logic L S [<, +], for an S ⊆ M. We first show that this logic is closed under quotienting.
We now show that the logic is also closed under inverse length-preserving morphisms.
Lemma 11. Let S ⊆ M, let Σ, Γ be finite alphabets and let h : Σ* → Γ* be a length-preserving morphism. If L ⊆ Γ* is definable in L S [<, +], then so is h −1 (L).
We now give an algebraic characterization for regular languages definable by L S [<, +].
Let S be a set of monoids such that, given a monoid M , it is decidable if M divides a block product of monoids in S. Then, given a regular language L, it is decidable if L ∈ L S [<]. Together with our main theorem we get that it is decidable if L ∈ L S [<, +].
Corollary 13. Let S be a set of monoids such that, given a monoid M , it is decidable if M divides a block product of monoids in S. Then, given a regular language L, it is decidable if L ∈ L S [<, +].
For FO + MOD[<, +] this was proved in [22]. Here we prove this for the special case when S = MOD.
Corollary 14. Given a regular language L, the question whether L is definable in MOD[<, +] is decidable.
Proof of the Main Theorem
In this section we handle the general proof steps, as in Benedikt and Libkin [8] or Roy and Straubing [22], of removing the plus predicate from the formula in the presence of a neutral letter. We show that all these results go through even in the presence of general Lindström quantifiers. The new crucial step is Lemma 15, where we convert a group quantifier to an active domain formula without introducing any other quantifiers. The proof of this lemma is deferred to the next section.
Let S ⊆ M be any nonempty set. To prove Theorem 2 we will consider the more general logic, L S [<, +, 0, {≡ q : q > 1}] over the alphabet Σ. In this logic + is a binary function, 0 is a constant, and a ≡ q b means q divides b − a. The reason for introducing these new relations (which are definable using +) is to use a quantifier elimination procedure. All languages recognized by this logic are in L S [<, +].
The formulas we consider will usually define languages with a neutral letter. Let an active domain formula over a letter λ ∈ Σ be a formula where all quantifiers are of the form Q m M x ¬λ(x) φ 1 , . . . , φ K . That is, the quantifiers quantify only over the "active domain", the positions which do not contain the letter λ. For the purpose of the proof we assume that the neutral letter language defined by a formula φ ∈ L S [<, +] is a subset of Σ*λ ω . The idea is to work with infinite words, where the arguments are easier, since the variable range is not bounded by the word length.
For r ∈ N we define the set D r = {r i | 0 < i ∈ N}. We say that a formula φ(x 1 , . . . , x t ) ∈ L S [<, +] collapses to φ' if φ' is an active domain formula in L S [<, +] and there exists an R φ ∈ N such that for all r ≥ R φ , all w ∈ Σ*λ ω with nnp(w) ⊆ D r and all a 1 , . . . , a t ∈ N we have that w, a 1 , . . . , a t |= φ ⇔ w, a 1 , . . . , a t |= φ'. In the above definition we say that R φ collapses φ to φ'.
The results by Benedikt and Libkin [8], and Roy and Straubing [22], show that for all formulas φ ∈ L MOD∪U1 [<, +] there exists an active domain formula φ' in that logic such that for all words w ∈ Σ*λ ω , w |= φ ⇔ w |= φ'. They assume no restriction on the non-neutral positions of w. Observe that our collapse result is different from theirs. We prove that if we consider only words whose non-neutral positions are in D r , then any formula φ ∈ L S [<, +] is equivalent to an active domain formula φ' ∈ L S [<, +]. That is, we are not concerned about the satisfiability of those words with non-neutral positions not in D r .
We show that formulas with a group quantifier G ∈ S can be collapsed.
Lemma 15. Let G ∈ S be a group and let φ 1 , . . . , φ K be formulas that collapse to active domain formulas. Then φ = Q m G z φ 1 , . . . , φ K collapses to an active domain formula.
The proof of Lemma 15 will be given in Section 5. Benedikt and Libkin [8] give a similar theorem for the monoid U 1 (the existential quantifier).
Recall the 3 steps for proving the main theorem given in the Introduction. The following theorem proves the first step.
Theorem 17. Every formula φ ∈ L S [<, +] collapses to an active domain formula φ' ∈ L S [<, +].
Proof. Let φ ∈ L S [<, +]. We first claim that we can convert φ into a formula which uses only groups and U 1 as quantifiers. This follows from the Krohn-Rhodes decomposition theorem for monoids, which states that every monoid can be decomposed into block products over groups and U 1 . This decomposition can then be converted back into a formula using the groups and U 1 as quantifiers [26].
So without loss of generality we can assume φ has only group or U 1 quantifiers. The proof is by induction on the quantifier depth. For the base case, let φ be a quantifier-free formula. It is an active domain formula and therefore the claim holds. Let the claim be true for all formulas with quantifier depth < d. Lemma 15 and Lemma 16 show that the claim is true for formulas of type φ = Q m M z φ 1 , . . . , φ K with quantifier depth d, when M is a group or U 1 respectively. We are now left with proving that the claim is closed under conjunction and negation. So assume that formulas φ 1 , φ 2 collapse to φ' 1 , φ' 2 respectively. That is, there exist R φ1 , R φ2 ∈ N such that R φ1 collapses φ 1 to φ' 1 and R φ2 collapses φ 2 to φ' 2 . Let R = max{R φ1 , R φ2 }. Then it is easy to see that R collapses φ 1 ∧ φ 2 to φ' 1 ∧ φ' 2 and that R φ1 collapses ¬φ 1 to ¬φ' 1 .
We have shown above that all formulas in L S [<, +] can be collapsed to active domain formulas. Now using a Ramsey type argument we obtain that addition is useless, giving us a formula in L S [<]. This corresponds to the second step in our three step proof strategy.
Let R be any set of relations on N and let φ(x 1 , . . . , x t ) be an active domain formula in L S [R]. We say that φ has the Ramsey property if for all infinite subsets X of N, there exists an infinite set Y ⊆ X and an active domain formula ψ ∈ L S [<] that satisfies the following condition: if w ∈ Σ*λ ω and nnp(w) ⊆ Y , then for all a 1 , . . . , a t ∈ Y , w |= φ(a 1 , . . . , a t ) ⇔ w |= ψ(a 1 , . . . , a t ). The Ramsey property for first order logic has been considered by Libkin [17]. These results can be extended to our logic.
Theorem 18. Every active domain formula in L S [R] has the Ramsey property.
Proof. Let φ ∈ L S [R] be a formula. We prove the claim by induction on the structure of the formula. Let P (x 1 , . . . , x k ) be an atomic formula of φ built from a predicate in R. We assume without loss of generality that x i ≠ x j for all i ≠ j. Now consider the infinite complete hypergraph whose vertices are labelled by numbers from X and whose edges are k-tuples of vertices. Let i 1 , . . . , i k be some permutation of the numbers from 1 to k. Consider the edge formed by the vertices v 1 < v 2 < · · · < v k . We color this edge by the formula x i1 < x i2 < · · · < x ik if P (v i1 , . . . , v ik ) is true. Observe that each edge can have multiple colors, and there are k! possible colors in total. Ramsey theory gives us an infinite set Y ⊆ X such that the induced subgraph on the vertices in Y is monochromatic, i.e. all edges carry the same set of colors. Assume, say, that the edges in Y are colored x 1 < x 2 < · · · < x k . Then for all a 1 , . . . , a k ∈ Y we have a 1 , . . . , a k |= P (x 1 , . . . , x k ) ⇔ a 1 , . . . , a k |= x 1 < x 2 < · · · < x k . This shows that P (x 1 , . . . , x k ) satisfies the Ramsey property, and thus all atomic formulas satisfy the Ramsey property. We now show that the Ramsey property is preserved when taking Boolean combinations of formulas. Consider the formula φ 1 (x 1 , . . . , x t ) ∧ φ 2 (x 1 , . . . , x t ). By the induction hypothesis there exist a formula ψ 1 and an infinite set X 1 ⊆ X such that for all a 1 , . . . , a t ∈ X 1 , w |= φ 1 (a 1 , . . . , a t ) ⇔ w |= ψ 1 (a 1 , . . . , a t ). Using the infinite set X 1 we can then find an infinite set Y ⊆ X 1 and a formula ψ 2 such that the Ramsey property holds for φ 2 . Therefore for all a 1 , . . . , a t ∈ Y , w, a 1 , . . . , a t |= φ 1 ∧ φ 2 ⇔ w, a 1 , . . . , a t |= ψ 1 ∧ ψ 2 . Similarly we can show that the Ramsey property holds for disjunctions and negations. We finally need to show that active domain quantification also preserves the Ramsey property. So let X be an infinite subset of N and let φ = Q m M z ¬λ(z) φ 1 , . . . , φ K be a formula in L S [R]. By the induction hypothesis there exist an infinite set Y 1 ⊆ X and an active domain formula ψ 1 ∈ L S [<] such that for all tuples a over Y 1 the Ramsey property is satisfied, that is, w |= φ 1 (a) ⇔ w |= ψ 1 (a). Now for φ 2 , using the infinite set Y 1 , we can find an infinite set Y 2 ⊆ Y 1 and a formula ψ 2 satisfying the Ramsey property. Continuing like this gives a set Y K and formulas ψ 1 , . . . , ψ K such that for all j ≤ K, all w ∈ Σ*λ ω with nnp(w) ⊆ Y K and all tuples a over Y K we have w |= φ j (a) ⇔ w |= ψ j (a). Therefore for the formula ψ = Q m M z ¬λ(z) ψ 1 , . . . , ψ K we have, for all w with nnp(w) ⊆ Y K and all a 1 , . . . , a t ∈ Y K , that w |= φ(a 1 , . . . , a t ) ⇔ w |= ψ(a 1 , . . . , a t ). Observe that ψ is an active domain formula in L S [<].
We continue with the third step of our three-step proof strategy.
Lemma 19. Every active domain sentence in L S [<] defines a language with a neutral letter.
Proof. Let φ ∈ L S [<] be an active domain sentence over the letter λ ∈ Σ. Let w ∈ Σ ω and let w' ∈ Σ ω be obtained by inserting the letter λ into w at some positions. Let n 1 < n 2 < . . . be the elements of nnp(w) and m 1 < m 2 < . . . those of nnp(w'). Let ρ : nnp(w) → nnp(w') be the bijective map ρ(n i ) = m i . We show that for any subformula ψ of φ and any tuple t over nnp(w) we have w, t |= ψ ⇔ w', ρ(t) |= ψ. The claim holds for the atomic formula x > y, because n i > n j iff ρ(n i ) > ρ(n j ) for all i, j. Similarly the claim holds for all other atomic formulas x < y, x = y and a(x) for a ∈ Σ. The claim is preserved under conjunctions, negations and active domain quantifications. Hence w |= φ ⇔ w' |= φ. This proves that λ is a neutral letter for L(φ).
Now we can prove our main theorem.
Proof (Proof of Theorem 2). Let φ ∈ L S [<, +] be such that L(φ) is a language with a neutral letter λ. By Theorem 17 there exist an active domain sentence φ' ∈ L S [<, +] over λ and a set D R such that R collapses φ to φ'. Theorem 18 now gives an active domain formula ψ ∈ L S [<] and an infinite set Y ⊆ D R . We now show that L(φ) = L(ψ). Let w ∈ Σ*λ ω . Consider the word w' ∈ Σ*λ ω obtained by inserting the neutral letter λ into w in such a way that nnp(w') ⊆ Y . Since L(φ) is a language with a neutral letter we have that w |= φ ⇔ w' |= φ. From Theorem 17 and Theorem 18 we get w' |= φ ⇔ w' |= φ' ⇔ w' |= ψ. Finally, as shown in Lemma 19, ψ defines a language with a neutral letter and hence w' |= ψ ⇔ w |= ψ.
Proof of Lemma 15
In this section we replace a group quantifier by an active domain formula. Here we make use of the fact that we can a priori restrict our domain, as shown in the previous section.
Recall that φ = Q m G z φ 1 , . . . , φ K and G = {m 1 , . . . , m K , 1}. We know that for all i ≤ K there exist R φi and a formula φ' i such that R φi collapses φ i to φ' i . Then clearly max i {R φi } collapses φ i to φ' i for all i ≤ K. So without loss of generality we assume the φ i are active domain formulas.
Before we go into the details we will give a rough overview of the proof idea. The group quantifier will evaluate a product ∏ j u(j), where u(j) is a group element that depends on the set of i such that w, j |= φ i . So we start by analyzing the sets J i = {j | w, j |= φ i }. Since the formulas φ i are active domain formulas, we will see that there exists a set of intervals such that inside an interval the set J i is periodic. Boundary points for these intervals are either points in the domain or linear combinations of these. In the construction of the active domain formula for φ we will show how to iterate over all these boundary points in strictly increasing order. An active domain quantifier can only iterate over active domain positions, hence we will need nested active domain quantifiers, and a way to "encode" the boundary points by tuples of active domain positions in a unique and order-preserving way. Additionally we need to deal with the periodic positions inside the intervals, without being able to compute the length of such an interval, or even to check if the length is zero. Here we will make use of the inverse elements that always exist in groups.
We start by analyzing the intervals which occur. We will pick an R φ ≥ max i {R φi } to collapse the formula φ. During the course of the proof we will require R φ to be greater than a few other constants, which will be specified then. But always observe that R φ will depend only on φ.
Since we consider a fixed set S for the rest of the paper, we will write L[<, +] for the logic L S [<, +, 0, {≡ q : q > 1}].
Intervals and Linear Functions
We first show that every formula ψ with at least one free variable has a normal form.
Proof. Terms in our logic are linear expressions of the form c 0 + c 1 x 1 + · · · + c k x k , and atomic formulas are of the form σ = γ, σ < γ, σ ≡ m γ and c(x), where σ, γ are linear functions, c ∈ Σ and m > 1. Now using any M ∈ S, where m 1 ∈ M is not the neutral element, we can rewrite such formulas using Q m1 M with all remaining arguments set to false. Now consider the atomic formulas containing the free variable z in ψ(z). By multiplying with appropriate numbers, we can rewrite these atomic formulas as nz = ρ, nz < ρ, nz > ρ, nz ≡ l ρ for one particular n, which is the least common multiple (lcm) of all the coefficients in ψ. Here ρ does not contain z; it may, however, involve subtraction, i.e. nz = ρ might stand for nz + ρ 1 = ρ 2 . Now we replace nz by z and conjunct the formula with z ≡ n 0.
For any formula ψ(z), the notation ψ̄(z) denotes the normal form of ψ as in Lemma 20. Let x 1 , . . . , x s be the bounded variables occurring in φ̄ i (z) and y 1 , . . . , y r be the free variables other than z in φ̄ i (z). Hence the terms ρ that appear in the formula φ̄ i (z) can be identified with functions ρ : N s+r → N.
We collect all functions ρ(x, y) that occur in the formulas φ̄ i (z) for some i ≤ K in a set R. We define the set T of offsets as a set of terms which are functions using the variables y 1 , . . . , y r as parameters. Consider the set of absolute values of all the coefficients appearing in one of the functions in R, and let α ∈ N be the maximum value among these, that is α = max{|γ| | f ∈ R, γ is a coefficient in f }. Let ∆ = s · α. Now we can define our set of extended functions: for a t ∈ T we define a set F t of terms which are functions using the variables x 1 , . . . , x s , y 1 , . . . , y r as parameters. We denote by F = ∪ t∈T F t . For a fixed word w ∈ Σ*λ ω and a fixed assignment of the free variables y to a we let B w,a = {f (p 1 , . . . , p s , a) | f ∈ F, p i ∈ nnp(w), p 1 > p 2 > · · · > p s } be the set of boundary points. Note that the assignments to the functions are in strictly decreasing order. Let b 1 < b 2 < . . . < b l be the boundary points in B w,a . Then the following sets are called intervals: (−1, b 1 ), (b 1 , b 2 ), . . . , (b l−1 , b l ), (b l , ∞). Here (a, b) = {x ∈ N | a < x < b}. We also split the set of points in B w,a depending on the offset: B w,a t denotes the set of boundary points arising from functions in F t . In the following lemma we fix a word w ∈ Σ*λ ω and an a ∈ N r ; it states that every value ρ(p 1 , . . . , p s , a) with ρ ∈ R and p 1 , . . . , p s ∈ nnp(w) lies in B w,a . For the proof, let p' 1 > p' 2 > · · · > p' l be the ordered set of all the p i in the above assignment and let ρ'(x 1 , . . . , x s , y) be the corresponding reordered function in F. Let q be the lcm of all q' such that ≡ q' occurs in one of the φ i . We need the following lemma: inside an interval with only neutral letters, the congruence relations decide the truth of an active domain formula.
Lemma 22. Let c, d lie in the same interval of B w,a with c ≡ q d. Then for all i ≤ K and all b ∈ nnp(w) s we have w, c, b, a |= φ̄ i ⇔ w, d, b, a |= φ̄ i .
Proof. The proof is by induction on the structure of the formula φ̄ i . We show for all b ∈ nnp(w) s and all subformulas ψ(z, x, y) of φ̄ i that w, c, b, a |= ψ ⇔ w, d, b, a |= ψ. The atomic formulas of φ̄ i (z, a) are of the following form: z < ρ(x, a), z = ρ(x, a), z > ρ(x, a), z ≡ q ρ(x, a), a(z), and formulas which do not depend on z. It is clear that the truth of formulas which do not depend on z, of a(z) and of z ≡ q ρ does not change whether we assign c or d to z. Let b ∈ nnp(w) s . By Lemma 21 we know that ρ(b, a) is in B w,a , and since c, d lie in the same interval it follows that c < ρ(b, a) ⇔ d < ρ(b, a). Similarly we can show that the truth of z > ρ and z = ρ does not change when z is assigned c or d. Thus the claim holds for atomic formulas. The claim clearly holds for conjunctions and negations of formulas. Now let the claim hold for subformulas ψ 1 , . . . , ψ K ; hence it is closed under active domain quantification.
The following Lemma deals with the infinite interval.
Lemma 23. Let b belong to the infinite interval and a ∈ N r . If w, a |= φ then w, b, a ⊭ φ i for any i ≤ K.
Proof. Let i ≤ K, let b be in the infinite interval and suppose w, b, a |= φ i . From Lemma 22 we know that all points c with c ≡ q b that also lie in the infinite interval are witnesses for φ i . This means the set of witnesses is infinite and hence w, a ⊭ φ.
Lemma 22 says that inside an interval, the congruence relations decide the satisfiability of the formulas φ i . This shows that it is enough to know the truth values of φ i at a distance ≥ q from the boundary points, since the truth values inside an interval repeat after every q positions. The rest of the proof demonstrates:
1. how we can treat each B t differently;
2. that there is an active domain formula which goes through the points in B t in increasing order.
We fix the word w ∈ Σ*λ ω and the assignment a, and therefore drop the superscripts in B w,a (B w,a t ) and call them B (B t ).
Treating each B t differently
Let p = q|G|, where q was defined in the previous section and depends on the ≡ q predicates. For an element g ∈ G, we have g |G| = 1 G , so g x = g x+|G| . Recall the definitions of T, B from Section 5.
Recall from the Preliminaries (Section 2) that we denoted by u(i) the group element at position i. That is, u(i) = m j iff w, i, a |= φ j ∧ ∧ l<j ¬φ l . Our aim is to give an active domain formula that evaluates to true iff the group element ∏ i u(i) is equal to m. The rest of this subsection will be devoted to computing this product in a way which helps in building an active domain formula.
Let b < b' be boundary points in B. Below we compute the product u(b)u(b + 1) · · · u(b' − 1). Observe that we can compute the product of the interval using two terms that both need to know only one boundary of the interval. It becomes simpler if we note that the two products do not really need to multiply all the elements u(i), for i ≥ b, but simply agree on a common set of elements to multiply.
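The following sketch (ours, purely illustrative) shows this trick in a finite setting: the interval product is recovered from two one-sided products that agree on a common tail, cancelled via group inverses; the cyclic group Z/5 stands in for G:

```python
# Inverse-element trick: in a group G, u(b) u(b+1) ... u(b'-1) = P(b) * P(b')^{-1},
# where P(x) = u(x) ... u(t) for any agreed endpoint t >= b' - 1, since the shared
# tail cancels. Z/5 under addition plays the role of G (inverse = negation mod 5).

import random

MOD = 5
u = [random.randrange(MOD) for _ in range(50)]  # arbitrary group elements

def one_sided(x: int, t: int) -> int:
    """P(x) = u(x) + u(x+1) + ... + u(t) in Z/5."""
    return sum(u[x:t + 1]) % MOD

b, b_prime, t = 7, 23, 40
direct = sum(u[b:b_prime]) % MOD                                # u(b) ... u(b'-1)
via_inverses = (one_sided(b, t) - one_sided(b_prime, t)) % MOD  # P(b) * P(b')^{-1}
assert direct == via_inverses
print(direct, via_inverses)
```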
For a b ∈ B, we define the function IL(b) to be the length of the interval to the left of b; that is, if (b', b) forms an interval then IL(b) = b − b' − 1. Similarly we define IR(b) to be the length of the interval to the right of b. For all k ≤ |T |, we define functions N k (b) and N̄ k (b), which map points b ∈ B to a group element.
if IL(b) ≥ p and r < p with b + r ≡ p 0. Inductively we define N k and N̄ k from these one-sided products. We prove the defining property by induction over k. Let k = 0 and let (b, b') form an interval in B. If b' − b ≤ p then the claim is immediate. If the interval is large, i.e. b' − b > p, then let s, t ∈ N be the smallest, resp. the largest numbers such that b ≤ s ≤ t ≤ b' and s ≡ p t ≡ p 0. Lemma 22 shows that inside an interval all positions congruent modulo q satisfy the same formulas. Therefore u(b' − p)u(b' − p + 1) . . . u(b' − 1) = 1 G , and hence (u(b' − p)u(b' − p + 1) . . . u(t)) −1 = u(t + 1) . . . u(b' − 1). So the interval product can be assembled from the two one-sided terms, the last equality being true since u(s + 1) . . . u(t) = 1 G . As induction hypothesis assume that the lemma is true for all k' < k. Since the terms N̄ k−1 (b') and N k−1 (b') cancel out (whatever they compute to), let b 1 < b 2 < · · · < b n be all positions in B t k between b and b'. By the requirements of the lemma the only positions of B between b i and b i+1 are in ∪ i'<k B t i' . Writing out the product, the claim follows. The following lemma shows that u(0)N |T | (0) gives the product of the group elements.
Proof. Using the appropriate induction hypothesis we get that N |T | (0) = u(1)u(2) · · · u(l), where l > max(B). The lemma now follows from Lemma 23, which gives that u(i) = 1 G for every i in the infinite interval.
We now give active domain formulas γ m , m ∈ G, such that γ m is true iff N |T | (0) = m. For this we make use of the inductive definition of N k and show that there exist active domain formulas γ m such that w |= γ m (b) ⇔ N k (b) = m. Similarly we give active domain formulas γ̄ m such that w |= γ̄ m (b) ⇔ N̄ k (b) = m. Observe that N k (b) is obtained by computing the product where b' strictly increases. This requires us to traverse the elements in B t k−1 in increasing order. The following section builds a sorting tree to sort the elements of B t k−1 in increasing order.
Sorting Tree
Let t ∈ T . The aim of this section is to create a data structure, which can traverse the elements in B t in an ascending order.
For a t ∈ T , we define a tree called the sorting tree T t , which corresponds to B t . The tree satisfies the following property: if the leaves of the tree are enumerated from left to right, then we get the set B t in ascending order. A node in T t is labeled by a tuple (f, A), where f (x 1 , . . . , x l ) is a function in F t and A is an assignment for the variables in f such that A(x 1 ) > A(x 2 ) > · · · > A(x l ) and ∀i ≤ l : A(x i ) ∈ nnp(w).
We show how to inductively build the tree. The root is labeled by the tuple (t, {}), where t is the function which depends only on y (and hence is constant in x) and {} is the empty assignment. The root is not marked as a leaf node.
Consider the internal node (f (x 1 , . . . , x l ), A). It will have three kinds of children ordered from left to right as follows.
Observe that if there is no j such that j < A(x l ) and j ∈ nnp(w), then (f, A) will only have the child (f', A).
Note that in our tree construction the values of the children of a node increase from left to right. The tree is built until all functions with s variables appear in leaves, and hence the depth of the tree is s + 2. Figure 1 shows part of a tree, where ∆ = 2, t = 0, R = 5 and nnp(w) = {5, 25, 625} ⊆ D R . The following lemma holds if R > 3s∆. We also assume that nnp(w) ⊆ D R . Given a node (f, A), we say the value of the node is the function f evaluated under the assignment A (denoted by f (A)). Proof. By construction.
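Since the full enumeration of the child types is not reproduced above, the sketch below only illustrates the property the tree guarantees, by brute force: it evaluates linear functions over strictly decreasing assignments of active positions and sorts the results, which is exactly what a left-to-right leaf traversal of T t yields. The functions used are invented placeholders, not the actual set F t :

```python
# Brute-force illustration of the sorting tree's guarantee: the values f(A), taken
# over strictly decreasing assignments A of active-domain positions, listed in
# ascending order. The linear functions below are placeholders, not the real F_t.

from itertools import combinations

nnp = [5, 25, 625]  # active positions, as in Figure 1 (D_R with R = 5)

functions = [  # illustrative f(x1, x2) with small coefficients
    lambda x1, x2: x1 + x2,
    lambda x1, x2: x1 - x2,
    lambda x1, x2: 2 * x1 - x2,
]

values = set()
for x2, x1 in combinations(sorted(nnp), 2):  # enforces x1 > x2
    for f in functions:
        v = f(x1, x2)
        if v >= 0:
            values.add(v)

print(sorted(values))  # what a left-to-right leaf enumeration should produce
```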
Next we show that for any two neighboring nodes in the tree, the values in the leaves of the subtree rooted at the left node are less than the values in the leaves of the subtree rooted at the right node. Let V (f,A) denote the set of values in the leaves of the subtree rooted at (f, A). Here R c , R c' are the minimum assignments in A and A' respectively. Let us assume that both coefficients α l , α' l > 0; a similar analysis can be given for the other combinations of α l and α' l . Now since (f, A) is the left neighbor of (f', A') we have R c < R c' . Then the claim follows, since R > 3s∆. | 2012-05-04T04:10:46.000Z | 2012-04-27T00:00:00.000 | {
"year": 2012,
"sha1": "2bc80d6fd84c8a8f5d392458e1207d9958ac9e30",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1204.6179.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "2bc80d6fd84c8a8f5d392458e1207d9958ac9e30",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
235360453 | pes2o/s2orc | v3-fos-license | Reactions to Recommendations and Evidence About Prostate Cancer Screening Among White and Black Male Veterans
U.S. clinical guidelines recommend that prior to screening for prostate cancer with Prostate Specific Antigen (PSA), men should have an informed discussion about the potential benefits and harms of screening. Prostate cancer disproportionately affects Black men. To understand how White and Black men reacted to a draft educational pamphlet about the benefits and harms of PSA screening, we conducted race-specific focus groups at a midwestern VA medical center in 2013 and 2015. White and Black men who had been previously screened reviewed the draft pamphlet using a semistructured focus group facilitator guide. Forty-four men, ages 55–81, participated in four White and two Black focus groups. Three universal themes were: low baseline familiarity with prostate cancer, surprise and resistance to the recommendations not to test routinely, and negative emotions in response to ambiguity. Discussions of benefits and harms of screening, as well as intentions for exercising personal agency in prevention and screening, diverged between White and Black focus groups. Discussion in White groups highlighted the potential benefits of screening, minimized the harms, and emphasized personal choice in screening decisions. Participants in Black groups devoted almost no discussion to benefits, considered harms significant, and emphasized personal and collective responsibility for preventing cancer through diet, exercise, and alternative medicine. Discussion in Black groups also included the role of racism and discrimination in healthcare and medical research. These findings contribute to our understanding of how men’s varied perspectives and life experiences affect their responses to prostate cancer screening information.
Though prostate cancer is the most common cancer among all U.S. men, the incidence and mortality of prostate cancer remain significantly higher among Black men than White men in the United States (incidence 175.2 vs. 102.3 per 100,000; mortality 37.9 vs. 17.9 per 100,000 among Black and White men, respectively) (SEER Cancer Statistics Review, 1975–2016). Racial disparities in prostate cancer incidence and mortality stem from a combination of socioeconomic factors, including healthcare access, and prostate cancer heritability, which does not necessarily correlate with self-identified race or ethnicity (Dess et al., 2019; Rebbeck et al., 2006; Smith et al., 2017). Informed screening discussions should take into account the balance of screening benefits and harms, which may be influenced by factors associated with racial group membership, including incidence and mortality, systemic disparities, and racism. The ACS recommends initiating this informed discussion at a younger age for Black men or those with a family history of prostate cancer (Wolf et al., 2010), and the USPSTF recommends informing Black men about their increased risk for prostate cancer incidence and mortality, in order to facilitate an informed, personal decision about screening (U.S. Preventive Services Task Force, 2018).
Shared decision-making (SDM) is commonly recommended for complex screening decisions, yet many clinicians feel uncomfortable discussing clinical uncertainty (Zeuner et al., 2015), fail to mention potential harms of screening (Bhuyan et al., 2017), or screen without discussing PSA at all. Educational materials such as patient decision aids can help facilitate SDM conversations, and may even reduce health inequalities when the materials are adapted to the needs of a disadvantaged group (Durand et al., 2014). On the other hand, a recent study found that White and Black male Veterans responded differently to a prostate cancer treatment decision aid, and called for additional research to understand the "efficacy, relevance, and receptivity of prostate cancer" decision aids for Black men (Langford et al., 2020). Little is known about how people react cognitively and emotionally to the factual information presented in decision aids (Myers, 2005).
Qualitative research can help us understand how and why Black and White men may respond differently to screening tools and educational materials. Racial comparisons in health services research must be approached thoughtfully, however. Group-difference studies can perpetuate models of cultural deviance from the (White) mainstream (Hardeman, 2020) or can create a false equivalence that ignores the way outcomes are shaped by adaptations to external forces (Whitfield et al., 2008). Within-group study designs have their own limitations. For example, Ford and colleagues (Ford et al., 2006) conducted focus groups with Black men in Detroit to understand factors that influence prostate cancer screening behavioral intention, using the Preventive Health Model (PHM) (Myers et al., 1999) as a conceptual framework. They acknowledged that the focus groups responses may or may not have been unique to Black men in that health system, and could not anticipate potential differences with White or Latino men. Including Black and White men in the same study can allow researchers to understand potential differences in perceptions of a clinical intervention. Among prior qualitative studies about prostate cancer screening that included Black and White participants, there were no racial differences in baseline knowledge about prostate cancer and screening (Winterich, Grzywacz, et al., 2009), but significant differences in the perception of race as a risk factor for prostate cancer (McFall et al., 2006). Awareness of Black men's elevated risk for prostate cancer has previously been associated with receptivity to screening among Black men (Myers et al., 1994). Differences in the perception of risk may impact the way Black and White men respond to recommendations not to routinely screen with PSA, and to related evidence about the potential downstream harms of screening or overdiagnosis, which have not been addressed in existing qualitative studies (James et al., 2017).
In 2013, the VA requested our collaboration in designing patient educational materials to communicate changing prostate cancer screening guidelines to Veterans. The VA is a national integrated healthcare system serving approximately 9 million enrolled Veterans, 15.5% of whom are Black (VA Office of Health Equity, 2016). In the current study, we tested a draft patient educational pamphlet with separate focus groups of White or Black male Veterans. Our aim was to understand how White and Black men previously screened with PSA responded to the draft pamphlet. In particular, we were interested in whether reviewing the evidence related to prostate cancer screening benefits and harms would be associated with expressed intention to screen.
Design and Participants
In earlier work, we produced a draft patient educational pamphlet for the VA about prostate cancer screening, titled "The PSA Test for Prostate Cancer Screening: Why some doctors no longer recommend testing." The pamphlet was developed under the 2012 USPSTF guideline recommendation against routine screening (Moyer, 2012). That recommendation remains unchanged for men over age 70. However, for men age 55-69, guidance shifted in 2018 to a recommendation that screening decisions should be individualized, and screening should only be done after an informed discussion of potential screening benefits and harms (U.S. Preventive Services Task Force, 2018). Though guidance has shifted, the evidence related to screening benefits and harms described in the pamphlet is substantively unchanged today (Chou et al., 2011;Fenton et al., 2018). Pamphlet content was informed by input from 2 VA provider focus groups and 26 individual patient interviews with male Veterans age 50-85, stratified by age, race, and history of prior elevated PSA test. These stratifying characteristics were selected based on the PHM (Myers et al., 1999), which posits that background factors (including demographics such as age and race, medical history, and prior screening behavior), interact with cognitive and psychological factors, as well as social and programmatic influence, to inform intention to screen and subsequent screening behaviors. We identified comparable perspectives, and a similar range of reactions to screening recommendations, across all subgroups (Partin et al., 2017) and for this reason we did not tailor the pamphlet content for subgroups. In the next phase of research, the draft pamphlet was presented to patient focus groups of male Veterans at the Minneapolis VA to gauge responses and reactions, which are presented in the current paper. Focus groups were used because they provide a forum to elicit and identify the range of individual reactions; the group dynamic can help explore and clarify perspectives (Morgan, 1996). This research was approved by the Minneapolis VA and University of Minnesota Institutional Review Boards.
The original study protocol called for designing the pamphlet with input from men across stratified age, race, and PSA subgroups, as above, then testing it in four unstratified patient focus groups that included men of different ages and races, in order to verify our findings that content did not need to be tailored to subgroups. However, the number of eligible Black men in our sampling frame was limited by Minneapolis VA demographics and was further restricted by excluding those men who had recently participated in individual interviews. The first four focus groups recruited and conducted in July and August 2013 included only White men (and one man whose race was listed as "other" in the electronic medical record, and is grouped with the White men hereafter). Due to inadequate representation, the study PI then amended the study protocol and sought additional research funding to recruit and conduct two more focus groups with Black men; these were completed in November 2015. The decision to conduct Black-specific focus groups at that point was motivated by the goal of increasing representation of Black men's views and informed by the principle that more homogeneous groups have more open conversations (Branscombe et al., 1999; McFall & Hamm, 2003). Prostate cancer screening evidence and guidance did not change between 2013 and 2015, though public awareness of recommendations not to screen routinely likely increased over time. The number of focus groups conducted was decided a priori due to time constraints and resource availability and was not determined by data saturation. In qualitative research, data saturation has been defined as "the point in data collection and analysis when new information produces little or no change to the codebook" (Guest et al., 2006). Previous work has found that three to six focus groups are likely to identify 90% of themes on a topic (Guest et al., 2017), and including a saturation assessment is standard (Tong et al., 2007). To assess whether significant additional responses may have been missed by limiting the number of focus groups to these pre-determined numbers, we evaluated post-hoc data saturation.
Recruitment
Eligible participants were identified within the VA electronic medical record (EMR) using the following criteria: male sex, age 50-85, one or more outpatient visits at the Minneapolis VA Health Care System in the past year, and PSA test in the past 24 months. For the last two focus groups, only men whose race in the EMR was listed as Black or African American were considered eligible. Eligible participants were required to have a PSA test in the past 24 months because the pamphlet was specifically designed to address questions from men who had been previously screened. Men were excluded if they had a diagnosis of prostate cancer or dementia, were nursing home residents, non-English speakers, or did not have a complete address and phone number. The sampling frame of eligible men at the time of study initiation included approximately 10,850 non-Black men and 695 Black men. Due to a resource-intensive recruitment process and a relatively small number of participants needed, a random sample of 200 eligible men was selected for invitation to participate in the initial four focus groups. A second random sample of 100 eligible Black men was selected for invitation to the fifth and sixth focus groups.
Potential participants were notified of the study by mail and provided an opportunity to opt out; those who had not opted out were called by a study coordinator in random order and invited to join one of the planned focus groups. The target focus group size was 5-10; recruitment calls were discontinued once 8-10 men had agreed to participate in each focus group. A total of 102 outreach calls were made and 55 men were scheduled into focus groups. Men who agreed to be scheduled for a focus group were mailed a copy of the informed consent for review. Consents for participation and audio-recording were then reviewed in-person prior to the start of each focus group, and participants were provided an opportunity to ask questions about potential risks and benefits of participation prior to signing. Participants were compensated $40 for their time after participation.
Data Collection
All six focus groups were conducted by the same experienced facilitator, K.W., who is White, female, and has a doctorate in education, using a semistructured facilitator guide (Appendix A). As qualitative researchers, we used critical self-consciousness to observe how investigator identities (in this case mainly female, White, and highly educated) would influence power dynamics between study staff and study participants. Specifically, we considered the potential effects of having investigators observe the focus groups, and discussed whether identifying investigators as designers of the educational pamphlet might inhibit critical conversation. In the end, two to three study team members (a mix of investigators and research coordinators) observed each focus group to assist with logistics (serving coffee, collecting consent forms, etc.) and to take notes. One investigator was identified as a physician, to assist with answering any medical questions that arose, but the other investigators were only identified as study team members, to minimize the power differential and encourage open conversation. Focus groups took place at the Minneapolis VA medical center. During each focus group, participants were invited to share their familiarity or perspectives on prostate cancer screening, and were then guided, page by page, through a review and discussion of a 10-page draft pamphlet summarizing evidence and recommendations about PSA screening (Appendix B). Focus groups were audio-recorded, then transcribed by professional transcription services.
Analysis
Transcriptions were de-identified and imported into NVivo (2015, version 11) software for data management and analysis. Coders were not blinded to participant race, as this was not possible due to multiple references to race in the focus group transcripts. Focus group transcripts were analyzed using thematic analysis (Clarke et al., 2015). Two investigators, E.D. and M.P., derived a draft codebook by applying In Vivo and Initial Coding methods to the focus group transcripts for first cycle coding (Saldaña, 2015). First cycle coding relied on both deductive codes, based on prior work (Partin et al., 2017) and theory from the PHM (Myers, 2005), as well as inductive, content-driven codes.
The PHM provides a relevant framework for analysis because it is rooted in several classic health behavior models, including the Health Belief Model (Strecher & Rosenstock, 1997), the Theory of Reasoned Action (Fishbein & Ajzen, 1980), and Social Cognitive Theory (Bandura, 1986), and has been validated across both Black and White populations for prostate cancer screening (Ritvo et al., 2008; Tiro et al., 2005) and other preventive health behaviors (Vernon et al., 1997). According to the PHM, background factors, which include demographics, medical history, and past screening behavior, interact with cognitive and psychological representations of screening and disease, as well as social support and the influence of family members or health professionals, to affect behavioral intentions and ultimately health behaviors. Programmatic factors within the health system may also facilitate intention and screening. Cognitive representations about disease include knowledge and awareness about the etiology of disease, perceived susceptibility, severity and duration of disease, and effectiveness of screening. Psychological or affective representations include the emotional reactions to these things. A person considering screening will compare the cognitive and affective representations associated with behavioral alternatives (i.e., screening or not screening) using a process of preference clarification. After the person has engaged in their chosen behavior, an outcome appraisal allows them to compare the anticipated consequences with their actual experience; this appraisal then feeds back into future decisions (Myers, 2003, 2005).

All focus group transcripts were reviewed by two investigators, and the investigators met after each was reviewed to compare coding, define or revise codes, and update the codebook. After all focus group transcripts had been reviewed twice (once by each investigator), the resulting revised codebook was then applied to each focus group transcript by one investigator. The other investigator performed a 10% coding check. The two investigators met to discuss agreement or disagreement in coding following application of the codebook to each focus group transcript, and final arbitrated coding decisions were then applied.
Themes were developed and refined by E.D. and M.P. using thematic analysis (Clarke et al., 2015). During the coding process, emerging concepts were noted by investigators. Following complete coding of all six focus groups, NVivo software was used to tabulate code frequencies and patterns across and between racial groups. Some themes were common to all focus groups, whereas others emerged only in White or only in Black focus groups. Qualitative analysis was conducted between 2016 and 2018. Post-hoc data saturation was assessed by examining the number and percent of codes included in our final codebook that were identified after each focus group was coded. Because some new codes arose only in the focus groups with Black men, which were numbered five and six out of six, we also evaluated the proportion of codes that were found only in White focus groups (focus groups one to four).
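The post-hoc saturation tabulation described above lends itself to a simple computation. The sketch below is a minimal illustration, not the authors' code (NVivo exports were presumably used), and the incidence data are randomly generated stand-ins: given a codes-by-focus-group table, it computes the cumulative share of final codes first identified by each group.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
codes = [f"code_{i:03d}" for i in range(152)]   # 152 codes in the final codebook
groups = [f"FG{i}" for i in range(1, 7)]        # FG1-FG4 (White), FG5-FG6 (Black)

# Incidence matrix: True if a code was applied in that focus group (stand-in data)
applied = pd.DataFrame(rng.random((152, 6)) < 0.4, index=codes, columns=groups)
applied = applied[applied.any(axis=1)]          # final codebook: every code appears somewhere

first_seen = applied.idxmax(axis=1)             # first focus group in which each code appeared
cumulative = first_seen.value_counts().reindex(groups, fill_value=0).cumsum()
print((100 * cumulative / len(applied)).round(1))  # cumulative % of codes identified per group
```

Applied to the real codebook, this kind of tabulation yields the percentages reported under Data Saturation below (e.g., 65% of codes identified after the first group).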
Results
Six 90-minute focus groups were conducted with 5-9 participants each, for a total of 44 participants. Across four focus groups with White men, 32 participants ranged in age from 55 to 81, with mean age 68.5 years. In the two focus groups with Black men, 12 participants ranged in age from 55 to 80, with mean age 64.9 years. Major themes are summarized below with supporting quotes, grouped under the five sets of factors that predict screening behavior in the Preventive Health Model (Myers et al., 1999).

(1) Background Factors.

Black focus group participants addressed the background factors race and age, and described prior negative experiences with prostate cancer screening.
Experiences and Suspicion of Discrimination.
Only Black participants brought up the possibility that lack of good screening tools and recommendations not to routinely screen for prostate cancer were motivated by racism or ageism. They also discussed potential harms not mentioned in the pamphlet, including the concept of "cutting" and spread following biopsy or treatment (a lay model of cancer).
"Some other information regarding cancer. You see that the body has a way of enveloping any foreign ailments. And see the cancer once they start to operate they release the fluids that surround cancer that the body has protected you with. Actually the cure is worse than the ailment." (Black participant) "Once you open up the prostate and let all those cancer cells get out of there it could go anywhere." (Black participant)
(3) Social Support and Influence and (4) Programmatic Factors.
In describing baseline knowledge of prostate cancer and PSA testing, men in all focus groups reported that family and peers played a minor role and that the health care system facilitated screening without enabling informed decision making. Black men, but not White men, discussed extensively what they could do to stay healthy and prevent prostate cancer using self-care solutions. Men in the Black focus groups wondered whether diet may contribute to elevated prostate cancer risk. Some suggested alternative medicine or herbal supplements.
"As far as African-Americans, that's the term used, having prostate cancer and have a larger number than the Caucasian or whatever, I think there's more issues involved. I mean, you know, bad health overall, stress. I mean, many things contribute to diet. So, I mean, if these things are all compounded. . ." (Black participant) "So, really, the best thing for us to do is to try to stay healthy, exercise and eat properly." (Black participant) "What you can do [to stay healthy], such as, I think I heard that boron was good, selenium was good." (Black participant)
Data Saturation
The final codebook included 152 individual codes: 10 top codes, 43 level-1 sub-codes, 71 level-2 sub-codes, and 28 level-3 sub-codes. Sixty-five percent of the 152 final codes were identified in the first focus group, with 88% identified by focus group four and 97% by focus group five. Of the 135 codes identified in the first four (White) focus groups, 96% were identified by focus group three. Based on these findings we are reasonably certain that conducting additional focus groups would not have resulted in substantially more or different responses.
Discussion
In this qualitative study, White and Black men reviewed an educational pamphlet that presented evidence about prostate cancer screening benefits and harms and explained why some physicians recommended against routine PSA testing. Both White and Black male focus group participants expressed negative affective reactions to screening recommendations, including surprise, resistance, fear, irritation, and confusion, as well as low baseline familiarity with prostate cancer associated with limited family or peer influence or past programmatic support. However, discussions of background factors such as age and race, cognitive and psychological representations about the salience, coherence, and potential consequences of screening, as well as intentions for exercising personal agency in prevention and screening, diverged between White and Black focus groups. We review those differences below. Participants in the four White focus groups highlighted the salience and coherence of screening, minimized the harms, and emphasized personal choice in screening decisions. Previous research with the Preventive Health Model (PHM) has found that belief in the salience and coherence of screening (i.e., the belief that screening is important, effective, and convenient) is closely associated with intention to screen among both Black and White men (Myers et al., 1994, 1996; Vernon et al., 1997). In our study, reviewing scientific evidence about the relatively low screening efficacy and potential harms of screening did little to alter this pro-screening stance among participants in the White focus groups. Instead, White focus group participants referred to their prior positive experiences with screening. Men reported that they had previously experienced relief and reassurance from a normal test result. They felt confident that there would be opportunities for informed conversations to avoid a cascade of downstream consequences. Outcome appraisals from past experiences outweighed anticipated or reported potential consequences of screening during the process of preference clarification (Myers, 2003, 2005). This finding is consistent with previous work demonstrating that informed discussions may have less impact on screening intentions than underlying beliefs and prior experiences (Farrell, 2002; Riikonen et al., 2019).
In contrast to the White male focus groups, participants in the Black male focus groups responded to the educational pamphlet by devoting almost no discussion to potential benefits of PSA testing. Despite their elevated risk for prostate cancer, Black men were deterred by the potential harms of PSA screening described in our educational pamphlet, and added additional harms to the conversation. Whereas previous work has found that awareness of Black men's elevated risk for prostate cancer was associated with receptivity to screening (Myers et al., 1994), another study showed that few Black men perceived their personal risk as being high (Myers et al., 1996). Black men also brought up experiences of discrimination in healthcare and racism in scientific research. These prior experiences with discrimination may have contributed to outcome appraisals that swayed men's assessments of preventive health behavioral alternatives (Myers, 2003, 2005). Critical Race Theory teaches us to consider how the racialized experiences of Black people may contribute to health beliefs and behaviors (Ford & Airhihenbuwa, 2010a, 2010b). Previous studies have reported that Black men identify racism, acting through intergenerational oppression, poverty and diet, as a root cause of prostate cancer disparities (Hunter et al., 2015). Participants in the Black focus groups emphasized personal and collective responsibility for cancer prevention outside of the healthcare system, through diet, exercise, and alternative medicines. This response is consistent with prior findings that Black men see prostate cancer as a collective threat requiring a coordinated approach for community prevention and protection (McFall et al., 2006). Previous work has reported that Black men are more likely to consider prostate cancer screening with digital rectal exam (DRE) as an affront to masculinity, compared to White men (Winterich, Quandt, et al., 2009). The role of masculinities, or gender identity, was not a prominent part of our focus group discussions, likely because our study focused on PSA testing, rather than DRE.
The differences between focus groups comprising White men and Black men surprised us because they contrasted with our findings from individual interviews in an earlier part of this research. In those interviews, we encountered comparable perspectives, and a similar range of reactions to screening recommendations, across racial groups (Partin et al., 2017). This apparent discrepancy may be attributable to a more significant race-of-interviewer effect in the one-on-one interviews that derives, in part, from social desirability to avoid tension during an interview (Bobo & Fox, 2003). Participants in race-specific focus groups may feel more comfortable acknowledging the role of prejudice than individual interviewees-consistent with a general principle that focus groups with increased homogeneity have more open conversations (Branscombe et al., 1999;McFall & Hamm, 2003). Non-Black clinicians should be aware that racial dynamics can influence their one-on-one conversations with Black patients.
Many non-Black primary care providers may be hesitant to engage in discussions of race or racism with Black or other minority patients. However, our findings are consistent with prior studies that found that experiences of racism inform Black patients' perspectives on healthcare and intentions related to screening and treatment (Hunter et al., 2015). Successful health communication relies on understanding and addressing patients' perspectives, even when those are uncomfortable for providers to confront. The counseling literature calls on counselors in multicultural environments to recognize their own assumptions, values, beliefs, biases and privilege in order to conduct culturally competent counseling (Ratts et al., 2016). Several prominent medical journals have recently published commentaries calling for clinicians to begin more explicitly addressing racism in health education and patient communication (Acosta & Ackerman-Barger, 2017;Carroll, 2020;Hardeman et al., 2016). Future work should elicit patient perceptions and reactions to explicitly addressing racism in the context of cancer screening and treatment decision conversations.
Our work has several strengths: few prior qualitative studies have included and compared responses from both White and Black men, and ours is the first to do so since the 2012 U.S. Preventive Services Task Force guidelines were released (James et al., 2017). We tested a pamphlet that included information about both benefits and harms of prostate cancer screening; harms are often omitted from cancer screening research and guidelines (Caverly et al., 2016). Our draft educational pamphlet used recommended techniques to quantify and visually communicate the absolute risks and benefits of screening (Trevena et al., 2013). As men reviewed the pamphlet, we were able to assess their cognitive and affective responses to the decision-making situation, which have long been overlooked in the development of factual decision aids (Myers, 2005).
Our findings are tempered by some limitations: the views of White and Black male Veterans in the upper Midwest may differ from those of men in other parts of the United States and from non-Veterans. All of the men in this study had been previously screened with PSA, which may influence their opinion of the test. The White and Black focus groups were also conducted several years apart due to resource limitations, leading to the possibility that secular trends could cause the differences in reactions between groups. Contemporary events or contextual factors present during the data collection periods may have affected the focus group discussions. However, there were no significant changes to prostate cancer screening guidelines during this interval, and all focus groups were conducted by the same experienced facilitator. The use of only a female White focus group facilitator is a potential limitation, however. We do not know how responses may have differed with a male or Black facilitator, though we note that participation was robust among both White and Black focus groups. Greater racial diversity among our research team in general would likely have provided additional perspectives in the conduct and analysis of this study. Since completing this study, our research group has developed and engaged with a diverse research advisory panel composed of patients who represent the communities we study. We suggest that future work in this field build on similar partnerships, and consider the use of a critical racial analytical lens in study design, conduct, and analysis. We did not consider factors such as age, class, or education level in our analysis. Prior studies have found that Veterans who use VA care do not experience the same degree of difference in healthcare access and outcomes as patients in other health systems (Riviere et al., 2020). Focus group participants ranged in age from 55 to 81, including some men over age 70, an age group for which most guidelines continue to recommend against routine screening (Carter et al., 2018; National Center for Health Promotion and Disease Prevention, 2019; U.S. Preventive Services Task Force, 2018). Due to the mixed age-group format we are unable to differentiate responses by age. Now that most guidelines incorporate age and life expectancy into their recommendation statements (e.g., men ages 55-69 vs. men 70 years and older), future work should evaluate age- and life expectancy-specific responses to evidence and screening recommendations. The pamphlet presented to the focus groups in this study was designed in response to earlier guidelines that advised against routine PSA testing for all men. However, men's responses to evidence of benefits and harms of PSA testing, which has not changed substantially, remain relevant in light of newer guidance to have an informed discussion with patients.
Conclusions
Participants in White and Black focus groups reacted differently to evidence about benefits and harms of PSA screening, in part due to personal and historical experiences of discrimination in healthcare. These findings contribute to the body of knowledge about how men's varied perspectives and life experiences affect their responses to prostate cancer screening information.
Interactions between decision-making and emotion in behavioral-variant frontotemporal dementia and Alzheimer's disease
Abstract

Negative and positive emotions are known to shape decision-making toward more or less impulsive responses, respectively. Decision-making and emotion processing are underpinned by shared brain regions including the ventromedial prefrontal cortex (vmPFC) and the amygdala. How these processes interact at the behavioral and brain levels is still unclear. We used a lesion model to address this question. Study participants included individuals diagnosed with behavioral-variant frontotemporal dementia (bvFTD, n = 18), who typically present with deficits in decision-making/emotion processing and atrophy of the vmPFC, individuals with Alzheimer's disease (AD, n = 12), who present with atrophy in limbic structures, and age-matched healthy controls (CTRL, n = 15). Prior to each choice on the delay discounting task, participants were cued with a positive, negative or neutral picture and asked to vividly imagine witnessing the event. As hypothesized, our findings showed that bvFTD patients were more impulsive than AD patients and CTRL and did not show any emotion-related modulation of delay discounting rate. In contrast, AD patients showed increased impulsivity when primed by negative emotion. This increased impulsivity was associated with reduced integrity of bilateral amygdala in AD but not in bvFTD. Altogether, our results indicate that decision-making and emotion interact at the level of the amygdala, supporting findings from animal studies.
Introduction
Emotions play an important part in many of our decisions (Bechara et al., 2000; Clore and Huntsinger, 2007). Choosing to save for our children's education rather than buying our dream car not only involves options with different reward magnitudes and delays but also options with distinctive affective content. How emotions interact with decision-making processes, however, is still largely unresolved.
Relevant to this study, some key regions underlying decision-making, the vmPFC and the amygdala, are known to play a central role in emotion processing (Hommer et al., 2003; Lindquist et al., 2012; Herman et al., 2018; Kelley et al., 2018) and are extensively connected (Haber and Knutson, 2010; Schardt et al., 2010; Patin and Hurlemann, 2011; Plichta and Scheres, 2014). While the vmPFC appears to respond to both negative and positive stimuli (Winecoff et al., 2013; Yang et al., 2020), the amygdala is traditionally known from animal and human lesion studies as the hub for processing negative emotions (LeDoux, 1998; Adolphs et al., 2005). Human neuroimaging studies also support the view for a central role of the amygdala in processing negative emotions (Davis and Whalen, 2001), although amygdala activation during positive emotion processing has been reported as well (Garavan et al., 2001; Hamann and Mao, 2002). Because of their mutual connections, it is not surprising that contextual information such as emotion shifts choices on the delay discounting task toward being more patient or impulsive (Lempert and Phelps, 2016).
The majority of studies show that short (1.5 seconds, Guan et al., 2015) or long (15 seconds, Augustine and Larsen, 2011) exposure to negative emotional pictures increases the propensity to choose smaller-sooner over larger-later rewards, whereas exposure to positive pictures shifts decisions toward choosing larger-later rewards (Guan et al., 2015; Cai et al., 2019). Similar findings were also reported in studies using emotional episodic future thinking as the emotional cue (Liu et al., 2013; Lin and Epstein, 2014; Zhang et al., 2018). Some studies report opposite effects, specific to particular conditions, namely increased delay discounting following positive emotion in extraverted individuals (Hirsh et al., 2010) and decreased delay discounting following fearful faces (Luo et al., 2014). Arousing pictures, regardless of emotion, also tend to increase delay discounting (Wilson and Daly, 2004; Sohn et al., 2015).
This study aimed to identify the relations between decision-making and emotion processing and their biological mechanisms, using a lesion model. Inclusion of patients with behavioral-variant frontotemporal dementia (bvFTD) and Alzheimer's disease (AD), presenting with atrophy in the key brain regions of the reward and emotion network (vmPFC, limbic lobe), will clarify the role of emotion on delay discounting and the contribution of each brain region to delay discounting. bvFTD is a neurodegenerative condition characterized by marked changes to personality and interpersonal conduct, as evidenced by an increase in 'impulsive, rash or careless actions' (Rascovsky et al., 2011). Patients with bvFTD also show disruption in emotional processing (Lavenu et al., 1999; Keane et al., 2002; Fernandez-Duque and Black, 2005; Kipps et al., 2009; Kumfor et al., 2013a, 2014a). Atrophy is typically reported in emotion-specific brain regions, namely in the vmPFC and insula (Seeley et al., 2008), and extends into subcortical regions with disease progression (Landin-Romero et al., 2017). Given their behavioral deficits in decision-making and emotion processing and their atrophy of the vmPFC, we would anticipate a correlation between reduced grey matter intensity in the vmPFC and increased delay discounting, regardless of emotion.
The predominant clinical feature of Alzheimer's disease, in contrast, is an impairment in episodic memory (McKhann et al., 2011), mainly attributed to atrophy of structures of the mediotemporal limbic system such as the hippocampus and amygdala (Scheltens et al., 1992; McKhann et al., 2011; Poulin et al., 2011) and progressing to parietal, posterior cingulate and frontal cortices as the disease advances (Nestor et al., 2003; Dickerson et al., 2009; Landin-Romero et al., 2017). Early in the disease process, interpersonal behavior and emotion processing are relatively preserved in AD, although some facets of emotion processing and behavior are impaired (Cummings, 1997; Hoefer et al., 2008) and worsen with disease progression (Bidzan et al., 2012; Kumfor et al., 2014b; Bertoux et al., 2015a). AD patients, although overall capable of recognizing emotions, can be severely impaired in retrieving emotions relevant to autobiographical memories, for example (Irish et al., 2011; Kumfor et al., 2013a). While an emotion processing deficit is considered a core feature of bvFTD (Rascovsky et al., 2011), emotion processing remains, to some extent, comparatively preserved in AD (Lavenu et al., 1999). Despite relatively preserved decision-making and emotion processing compared to bvFTD, we would anticipate emotion to interact with delay discounting performance in AD. We would also expect reduced amygdalar grey matter integrity to increase delay discounting and weaken the interactions between emotions and decision-making.
Few studies have investigated delay discounting in bvFTD and AD. Increased delay discounting has been reported in bvFTD compared to AD (Lebreton et al., 2013; Bertoux et al., 2015b) and to healthy controls (Beagle et al., 2020), while Chiong et al. (2016) reported similar performance between bvFTD, AD and controls. AD patients show a trend for increased delay discounting compared to healthy controls (Lebreton et al., 2013; Bertoux et al., 2015b; Beagle et al., 2020). Brain-behavior associations with delay discounting performance in bvFTD and AD are less clear, as most studies only included behavioral data (Bertoux et al., 2015b), only reported patterns of brain atrophy (Lebreton et al., 2013) or investigated brain-behavior correlations across etiologies (Lansdall et al., 2017; Beagle et al., 2020). The only study investigating brain-behavior correlations in bvFTD and AD (Chiong et al., 2016) failed to find significant correlations between brain atrophy and delay discounting, probably because of the lack of between-group behavioral differences. Only one study investigated or reported brain-behavior correlations in bvFTD and AD on decision-making tasks other than the delay discounting task (Kloeters et al., 2013). Using the Iowa Gambling Task, this study found that decision-making deficits were attributed to frontal atrophy in bvFTD and to temporal/parietal atrophy in AD.
To identify the influence of emotion on delay discounting, we presented individuals diagnosed with bvFTD or AD, and healthy controls, with emotional or neutral pictures before each choice on a delay discounting task. Given their divergent patterns of brain atrophy and clinical features, we predicted that bvFTD would exhibit greater impulsivity overall compared with the other two groups, and that AD would be more impulsive than controls. In addition, we hypothesized that due to their deficits in emotion processing, bvFTD would not show any emotion-induced modulation of delay discounting. In contrast, we expected AD to show an emotion-induced modulation of delay discounting similar to that of controls, namely increased delay discounting, that is, impulsivity, for negative emotions and decreased delay discounting for positive emotions. At the anatomical level, we expected the decision-making deficits to relate to distinct neural structures (Kloeters et al., 2013). Based on lesion studies (Sellitto et al., 2011; Peters and D'Esposito, 2016), we predicted that increased delay discounting in the bvFTD group would correlate with decreased grey matter intensity in the vmPFC, regardless of emotional valence. In the AD group, given the limited vmPFC atrophy, we anticipated that atrophy of the amygdala and other limbic structures would be related to increased delay discounting, as demonstrated in animal studies (Winstanley et al., 2004; Floresco and Ghods-Sharifi, 2007; Ghods-Sharifi et al., 2009). In addition, because of its central role in processing negative emotion, we also hypothesized that reduced grey matter intensity in the amygdala in AD would counteract the expected increased delay discounting in the negative condition.
Participants
Twenty-two patients diagnosed with bvFTD, 15 patients with AD and 15 education- and age-matched healthy controls were recruited from FRONTIER, the frontotemporal dementia research clinic in Sydney, Australia. Calculation of sample size was based on an a priori power analysis using G*Power (Faul et al., 2007). For an alpha level of 0.05, an anticipated effect size of 0.06 (medium) and a power of 0.80, the estimated total sample is 36 participants (12 in each group). All patients underwent a comprehensive neurological examination, a neuropsychological assessment, and a structural brain MRI. Diagnosis was established according to relevant clinical diagnostic criteria at the time of testing for probable or possible bvFTD (Rascovsky et al., 2011) and AD (McKhann et al., 2011). Diagnosis was established by multidisciplinary agreement based on cognitive, clinical and imaging data. Exclusion criteria for patients and controls included: presence of a primary psychiatric disorder, presence of other dementia or neurological disorders, and/or history of alcohol or substance abuse. All healthy controls underwent the comprehensive neuropsychological assessment and the brain MRI and were required to score >88/100 on the ACE-III to ensure they did not have any significant cognitive impairments. All participants or their Person Responsible provided informed consent in accordance with the Declaration of Helsinki. The South Eastern Sydney Local Health District and the University of New South Wales ethics committees approved the study.
Neuropsychological assessment
The ACE-III was used to assess general cognition (Hsieh et al., 2013;So et al., 2018). Disease severity was assessed with the Frontotemporal Lobar Degeneration-Modified Clinical Dementia Rating Scale Sums of Boxes (CDR-FTLD SoB) (Knopman et al., 2008), and disease duration was measured in years from the first onset of symptoms.
Delay discounting task
The ability to delay gratification was assessed with the Monetary Choice Questionnaire (MCQ, Kirby et al., 1999). The MCQ comprises 27 dichotomous choices asking participants to choose between a smaller, immediate monetary reward and a larger, delayed monetary reward (e.g. 'Would you prefer $15 today or $35 in 13 days?'). Estimates of delay discounting were calculated across all reward magnitudes as well as for each reward magnitude separately, categorized as low- ($25-35), medium- ($50-60) and high-magnitude ($75-85) trials. Indifference points were calculated with the classically used hyperbolic discounting equation V = A/(1 + kD) (Mazur, 1987), where V represents the present value of the delayed reward A at delay D, and k is a free parameter that determines the discount rate. Larger values of k indicate a preference for the smaller immediate reward. Because of skewness, k values were log-transformed (logk) (Gray et al., 2016). Although the monetary rewards were hypothetical, real and hypothetical rewards lead to similar patterns of discounting (Johnson and Bickel, 2002; Madden et al., 2003). Prior to each choice, an emotional picture (Positive, POS; Negative, NEG; or Neutral, NEU) was presented for 5 seconds and participants were instructed to vividly imagine that they were witnessing the event/content depicted in it (Figure 1).
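To make the estimation step concrete, the sketch below is a minimal stand-in for the Kirby-style scoring procedure, not the authors' analysis code; trial values and responses are hypothetical, and the base-10 log transform is assumed. It finds the k that best reproduces a set of observed choices under the hyperbolic model.

```python
import numpy as np

# Each trial: (immediate amount, delayed amount, delay in days); values are illustrative
trials = [(15, 35, 13), (25, 60, 14), (54, 80, 30), (31, 85, 7)]
chose_immediate = np.array([True, False, True, False])  # hypothetical responses

def present_value(amount, delay, k):
    """Hyperbolic present value V = A / (1 + kD) (Mazur, 1987)."""
    return amount / (1 + k * delay)

# Grid search: the best-fitting k maximizes agreement between the model's
# predicted choices (take the immediate reward when it exceeds the delayed
# reward's present value) and the observed choices.
k_grid = np.logspace(-4, 0, 2000)
agreement = np.array([
    np.mean([(now > present_value(later, delay, k)) == obs
             for (now, later, delay), obs in zip(trials, chose_immediate)])
    for k in k_grid
])
k_hat = k_grid[agreement.argmax()]
logk = np.log10(k_hat)  # larger logk indicates a preference for immediate rewards
print(f"k = {k_hat:.4f}, logk = {logk:.2f}")
```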
To confirm that they understood the task, participants completed a training session consisting of three trials, during which they were asked on one random trial to indicate (i) which choice would pay sooner and (ii) which choice would pay greater. Only participants who completed the training session and answered the control questions correctly were retained for the analyses.

Fig. 1. Experimental design. The delay discounting task consisted of three blocks containing either positive (POS), negative (NEG) or neutral (NEU) pictures, presented in randomized order. Participants were first instructed to vividly imagine witnessing the picture and then asked to make a choice on the delay discounting task.
Participants completed three blocks (POS, NEG or NEU) of the delay discounting task in a randomized order. Each trial began with a fixation cross presented on a 21.5 inch monitor for 500 ms, a picture displayed for 5000 ms and a screen containing both choices displayed until participants responded. An interstimulus interval (ISI) of 1000-2000 ms preceded the following trial. Participants indicated their choices by pressing the left or right arrow of a keyboard, according to the choice displayed on the left or the right of the screen. Each block lasted approximately 5 min. The three blocks were separated by a 5-minute break during which participants completed various questionnaires. Stimulus delivery and subjects' responses for both tasks were controlled using E-prime 2.0 software (Psychology Software Tools, Pennsylvania, USA).
Questionnaires
Between each delay discounting block, participants completed the present and future sections of the Zimbardo Time Perspective Inventory, which comprises 37 items ranging from 1 (very untrue) to 5 (very true) and grouped into present-hedonistic, present-fatalistic and future dimensions (Zimbardo and Boyd, 2015).
At the end of the experimental session, participants rated valence and arousal for a subset of pictures (n = 15) of each emotion category using the Self-Assessment Manikin (Lang et al., 1997) and a scale from 1 to 9 (valence: 1 = very negative to 9 = very positive; arousal: 1 = relaxed to 9 = aroused). The picture remained on the screen until the response was recorded.
Statistical analyses
Data were analysed using IBM SPSS Statistics, 24.0 (SPSS Inc., Chicago, Ill., USA). Normally distributed variables, as determined with Shapiro-Wilk tests, were compared across groups using mixed or one-way ANOVAs followed by Sidak post hoc tests. Variables not normally distributed across our sample were analysed by Kruskal-Wallis ANOVA followed by Mann-Whitney U tests. Categorical measures (e.g. sex) were analysed by Chi-square tests. Effect sizes are reported using partial eta-squared (ηp²).
We investigated delay discounting (logk) with a 3 × 3 mixed ANOVA with a within-subject factor of Emotion (POS, NEG or NEU) and a between-subject factor of Group (bvFTD, AD and CTRL). Significant interactions were followed up with simple effects at each combination of levels of the other factor, followed by Sidak post hoc tests. Additionally, we investigated effects of Emotion for each reward magnitude separately using the same statistical analysis.
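For readers who prefer an open-source workflow, the same design can be run outside SPSS. The sketch below uses the pingouin package (an assumption; the authors used SPSS), with a hypothetical long-format CSV whose column names are illustrative.

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format table: one row per participant x emotion block,
# with columns subject, group (bvFTD/AD/CTRL), emotion (POS/NEG/NEU), logk
df = pd.read_csv("logk_long_format.csv")

aov = pg.mixed_anova(data=df, dv="logk", within="emotion",
                     between="group", subject="subject", effsize="np2")
print(aov)  # F, p and partial eta-squared (np2) for Group, Emotion, interaction

# A significant interaction would be followed by simple effects of Emotion
# within each group (the text reports Sidak-corrected post hoc tests):
for name, sub in df.groupby("group"):
    print(name)
    print(pg.rm_anova(data=sub, dv="logk", within="emotion", subject="subject"))
```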
Correlations between the significant delay discounting conditions (POS, NEG and NEU) in bvFTD and AD and the respective valence/arousal ratings were analysed using the Spearman rank coefficient. Only correlations surviving Bonferroni correction for multiple comparisons were retained.
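A minimal sketch of this correlation step follows; the arrays are synthetic stand-ins and the number of tests entering the Bonferroni correction is an assumption.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
logk_pos = rng.normal(-1.5, 0.8, 12)                   # hypothetical AD logk, POS condition
valence_pos = 7 - 2 * logk_pos + rng.normal(0, 1, 12)  # hypothetical valence ratings

rho, p = spearmanr(logk_pos, valence_pos)
n_tests = 9                               # assumed: 3 conditions x 3 ratings per group
p_bonf = min(p * n_tests, 1.0)            # Bonferroni adjustment; retain if p_bonf < 0.05
print(f"rho = {rho:.2f}, corrected p = {p_bonf:.3f}")
```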
Data pre-processing. Voxel-based morphometry (VBM) was conducted using SPM12 (Wellcome Department of Cognitive Neurology, London, UK) in Matlab R2018a (Mathworks, Natick, Massachusetts, USA). First, T1-weighted images were segmented into six tissue probability maps in the native space. Both the original T1-weighted images and the segmented maps were screened during image quality control. Two participants (1 bvFTD and 1 AD) were removed from the subsequent pre-processing steps and statistical analyses due to motion during the acquisition or segmentation failure. A DARTEL template was computed using all the grey and white matter probability maps which satisfied our criteria for quality control. Last, grey matter probability maps were spatially normalized to the Montreal Neurological Institute (MNI) space according to the transformation parameters from the corresponding DARTEL template. Images were modulated and smoothed with a Gaussian filter of full width at half maximum of 8 mm.
VBM analyses.
Patterns of grey matter intensity decrease were explored using a whole-brain general linear model comprising bvFTD, AD and CTRL groups as well as age and total intracranial volume (to account for individual differences in head size) as regressors of non-interest. The total intracranial volume was assessed in the patient's space prior to spatial normalization by summing thresholded grey matter, white matter and corticospinal fluid probability maps (threshold = 0.2) and counting non-zero voxels. Differences in grey matter intensities between groups (bvFTD vs control; AD vs control) were assessed using t-tests.
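The total intracranial volume computation described above is straightforward to reproduce. The sketch below (using nibabel as a stand-in for the authors' SPM/Matlab code; file names are hypothetical) thresholds each native-space tissue map at 0.2, sums them, and converts the non-zero voxel count to a volume.

```python
import nibabel as nib
import numpy as np

# Native-space tissue probability maps from segmentation (SPM's c1/c2/c3
# naming convention assumed; file names are hypothetical)
tissue_maps = ["c1_subject.nii", "c2_subject.nii", "c3_subject.nii"]  # GM, WM, CSF

img = nib.load(tissue_maps[0])
voxel_volume_ml = np.prod(img.header.get_zooms()[:3]) / 1000.0  # mm^3 per voxel -> ml

# Threshold each map at 0.2, sum, and count non-zero voxels, as described above
thresholded = [(nib.load(f).get_fdata() > 0.2) for f in tissue_maps]
tiv_ml = np.count_nonzero(np.sum(thresholded, axis=0)) * voxel_volume_ml
print(f"Total intracranial volume: {tiv_ml:.0f} ml")
```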
Next, correlations between delay discounting and grey matter intensity were investigated. Scores for each delay discounting condition (POS, NEG or NEU) were entered simultaneously into the design matrix. Age and total intracranial volume were included as regressors of non-interest. Correlations were first investigated between delay discounting and grey matter intensity combining all participants (bvFTD, AD and CTRL). Then, the same analyses described above were conducted to investigate correlations in each patient group combined with controls in order to identify the neural correlates of delay discounting distinct to each patient group. Inclusion of controls has been shown to increase statistical power to detect brain-behavior relationships across the entire brain (e.g. Kumfor et al., 2013b).
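An equivalent design can be expressed with nilearn's second-level GLM (shown here instead of the authors' SPM12 setup; the file names and covariate table are hypothetical): the three delay discounting scores enter the design matrix together, with age and TIV as nuisance regressors.

```python
import pandas as pd
from nilearn.glm.second_level import SecondLevelModel

# Hypothetical covariate table: one row per participant, with the three
# delay discounting scores plus age and total intracranial volume (TIV)
covars = pd.read_csv("participants.csv")
design = covars[["logk_pos", "logk_neg", "logk_neu", "age", "tiv"]].copy()
design["intercept"] = 1.0

# Smoothed, modulated grey matter maps, one per participant (names assumed)
gm_maps = [f"smwc1_{s}.nii" for s in covars["subject_id"]]

model = SecondLevelModel().fit(gm_maps, design_matrix=design)
# Where does grey matter intensity covary with delay discounting in the
# negative condition, controlling for the other conditions, age and TIV?
z_map = model.compute_contrast("logk_neg", output_type="z_score")
z_map.to_filename("zmap_logk_neg.nii.gz")
```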
Voxel-wise statistical analyses are reported using a cluster size of at least 50 voxels, at a statistical threshold of P < 0.001, uncorrected for multiple comparisons. This approach minimizes Type I error while balancing the risk of Type II error (Lieberman and Cunningham, 2009). Significant results were overlaid on the Montreal Neurological Institute (MNI) standard brain using MRIcron (https://www.nitrc.org/projects/mricron).
Demographic and neuropsychological profiles
Twenty-two individuals diagnosed with bvFTD, 15 with Alzheimer's disease and 15 older healthy controls were recruited for this study. Seven participants (4 bvFTD and 3 AD), however, failed the delay discounting task training session, and their data were therefore removed from the analyses. As such, the final samples included 18 bvFTD, 12 AD and 15 CTRL participants. As reported in Table 1, groups were well matched on age (P = 0.636). Although the groups were statistically matched on sex (P = 0.080) and education level (P = 0.066), the bvFTD group included marginally more men and had marginally lower education than the control or AD groups. Patient groups did not differ on disease duration (P = 0.261) or disease severity (CDR-FTLD SoB, P = 0.989) either. AD patients were, however, significantly more impaired on general cognition than bvFTD (ACE-III, P < 0.001). Both patient groups had significantly greater disease severity (P < 0.001) and more impaired general cognition (P < 0.001) than controls. Excluded participants tended to be more impaired on the ACE-III than their respective samples (bvFTD included: 82.0 ± 9.9, bvFTD excluded: 73.2 ± 14.6, P = 0.15; AD included: 65.9 ± 12.6, AD excluded: 53.6 ± 6.8, P = 0.06).
Correlations
Correlations between delay discounting and judgements of valence and arousal were apparent only in AD, where decreased delay discounting in the positive condition correlated with increased judgement of positive valence (r(10) = −0.802, P = 0.002; Figure 2E).
Patterns of atrophy.
Patterns of atrophy in the clinical groups were typical of these diseases (Nestor et al., 2003; Seeley et al., 2008; Landin-Romero et al., 2017; Table 1). Compared with CTRL, bvFTD showed decreased grey matter intensity in the medial prefrontal cortex, frontal and temporal gyri, ACC, as well as subcortical regions including the hippocampus and striatum. In contrast, AD showed a significant bilateral decrease of grey matter intensity in the medial temporal lobe, including the hippocampus and amygdala, as well as in the precuneus and the insula.
Neural correlates of delay discounting. Correlations between POS, NEG and NEU delay discounting and grey matter intensity revealed that, irrespective of diagnosis, increased delay discounting in the NEG condition was associated with reduced grey matter integrity in the amygdala (P < 0.001, cluster FWE-corrected) and occipital gyrus bilaterally (P < 0.001, uncorrected; Figure 3; Table 2). In contrast, no specific patterns of association emerged for the positive and neutral conditions. Further analyses on each patient group combined with controls showed distinct patterns of grey matter intensity in bvFTD and AD correlating with POS, NEG or NEU delay discounting (Figure 4; Table 3). Increased delay discounting in the NEG condition in AD was associated with reduced grey matter intensity in bilateral amygdala, vmPFC, ACC and hippocampus. No such associations were observed in the bvFTD group. Marginal frontal and temporal areas were associated with positive and neutral delay discounting, respectively, in AD and bvFTD.
Discussion
This study revealed different patterns of modulation of emotion on decision-making in the two most common younger-onset dementia syndromes, AD and bvFTD, which were associated with specific neural changes. Supporting our hypotheses, bvFTD patients showed greater delay discounting compared to AD and controls, but no modulation according to emotion. In contrast, AD patients showed increased delay discounting in the negative condition, which was associated with greater bilateral amygdala atrophy. No specific pattern of brain atrophy was observed in bvFTD.
The increased impulsivity observed in bvFTD aligns with previous studies reporting impulsive decision-making in this population (Strenziok et al., 2011; Gleichgerrcht et al., 2012; Bertoux et al., 2013, 2015b; Kloeters et al., 2013; Lebreton et al., 2013; Lansdall et al., 2017; Beagle et al., 2020). One recent report, however, failed to show any deficits on the delay discounting task in bvFTD compared with AD and controls (Chiong et al., 2016). The authors argued that this was due to the very early disease stage of their patients. Our findings challenge this interpretation, as we find evidence of decision-making deficits on the delay discounting task in patients with a similar disease severity (mean MMSE = 26, converted from ACE-III score, Matias-Guiu et al., 2018).
As anticipated, compared to AD, bvFTD patients failed to show the negative emotion-induced modulation of delay discounting, a finding compatible with a primary deficit in emotion processing in bvFTD. Patients with bvFTD indeed show deficits in recognizing negative emotions (Lough et al., 2006; Goodkind et al., 2015) and emotional expression in faces and voices (Keane et al., 2002; Lavenu and Pasquier, 2005), as well as emotional blunting (Mendez et al., 2006). Grossmann et al. (2010) showed that bvFTD patients were also less sensitive to negative contextual features when making social decisions: negatively biased scenarios were judged as less negative by bvFTD patients than by controls, whereas positively biased scenarios were rated equally in bvFTD and controls. Alternatively, these findings could be due to a failure in decoding the physiological arousal signals in response to negative emotional stimuli. Indeed, previous studies have reported reduced physiological responses (e.g. skin conductance) in response to emotional videos (Kumfor et al., 2019), unpleasant odours (Perry et al., 2017) or pain (Fletcher et al., 2015). bvFTD patients indeed judged pictures as less arousing than AD and controls, whereas valence ratings were similar across groups. This indicates that the emotional impairment in bvFTD results from a reduced arousal triggered by the pictures rather than a primary deficit in recognizing their emotional content. The specificity of the effect to the negative condition in AD could follow from an effect of arousal on delay discounting rather than an effect of negative emotion per se. Studies have indeed shown that arousing pictures, regardless of emotion, increased delay discounting compared to neutral pictures (Ariely and Loewenstein, 2006; Sohn et al., 2015). Future studies using objective measures of arousal (e.g. skin conductance) are needed to clarify this point.

Across groups, increased delay discounting for the negative (but not the positive or neutral) condition was associated with reduced grey matter integrity in bilateral amygdala and occipital gyri. Group-specific analyses indicated that this association was mediated primarily by the AD group, which showed reduced grey matter integrity in bilateral amygdala, vmPFC and parahippocampal gyri that correlated with increased delay discounting in the negative condition. These findings indicate that the amygdala is involved in delay discounting, especially within an emotionally negative context. Our findings demonstrate for the first time in humans that amygdala damage increases delay discounting, mirroring animal studies where excitotoxic lesions of the basolateral amygdala (BLA) increased delay discounting (Winstanley et al., 2004; Floresco and Ghods-Sharifi, 2007; Ghods-Sharifi et al., 2009). Impact of amygdalar damage on various decision-making tasks has been reported before (Bechara et al., 1999; Bar-On et al., 2003; Hanten et al., 2006; Brand et al., 2007; Weller et al., 2007; De Martino et al., 2010), but never on delay discounting to date.
The direction of the correlation between amygdala integrity and delay discounting was not anticipated given the known role of the amygdala in processing negative emotion. This finding adds to the structural neuroimaging controversy in the field of delay discounting as to whether delay discounting is correlated with increased or decreased grey matter intensity (Cho et al., 2013; Tschernegg et al., 2015; Pehlivanova et al., 2018). Importantly, although central to negative emotion processing, the amygdala is not the only brain region supporting negative emotion processing. Indeed, lesion studies have shown that the amygdala is necessary but not sufficient to process negative emotions as, apart from fear, amygdala damage does not preclude triggering and feeling other negative emotions (Anderson and Phelps, 2002; Feinstein et al., 2011). One candidate region is the vmPFC, which regulates emotion through top-down inhibition of the amygdala (Andrewes and Jenkins, 2019). Deficient inhibitory control of the vmPFC over the amygdala has been shown to lead to hyper-emotional reactivity and pathologically elevated levels of negative affect (Quirk and Gehlert, 2003; Milad et al., 2006; Rauch et al., 2006; Motzkin et al., 2015). In situations where affective/emotional signals are absent (i.e. delay discounting with no emotional component or neutral emotion), the amygdala would be less involved, possibly favoring vmPFC recruitment (Sellitto et al., 2011; Peters and D'Esposito, 2016). This interpretation is consistent with our lack of amygdala involvement in the neutral delay discounting condition. Our study suggests that the amygdala is involved in delay discounting rather than purely in processing emotions, in line with animal studies (Winstanley et al., 2004; Floresco and Ghods-Sharifi, 2007; Ghods-Sharifi et al., 2009).
The association that we found between increased delay discounting in the negative condition and reduced grey matter intensity in the occipital cortex further demonstrates the involvement of a broad network in emotion processing and the delay discounting task. fMRI and lesion studies have shown that emotional stimuli, particularly arousing, negative stimuli, recruit not only the amygdala but also the visual cortices (Vuilleumier et al., 2004; Sabatinelli et al., 2009; Motzkin et al., 2015). Similarly, involvement of the occipital cortex on delay discounting tasks has been attributed to visual attention (Luo et al., 2009) or to the vividness of the imagined event in episodic delay discounting tasks (Hu et al., 2016; Olson et al., 2009).

Some limitations should be acknowledged. Our sample of bvFTD patients was heterogeneous in terms of disease severity, disease duration and atrophy pattern compared to AD, which may have prevented correlations with other brain regions (e.g. vmPFC) from emerging in this group. It should be noted, however, that the absence of correlation in the bvFTD group alone does not indicate that both dementia groups statistically differed. Future studies using larger and more homogeneous groups are needed to resolve these concerns. Importantly, whereas the role of the vmPFC in delay discounting has been clearly demonstrated in lesion studies (Sellitto et al., 2011; Peters and D'Esposito, 2016) and brain stimulation studies (Manuel et al., 2019), evidence from bvFTD is less convincing (Chiong et al., 2016), even in very impaired and homogeneous bvFTD samples. Nevertheless, this limitation does not detract from our main message demonstrating the role of the amygdala in emotional delay discounting.
The absence of the predicted pattern of increased delay discounting in the negative condition in healthy controls when all reward magnitudes were grouped was unexpected. It is likely that this lack of emotion-induced modulation of delay discounting follows from overall reduced variability and impulsivity in our healthy control group, which prevented emotion-related modulations from clearly emerging. Emotion-induced modulation of delay discounting may thus be apparent only under high-impulsivity conditions. Supporting this interpretation, our findings show that older controls did exhibit the negative emotion-induced increase in delay discounting, but only under the condition of highest impulsivity (i.e. low-magnitude trials). Effects of emotion on delay discounting have typically been reported in young healthy adults (Hirsh et al., 2010; Augustine and Larsen, 2011; Benoit et al., 2011; Liu et al., 2013; Lin and Epstein, 2014; Luo et al., 2014; Guan et al., 2015; Sohn et al., 2015; Zhang et al., 2018). Findings on age-related differences in delay discounting have been mixed. Several studies have reported young individuals to be more impulsive on delay discounting tasks compared to older adults (Green et al., 1999; Whelan and McHugh, 2009; Jimura et al., 2011; Lockenhoff et al., 2011; Eppinger et al., 2012). Other studies, however, have shown no age-related differences (Samanez-Larkin et al., 2011; Roalf et al., 2012; Rieger and Mata, 2015; Seaman et al., 2016) or even increased delay discounting with age (Read and Read, 2004). In sum, although reduced compared to what we would have expected in young individuals, the control group did show emotion-induced modulation of delay discounting. The bvFTD group, in contrast, showed no emotion-induced modulation of delay discounting for any reward magnitude, further supporting their deficit in emotion processing.
Altogether, this study demonstrates the close connections between emotion processing and decision-making and the conditions under which these vary, in this instance dementia. Our findings have relevance for policymakers when developing health warning messages that aim to dissuade risky behaviors (Nan and Qin, 2019) or for improving negative health behaviors associated with increased delay discounting in clinical populations. A recent study showed promising findings demonstrating that computerized working memory training decreases the rate of delay discounting in older controls (Felton et al., 2019). Improvements in emotion recognition have been reported after computerized emotion recognition training in schizophrenia (Russell et al., 2006) and Huntington's disease (Kempnich et al., 2017), suggesting an avenue for emotion recognition training as a means of reducing impulsivity. These clinical interventions based on costs/benefits and emotion detection are, however, more likely to work in AD than in bvFTD.
Supplementary data
Supplementary Material is available at SCAN online.
Funding
This work was supported by funding to ForeFront, a collaborative research group dedicated to the study of frontotemporal dementia and motor neuron disease, from the National Health and Medical Research Council (NHMRC) (APP1037746) and the Australian Research Council (ARC) Centre of Excellence in Cognition and its Disorders Memory Program (CE11000102). ALM is supported by the Swiss National Science Foundation, grant no. P300P1_171478 and P4P4PS_183817. OP is supported by an NHMRC Senior Research Fellowship (GNT1103258). RLR is supported by the Appenzeller Neuroscience Fellowship and the ARC Centre of Excellence in Cognition and its Disorders Memory Program (CE110001021). FK is supported by an NHMRC-ARC Dementia Research Development Fellowship (GNT1097026).
Declarations of interest
None. | 2020-06-25T09:03:33.881Z | 2020-06-01T00:00:00.000 | {
"year": 2020,
"sha1": "9ff8edaeb0cb5db12b7c262ea81dd1aa5f54ef54",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1093/scan/nsaa085",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d128a20474cdd3d46b365f39c8e9b879e3b173da",
"s2fieldsofstudy": [
"Psychology",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Psychology"
]
} |
247569135 | pes2o/s2orc | v3-fos-license | Chemical synthesis of Torenia plant pollen tube attractant proteins by KAHA ligation
The synthesis of secreted cysteine-rich proteins (CRPs) is a long-standing challenge due to protein aggregation and premature formation of inter- and intramolecular disulfide bonds. Chemical synthesis provides reduced CRPs with a higher purity, which is advantageous for folding and isolation. Herein, we report the chemical synthesis of pollen tube attractant CRPs Torenia fournieri LURE (TfLURE) and Torenia concolor LURE (TcLURE) and their chimeric analogues via α-ketoacid-hydroxylamine (KAHA) ligation. The bioactivity of chemically synthesized TfLURE protein was shown to be comparable to E. coli expressed recombinant protein through in vitro assay. The convergent protein synthesis approach is beneficial for preparing these small protein variants efficiently.
These small proteins, like all CRPs, contain multiple disulfide bonds that contribute to protein stability and are essential for their biological activities. [48][49][50][51] In the case of the natural TfLURE and TcLURE proteins, the connectivity of the cysteine residues via disulfide bonds has not yet been identified because of the difficulty of isolating enough natural protein from plant pistils. Recombinant expression of CRPs is challenging due to difficulties with aggregation, precipitation, and identification of the correctly formed disulfide topology of active or natural proteins in the oxidative folding step. 52,53 These challenges have slowed progress in the investigation of the molecular mechanisms of pollen tube attraction due to poor access to LURE CRPs and the construction of associated probes. TfLURE and TcLURE can be expressed in E. coli and the activity has been demonstrated through in vitro pollen tube attraction assays. 43-46 However, the isomeric purity after in vitro oxidative refolding has not been analyzed and the proteins retained a His-tag, which was used for purification. Structurally defined, untagged LURE proteins would benefit from a reliable chemical synthesis that could support quantitative analysis, structure-activity relationship (SAR) studies and site-specific chemical modifications for bioimaging. 54 Using chemical synthesis, significant quantities of the linear CRPs can be produced, purified and folded under carefully controlled oxidative protein folding conditions. Herein, we document an efficient chemical synthesis of Torenia LURE proteins (TfLURE and TcLURE) and their analogues through α-ketoacid-hydroxylamine (KAHA) ligation.
Our initial attempt to synthesize TfLURE and TcLURE via 9-fluorenylmethoxycarbonyl solid-phase peptide synthesis (Fmoc-SPPS) as single chains was unsuccessful and prompted us to switch to a two-fragment α-ketoacid-hydroxylamine (KAHA) ligation strategy. KAHA ligation is the chemoselective ligation of an unprotected peptide fragment containing a C-terminal α-ketoacid with another unprotected peptide fragment containing an N-terminal 5-oxaproline. 55 The acidic reaction conditions of KAHA ligation are often beneficial for solubilizing the peptide segments. This variant of the ligation strategy leads to the introduction of a non-canonical homoserine (Hse) residue at the ligation site after rearrangement. 56 Hse differs from canonical serine by an additional methylene group.
Based on the amino acid sequences of the LUREs, we deemed the linkage between Phe21 and Ser22 suitable for KAHA ligation (see Fig. 1b). The preparation of peptides bearing C-terminal phenylalanine α-ketoacids is well established 57,58 and the ligation site at this particular position introduces only a minimal substitution of Ser to Hse, which is unlikely to have a strong effect on the protein structure, function, and biological activity. [59][60][61][62]
Protein synthesis
In our preliminary studies we prepared the peptide segments with unprotected cysteine residues, but we observed premature formation and scrambling of disulfide bonds during purification. In order to improve the handling of the peptide segments before refolding, we selected the orthogonal acetamidomethyl (Acm) group for Cys protection, which benefits from well-established deprotection protocols. 63 We prepared the Cys(Acm)-protected α-ketoacid segments using established Fmoc-SPPS procedures on polystyrene resin preloaded with protected Fmoc-Phe-α-ketoacid. 57,58 After cleavage of the peptides from the resin with acid, the crude peptides were purified via reverse-phase high performance liquid chromatography (RP-HPLC) to obtain the pure Cys(Acm)-protected α-ketoacid peptide segments 1a and 1b (Scheme 1) in good yields (16-20% based on the initial resin loading). The Cys(Acm)-protected 5-oxaproline segments were prepared using Fmoc-SPPS on HMPB-ChemMatrix® resin, followed by acidic cleavage and purification via RP-HPLC. This provided the desired peptide segments 2a and 2b in good yields (25-30%).
For the chemical synthesis of the TfLURE protein through KAHA ligation, we coupled 20 mM of segment 1a and 24 mM of segment 2a in 50% (v/v) aqueous dimethyl sulfoxide (DMSO) with 0.1 M oxalic acid at 60 °C for 24 h. The KAHA ligation reaction proceeded smoothly with maximum conversion to give the ligation product 3a. The resulting crude reaction mixture containing depsi-peptide 3a (Scheme 1 and Fig. 2A(ii)) was diluted ten-fold with 6 M guanidine hydrochloride (Gdn·HCl) and the pH was adjusted to 9.6. This induced an O-to-N-acyl shift to deliver the linear protein 4a. The reaction was monitored using analytical RP-HPLC (Fig. 2A(iii)) and was complete after 2 h. The rearranged protein was purified via preparative RP-HPLC to deliver the desired cysteine-protected protein 4a in 64% yield, and the identity was confirmed via electrospray ionization high-resolution mass spectrometry (ESI-HRMS) analysis.
The six cysteine Acm protecting groups of protein 4a were removed via treatment with 1% AgOAc (w/v) in 50% (v/v) aqueous AcOH at 50 °C. The deprotection reaction proceeded smoothly and was complete in 2 h. RP-HPLC purification yielded the completely deprotected peptide 5a
(Scheme 1 and Fig. 2B(ii)) in reduced form in 70% yield, and the identity was confirmed via ESI-HRMS analysis.
Refolding of the denatured protein was performed as previously described. 64 First, we dissolved the reduced protein 5a at 0.5 mM concentration in denaturation buffer (6 M Gdn·HCl, 0.3 M Tris, pH 7.0) and allowed it to stir at room temperature open to the air.
After one hour, the solution was diluted eight-fold with the folding buffer (5 mM reduced glutathione, 2.5 mM oxidized glutathione, pH 8.2) and stirred at 4 °C for 24 h. We were pleased to see that the major peak in analytical RP-HPLC had shifted and resulted in a new sharp peak, indicating the thermodynamically most stable, disulfide-linked, folded TfLURE protein 6a (Scheme 1 and Fig. 2C(ii)). The crude mixture was purified using preparative RP-HPLC and lyophilized to afford pure folded TfLURE protein 6a in 32% yield. The identity of the folded protein was confirmed via ESI-HRMS analysis (see Sections 3.4 and 3.5, ESI†). The ESI-HRMS data clearly indicated that the reduced peptide 5a lost a mass equivalent to six protons. This confirms the formation of three disulfide bridges in the folded TfLURE protein 6a.
Synthesis of rhodamine-labeled TfLURE
Fluorescent labeling is a powerful strategy to study the localization and dynamics of proteins involved in pollen tube guidance. 65 Therefore, we selected sulforhodamine B 67,68 as a fluorescent dye to attach selectively to the N-terminus of the TfLURE protein sequence. We coupled the sulforhodamine B dye onto the N-terminus of the Cys(Acm)-protected α-ketoacid segment while it was still on the resin; this segment was synthesized in an identical manner to 1a.
After acidic cleavage of the peptide from the resin, purification via RP-HPLC provided the desired sulforhodamine B-labeled peptide segment 1a′ in 12% yield (see Section S4.1, ESI†).
Under the optimized KAHA ligation conditions, we performed the ligation reaction between 20 mM of segment 1a′ and 24 mM of segment 2a in 1 : 1 DMSO/water with 0.1 M oxalic acid at 60 °C. The ligation reaction proceeded smoothly within 24 h to yield depsi-peptide 3b (Scheme 1 and Fig. 3A(ii)). The O-to-N-acyl shift was initiated by diluting ten-fold with 6 M Gdn·HCl and adjusting the solution to pH 9.6. After 2 h, the reaction mixture was purified using preparative RP-HPLC, which furnished the desired protein 4b in 54% yield (Scheme 1 and Fig. 3A(iii)). Upon Acm deprotection of 4b using 1% AgOAc (w/v) in 50% (v/v) aqueous AcOH for 2 h at 50 °C, we obtained the completely deprotected reduced peptide 5b in 60% yield (Scheme 1 and Fig. 3B(ii)).
The reduced peptide 5b was denatured in 6 M Gdn·HCl with 0.3 M Tris buffer (pH 7.0), stirred at room temperature for 1 h open to the air, and the protein was then folded under our optimized folding conditions by diluting eight-fold with folding buffer (5 mM reduced glutathione and 2.5 mM oxidized glutathione, pH 8.2) and incubating at 4 °C for 24 h. The folded protein was purified via RP-HPLC, resulting in the pure folded sulforhodamine B-labeled TfLURE 6b in 36% yield (Scheme 1 and Fig. 3C(iii)), which we further confirmed via ESI-MS analysis (see ESI†).
Bioassay of TfLURE 6a
We evaluated the bioactivity of our chemically synthesized TfLURE 6a through in vitro pollen tube attraction assays, as previously reported. 43,45 Gelatin beads containing 6a (100 nM) were placed in front of the pollen tubes of Torenia fournieri (ca. 50 μm away) and the protein gradually diffused from the beads. The synthesized TfLURE 6a attracted 45% (n = 11) of pollen tubes (Fig. 4 and 5). Comparable attraction (50%, n = 22) was observed with the recombinant His-tagged TfLURE protein. We therefore concluded that the homoserine mutation at the ligation site of synthetic TfLURE 6a did not affect the bioactivity.
Synthesis of TcLURE and analogues
After the bioassay confirmed that our synthesized protein 6a was active and that the introduction of homoserine did not affect pollen tube attraction, we sought to synthesize TcLURE. There are eight residues that differ between TfLURE and TcLURE, and these differences are responsible for the species-specific pollen tube attraction. Four of them (X1, X2, X3, X4) are embedded in the α-ketoacid segment of the synthetic route and the other residues (X5, X6, X7, X8) are in the 5-oxaproline segment (Fig. 1). We also elected to synthesize chimeric proteins (TfTcLURE and TcTfLURE) using our established KAHA ligation strategy. TfTcLURE and TcTfLURE can be prepared via exchange of the TfLURE and TcLURE segments 1a, 1b, 2a and 2b shown in Scheme 1.
Under our optimized KAHA ligation and rearrangement conditions, we performed ligation reactions according to the segment selection shown in Scheme 1 and synthesized proteins 4c, 4d and 4e in good yields (60-72%). Using our established Acm deprotection conditions, we removed the six Acm groups from 4c, 4d and 4e through treatment with 1% AgOAc in 50% aqueous AcOH for 2 h at 50 °C. The deprotected reduced proteins 5c, 5d, and 5e were isolated in 65-72% yields (Scheme 1). We then
performed the folding reaction under our optimized folding conditions for the reduced proteins 5c, 5d, and 5e. The folding proceeded smoothly and produced folded TcLURE 6c, TfTcLURE 6d and TcTfLURE 6e in 24-30% yields after RP-HPLC purification. The final purified folded proteins 6c, 6d and 6e were confirmed via ESI-MS analysis (see ESI†).
Bioassay of protein analogues
We examined TfLURE 6a and the synthetic analogues TfTcLURE 6d and TcTfLURE 6e through an in vitro pollen tube attraction assay to elucidate the species-preferentiality in pollen tube attraction. TcTfLURE 6e showed comparable activity (35%, n = 35) to TfLURE 6a (45%, n = 11). This suggests that the differing residues in the α-ketoacid segment (X1, X2, X3, X4) do not strongly contribute to species-preferentiality. On the other hand, TfTcLURE 6d showed lower attraction activity (17%, n = 34). Therefore, the residues embedded in the 5-oxaproline segment (i.e., X5, X6, X7, X8) appear to be more responsible for the preferentiality in the attraction of T. fournieri pollen tubes.
Conclusions
In conclusion, we developed a versatile synthetic strategy for the cysteine-rich pollen tube attractant LURE proteins from Torenia through KAHA ligation. The chemically synthesized TfLURE protein 6a showed attraction of pollen tubes comparable to the recombinant protein. We employed a rapid and efficient convergent synthesis to access the LURE proteins (TfLURE and TcLURE) and their hybrid variants (TfTcLURE and TcTfLURE). Using these proteins, we identified the amino acid residues (Gly26, Asp27, Trp33, and Ser51) responsible for the species-specific pollen tube attraction in T. fournieri.
Conflicts of interest
There are no conflicts to declare. | 2022-03-20T15:14:30.930Z | 2022-03-18T00:00:00.000 | {
"year": 2022,
"sha1": "72b46fd07cec3f0ca9ab3eef83ea4b785c843dfd",
"oa_license": "CCBY",
"oa_url": "https://pubs.rsc.org/en/content/articlepdf/2022/cb/d2cb00039c",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0582cab89c84c02559cdf945617132922a7485c3",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
219504995 | pes2o/s2orc | v3-fos-license | Research on the Signal Characteristics of Internal Gas Leakage in Safety Valve
Based on acoustic emission theory, this paper explores the signal characteristics of internal gas leakage in a safety valve. A simulation test-bed and an acoustic emission signal detection system for internal leakage in the safety valve are designed and built. Through theoretical analysis and experimental research, the influence of factors such as the inlet pressure of the safety valve and the size of the leakage hole in the sealing surface is explored, and the quantitative relationship between the gas volume leakage rate and the average signal level ASL of the characteristic parameters of the acoustic emission signal is established.
Introduction
The safety valve is an automatic valve which uses the pressure of the medium itself to discharge a specified amount of fluid, preventing the pressure of pressure-bearing devices and equipment such as boilers, pressure vessels or pressure pipes from exceeding a predetermined safety value and causing overpressure damage, so as to ensure the normal operation of the equipment and personnel safety [1]. However, when the safety valve is affected by factors such as medium corrosion, erosion, aging or improper operation, the sealing surface is easily damaged, resulting in internal leakage, causing medium loss and waste, and even serious safety accidents. Traditional detection methods require experienced technicians to listen to the sound and make a judgment, or require the safety valve to be removed from the pipeline for off-line detection, which is time-consuming and laborious and cannot track the internal leakage of the safety valve in time.
Acoustic emission detection technology is a dynamic non-destructive detection method, with the advantages of convenient detection, no need to stop production and low cost [2][3][4][5]. It is of great significance to use this technology to monitor the internal leakage of the safety valve so as to repair and replace the damaged safety valve in time, prevent medium waste and reduce the probability of safety accidents.
Acoustic emission theory
Acoustic emission refers to a physical phenomenon [6] in which an object or material subjected to deformation or external force produces a transient stress wave due to the rapid release of elastic energy, as shown in Figure 1. When acoustic emission occurs in a material, each acoustic emission signal emitted by the acoustic source contains information about the internal structure or defect properties and state changes of the material. Therefore, sensitive instruments can be used to receive and process the acoustic emission signal. Through analysis of the characteristic parameters of the acoustic emission source, the position, degree of state change and changing trend of the internal defects of materials or structures can be inferred. The acoustic emission signal of internal leakage in the safety valve carries information about the leakage point. The acoustic emission signal is picked up by the acoustic emission sensor, and the degree of leakage of the safety valve can be judged by analyzing and processing the signal.
Time-frequency characteristics of acoustic emission signal of internal leakage in safety valve
The acoustic emission signal of internal leakage in the safety valve belongs to the class of continuous acoustic emission signals. In order to extract the information representing the characteristics of internal leakage in the safety valve from the acoustic emission signal in the time domain, it is necessary to use the average value of signal characteristics rather than the instantaneous value. The average signal level ASL represents the average value of the signal level over the sampling time, which can be used as a criterion for determining internal leakage in the safety valve [7]. The average signal level ASL is expressed as follows:

$$\mathrm{ASL} = \frac{1}{T} \int_{0}^{T} 10 \log_{10} \frac{P(t)}{P_{0}} \, \mathrm{d}t \tag{3}$$

where P is the acoustic power (W), P₀ is the reference power, and T is the sampling time. The logarithm of the acoustic power is linear in the logarithm of the leakage rate [9], namely:

$$\log P = b \log Q + c \tag{4}$$

where Q is the leakage rate (ml/min), and b and c are coefficients whose values are related to the safety valve type, leakage type, leakage hole size, inlet pressure, type of discharge medium, size of the safety valve body and other factors.
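As a concrete illustration of these relations, the sketch below computes an ASL-style average level from a sampled waveform and evaluates the log-linear leakage model; the reference amplitude `v_ref`, the test-tone parameters, and the coefficient values are illustrative assumptions, not values measured in this study.

```python
import numpy as np

def average_signal_level(signal, v_ref=1e-6):
    """ASL of a sampled AE waveform, in dB: the instantaneous level of
    each sample (dB relative to the assumed reference amplitude v_ref)
    averaged over the sampling time, mirroring eq. (3)."""
    eps = np.finfo(float).eps
    return np.mean(20.0 * np.log10(np.abs(signal) / v_ref + eps))

# A 30 kHz test tone (near the sensor's center frequency) sampled at 1 MHz:
t = np.arange(0, 5e-3, 1e-6)
print(average_signal_level(1e-3 * np.sin(2 * np.pi * 30e3 * t)))

# Illustration of the log-linear model of eq. (4) with hypothetical b, c:
b, c = 1.2, -3.0
Q = np.array([5.0, 20.0, 80.0])   # leakage rates, ml/min
print(b * np.log10(Q) + c)        # log10(P) predicted for each Q
```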
Test device and detection system
The detection system for internal leakage in the safety valve [10] consists of an internal leakage simulation test-bed for the safety valve and an acoustic emission signal detection system for internal leakage in the safety valve.
The internal leakage simulation test-bed is composed of a nitrogen gas source, a pressure regulating valve, a pressure gauge, a flowmeter and the internal leakage simulation prototype of the safety valve, as shown in Figure 2.
Figure 2. The internal leakage simulation test-bed
A spring-loaded safety valve of the HTO series, 3K4 specification, developed by the Beijing Aerospace Power Research Institute was selected as the internal leakage simulation prototype, with a nominal inlet pressure class of 600 pounds. In order to simulate different leakage states of the safety valve, the valve clack was machined. Four valve clacks were used in the test: three were slotted, with sizes of 0.15 mm × 0.5 mm, 0.3 mm × 0.5 mm and double holes of 0.15 mm × 0.5 mm × 2, respectively, to simulate contact surface damage and particle accumulation; the fourth was manually polished on the sealing surface with sandpaper to simulate a sealing surface scratch. Because the gas leakage rate of the safety valve is very small, the gas volume leakage rate was measured with a soap-film flowmeter.
The acoustic emission detection system for the internal leakage signal of the safety valve adopts the acoustic emission system produced by the Physical Acoustics Corporation (USA), including the sensor, preamplifier, acoustic emission acquisition card and supporting acoustic emission software. The center frequency of the acoustic emission sensor is 30 kHz, and the gain of the preamplifier is 40 dB.
Analysis of test results
For the internal leakage detection test, the internal leakage simulation prototype of the safety valve was first fixed on the simulation test-bed and connected to the internal leakage detection system. The air source was opened, the pressure at the inlet of the prototype was adjusted and the gas flow controlled. Once the leakage was stable, the flow was measured with the soap-film flowmeter and the acoustic emission signal of internal leakage in the safety valve was recorded through the acoustic emission system. The inlet pressure of the safety valve was varied (0.01 MPa, 0.02 MPa, …, 0.1 MPa) to measure and test different leakage states. Finally, through the acoustic emission signal detection system, the signal was collected, amplified and filtered to extract the required signal eigenvalues. Leakage holes of different types and sizes were then substituted and the above test process repeated. Based on the analysis of the acoustic emission signals of internal leakage for the above types of leakage holes under different inlet pressures, the following conclusions are drawn:
Time domain and frequency domain characteristics of acoustic emission signal of internal leakage in safety valve
(1) The peak frequency of internal leakage in the safety valve lies in the range 20-30 kHz and is related to the type of leakage hole. For square slotted leakage holes, the peak frequency is near 20 kHz; for manually polished leakage holes, the peak frequency is near 30 kHz. (2) For the same type of leakage hole, the peak frequency of the internal leakage signal in the safety valve does not change with the hole diameter or the inlet pressure; however, for the same leakage hole, the amplitude at the peak frequency increases with increasing inlet pressure.
(3) By observing whether the acoustic emission signal has a peak in the frequency domain, one can preliminarily judge whether the safety valve has internal leakage.
Relationship between acoustic emission signal and leakage rate in safety valve
If the leakage rate of the safety valve can be calculated from the characteristic values of the acoustic emission signal of internal leakage, the degree of internal leakage or of damage to the sealing surface can be judged more directly. According to equation (5), the relationship between the ASL value of the acoustic emission signal of the safety valve and the logarithm of the leakage rate is linear. Therefore, the leakage rate of the safety valve can be estimated by measuring the ASL value of the acoustic emission signal of internal leakage. Taking the 0.15 mm × 0.5 mm square leakage hole as an example, the ASL values of the acoustic emission signal were collected under different leakage rates and a linear fit between the two was made using Origin; the fitted curve is shown in Figure 8. Therefore, for internal leakage of the safety valve, the leakage rate and average signal level can be fitted according to equation (5), ASL = b log Q + c, and the leakage rate of the safety valve can be estimated by measuring the average signal level ASL of the acoustic emission signal of internal leakage.
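A minimal sketch of this calibration-and-estimation procedure is shown below, assuming hypothetical calibration data; `np.polyfit` plays the role of the linear fit performed in Origin, and the fitted model of equation (5) is inverted to estimate the leakage rate from a measured ASL value.

```python
import numpy as np

# Hypothetical calibration data for one leakage-hole type:
Q_cal = np.array([2.0, 5.0, 10.0, 20.0, 50.0])      # leakage rates, ml/min
asl_cal = np.array([28.1, 32.0, 35.2, 38.0, 42.1])  # measured ASL, dB

# Fit ASL = b*log10(Q) + c by linear least squares (slope first).
b, c = np.polyfit(np.log10(Q_cal), asl_cal, deg=1)

def estimate_leakage(asl_measured):
    """Invert the fitted model to estimate the leakage rate (ml/min)."""
    return 10.0 ** ((asl_measured - c) / b)

print(estimate_leakage(36.5))
```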
Conclusion
Through theoretical analysis and experimental research, this paper explores the influence of factors such as the inlet pressure of the safety valve and the size, type and number of leakage holes on the acoustic emission signal, and establishes the quantitative relationship between the gas volume leakage rate and the average signal level ASL of the characteristic parameters of the acoustic emission signal during internal leakage in the safety valve. The following conclusions are drawn: (1) This is the first application of acoustic emission technology to internal leakage detection in safety valves. Acoustic emission technology can effectively detect the internal leakage state of a safety valve, providing a new method for internal leakage detection of safety valves.
(2) The peak frequency range of internal leakage in the safety valve is 20-40 kHz. The specific value is related to the type of internal leakage; the influence of hole size and inlet pressure is not significant. The peak frequency for contact surface damage and particle accumulation is about 20 kHz, while the peak frequency for a contact surface scratch is about 30 kHz. (3) For multi-hole leakage, the ASL value of the acoustic emission signal is related to the leakage rate of a single leakage hole but is independent of the total leakage rate, and there is no superposition of the acoustic emission signals of the individual leakage holes.
(4) For internal leakage in the safety valve, given the average signal level ASL value of the internal leakage, the leakage rate of the safety valve can be estimated by equation (5). | 2020-05-28T09:18:12.400Z | 2020-05-19T00:00:00.000 | {
"year": 2020,
"sha1": "5492f69aa50b9800ce5e53d5cea313a6c7383cb8",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/799/1/012001",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "dfc43ac6f7e8b9452d8dd4878eae3ff70b8c9aa3",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
238856658 | pes2o/s2orc | v3-fos-license | Attenuation Coefficient Estimation for PET/MRI With Bayesian Deep Learning Pseudo-CT and Maximum-Likelihood Estimation of Activity and Attenuation
A major remaining challenge for magnetic resonance-based attenuation correction methods (MRAC) is their susceptibility to sources of magnetic resonance imaging (MRI) artifacts (e.g., implants and motion) and uncertainties due to the limitations of MRI contrast (e.g., accurate bone delineation and density, and separation of air/bone). We propose using a Bayesian deep convolutional neural network that in addition to generating an initial pseudo-CT from MR data, it also produces uncertainty estimates of the pseudo-CT to quantify the limitations of the MR data. These outputs are combined with the maximum-likelihood estimation of activity and attenuation (MLAA) reconstruction that uses the PET emission data to improve the attenuation maps. With the proposed approach uncertainty estimation and pseudo-CT prior for robust MLAA (UpCT-MLAA), we demonstrate accurate estimation of PET uptake in pelvic lesions and show recovery of metal implants. In patients without implants, UpCT-MLAA had acceptable but slightly higher root-mean-squared-error (RMSE) than Zero-echotime and Dixon Deep pseudo-CT when compared to CTAC. In patients with metal implants, MLAA recovered the metal implant; however, anatomy outside the implant region was obscured by noise and crosstalk artifacts. Attenuation coefficients from the pseudo-CT from Dixon MRI were accurate in normal anatomy; however, the metal implant region was estimated to have attenuation coefficients of air. UpCT-MLAA estimated attenuation coefficients of metal implants alongside accurate anatomic depiction outside of implant regions.
I. INTRODUCTION
The quantitative accuracy of simultaneous positron emission tomography and magnetic resonance imaging (PET/MRI) depends on accurate attenuation correction. Simultaneous imaging with positron emission tomography and computed tomography (PET/CT) is the current clinical gold standard for PET attenuation correction since the CT images can be used for attenuation correction of 511 keV photons with piecewise-linear models [1]. Magnetic resonance imaging (MRI) measures spin density rather than electron density and thus cannot directly be used for PET attenuation correction.
A comprehensive review of attenuation correction methods for PET/MRI can be found in [2]. Briefly, current methods for attenuation correction in PET/MRI can be grouped into the following categories: atlas-based, segmentation-based, and machine learning-based. Atlas-based methods utilize a CT atlas that is generated and registered to the acquired MRI [3]-[6]. Segmentation-based methods use special sequences such as ultrashort echo-time (UTE) [7]-[11] or zero echo-time (ZTE) [12]-[16] to estimate bone density and Dixon sequences [17]-[19] to estimate soft tissue densities. Machine learning-based methods, including deep learning methods, use sophisticated machine learning models to learn mappings from MRI to pseudo-CT images [20]-[26] or PET transmission images [27]. There have also been methods that estimate attenuation coefficient maps from the PET emission data [28], [29] or directly correct PET emission data [30]-[32] using deep learning.
For PET alone, an alternative method for attenuation correction is "joint estimation", also known as maximum likelihood estimation of activity and attenuation (MLAA) [33], [34]. Rather than relying on an attenuation map that was measured or estimated with another scan or modality, the PET activity image (λ-map) and PET attenuation coefficient map (μ-map) are estimated jointly from the PET emission data only. However, MLAA suffers from numerous artifacts and high noise [35].
In PET/MRI, recent methods developed to overcome the limitations of MLAA include using MR-based priors [36], [37], constraining the region of joint estimation [38], or using deep learning to denoise the resulting λ-map and/or μ-map from MLAA [39]-[42]. Mehranian and Zaidi's [36] approach of using priors improved MLAA results; however, this was not demonstrated on metal implants. The methods of Ahn et al and Fuin et al [37], [38], which also use priors, were able to recover metal implants in the PET image reconstruction, but the μ-maps were missing bones and other anatomical features. Furthermore, their methods require a manual or semiautomated segmentation step to delineate the regions where the correct priors should be applied (such as the metal implant region). The approaches by Hwang et al [39]-[41] and Choi et al [42] that utilize supervised deep learning resulted in anatomically correct and accurate μ-maps; however, these methods were not demonstrated in the presence of metal implants.
Utilizing supervised deep learning is considered a very promising method for accurate and precise PET/MRI attenuation correction. However, the main limitation of a supervised deep learning method is the finite data set, which needs to contain a diverse set of well-matched inputs and outputs.
In PET/MRI, the presence of metal implants complicates training because there are resulting metal artifacts in both CT and MRI. Furthermore, the artifacts appear differently: a metal implant produces a star-like streaking pattern with high Hounsfield unit values in the CT image [43] and a signal void in the MRI image [37]. This makes registration between MRI and CT images difficult, and the artifacts lead to intrinsic errors in the training dataset.
In addition, there will arguably always be edge cases and rare features that cannot be captured with enough representation in a training data set. Images of humans can have rare features not easily obtained (e.g., missing organs due to surgery, a new or uncommon implant). Under these conditions, a standard supervised deep learning approach may produce incorrect predictions and the user (or any downstream algorithm) will be unaware of the errors.
A recent study by Ladefoged et al [44] demonstrated the importance of a high-quality data set in deep learning-based brain PET/MRI attenuation correction. A large, diverse set of at least 50 training examples was required to achieve robustness, and they highlighted that the remaining errors and limitations in deep learning-based MR attenuation correction were due to "abnormal bone structures, surgical deformation, and metal implants." In this work, we propose the use of supervised Bayesian deep learning to estimate predictive uncertainty in order to detect rare or previously unseen image structures and estimate intrinsic errors that traditional supervised deep learning approaches cannot.
Bayesian deep learning provides tools to address the limitations of a finite training dataset: the estimation of epistemic and predictive uncertainty [45]. A general introduction to uncertainties in machine learning can be found in [46].
Epistemic uncertainty is the uncertainty on learned model parameters that arises due to incomplete knowledge or, in the case of supervised machine learning, a lack of training data. Epistemic uncertainty is manifested as a diverse set of different model parameters that fit the training data.
The epistemic uncertainty of the model can then be used to produce predictive uncertainty estimates that capture whether any features or structures on a test image deviate from the training dataset. This allows for the detection of rare or previously unseen image structures without explicitly training to identify these structures.
Typical supervised deep learning approaches capture neither the epistemic nor the predictive uncertainty because only one set of model parameters is learned and only a single prediction is produced (e.g., a single pseudo-CT image).
In this work, for PET/MRI attenuation correction, the predictive uncertainty is used to automatically weight the balance between the deep learning μ-map prediction from MRI and the μ-map estimates from the PET emission data from MLAA. When the model is expected to have good performance on a region in a test image, MLAA has minimal contribution. However, when the model is expected to have poor performance on regions in a test image, MLAA has a stronger contribution to the attenuation coefficient estimates of those regions.
Specifically, we extend the framework of Ahn et al's MLAA regularized with MR-based priors [37] and generate MR-based priors with a Bayesian convolutional neural network (BCNN) [47] that additionally provides a predictive uncertainty map to automatically modulate the strength of the MLAA priors. We demonstrate a proof-of-concept methodology that produces anatomically correct, accurate, and precise μ-maps with high SNR that can recover metal implants for PET/MRI attenuation correction in the pelvis.
II. MATERIALS AND METHODS
UpCT-MLAA is composed of two major elements: initial pseudo-CT characterization with Bayesian deep learning through Monte Carlo Dropout [47] and PET reconstruction with regularized MLAA [37]. The algorithm is depicted in Fig. 1 and each component is described in detail below.
A. Bayesian Deep Learning
The architecture of the BCNN is shown in Fig. 2. It was based on the U-net-like network in [21] with the following modifications: (1) Dropout [47], [48] was included after every convolution, (2) the patch size was increased to 64 × 64 × 32 voxels, and (3) the number of channels in each layer was increased 4-fold to compensate for the reduction of information capacity due to the Dropout. The PyTorch software package [49] (v0.4.1, http://pytorch.org) was used.
Inputs to the model were volume patches of the following dimensions and size: 64 pixels × 64 pixels × 32 pixels × 3 channels. Each channel was a volume patch of the bias-corrected and fat-tissue-normalized Dixon in-phase image, Dixon fractional fat image, and Dixon fractional water image, respectively, at the same spatial locations [50]. The output was a corresponding pseudo-CT image patch of size 64 pixels × 64 pixels × 32 pixels × 1 channel. ZTE MRI was not used as an input to this model since it has been demonstrated that accurate HU estimates can be achieved with only the Dixon MR pulse sequence [22], [50].
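A minimal sketch of one building block of such a network is shown below; the channel counts, dropout rate, and two-convolution layout are illustrative assumptions rather than the exact architecture of Fig. 2, but the key design point described above — a Dropout layer after every convolution, so that test-time sampling remains stochastic — is implemented as stated.

```python
import torch
import torch.nn as nn

class DropoutConvBlock(nn.Module):
    """One encoder block of a U-net-like BCNN: two 3-D convolutions,
    each followed by Dropout so Monte Carlo sampling is possible at
    test time. Channel counts and dropout rate p are placeholders."""
    def __init__(self, in_ch, out_ch, p=0.1):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Dropout3d(p),
            nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Dropout3d(p),
        )

    def forward(self, x):
        return self.block(x)

# Input: a 3-channel Dixon patch (in-phase, fat, water), 64x64x32 voxels,
# laid out as (batch, channels, depth, height, width):
x = torch.randn(1, 3, 32, 64, 64)
y = DropoutConvBlock(3, 32)(x)
```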
1) Model Training
Model training was performed similarly to our previous work [21], [50]. The loss function was a combination of an L1 loss, a gradient difference loss (GDL), and a Laplacian difference loss (LDL):

$$\mathcal{L}(y, \hat{y}) = \|y - \hat{y}\|_{1} + \lambda_{\mathrm{GDL}} \|\nabla y - \nabla \hat{y}\|_{1} + \lambda_{\mathrm{LDL}} \|\Delta y - \Delta \hat{y}\|_{1} \tag{1}$$

where ∇ is the gradient operator, Δ is the Laplacian operator, y is the ground-truth CT image patch, and ŷ is the output pseudo-CT image patch, with λ_GDL = 0.01 and λ_LDL = 0.01. The Adam optimizer [51] (learning rate = 1 × 10⁻⁵, β₁ = 0.9, β₂ = 0.999, ε = 1 × 10⁻⁸) was used to train the neural network. An L2 regularization (weight decay = 1 × 10⁻⁵) on the weights of the network was used. He initialization [52] was used and a mini-batch of 4 volumetric patches was used for training on two NVIDIA GTX Titan X Pascal (NVIDIA Corporation, Santa Clara, CA, USA) graphics processing units. The models were trained for approximately 68 hours to achieve 100,000 iterations.
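The loss could be implemented as in the sketch below. Only the operators and the 0.01 weights are specified above, so the use of L1-type norms for the difference terms, the forward-difference gradient, and the 7-point Laplacian stencil are assumptions.

```python
import torch
import torch.nn.functional as F

def spatial_gradient(img):
    """Forward differences along the three spatial axes of a
    (N, C, D, H, W) volume; a simple stand-in for the gradient operator."""
    dz = img[:, :, 1:, :, :] - img[:, :, :-1, :, :]
    dy = img[:, :, :, 1:, :] - img[:, :, :, :-1, :]
    dx = img[:, :, :, :, 1:] - img[:, :, :, :, :-1]
    return dz, dy, dx

def laplacian(img):
    """3-D Laplacian of a single-channel volume via a 7-point stencil."""
    k = torch.zeros(1, 1, 3, 3, 3, device=img.device, dtype=img.dtype)
    k[0, 0, 1, 1, 1] = -6.0
    k[0, 0, 0, 1, 1] = k[0, 0, 2, 1, 1] = 1.0
    k[0, 0, 1, 0, 1] = k[0, 0, 1, 2, 1] = 1.0
    k[0, 0, 1, 1, 0] = k[0, 0, 1, 1, 2] = 1.0
    return F.conv3d(img, k, padding=1)

def pseudo_ct_loss(y_hat, y, lam_gdl=0.01, lam_ldl=0.01):
    """Combined L1 + GDL + LDL loss of eq. (1)."""
    l1 = F.l1_loss(y_hat, y)
    gdl = sum(F.l1_loss(g_hat, g) for g_hat, g in
              zip(spatial_gradient(y_hat), spatial_gradient(y)))
    ldl = F.l1_loss(laplacian(y_hat), laplacian(y))
    return l1 + lam_gdl * gdl + lam_ldl * ldl
```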
B. Pseudo-CT prior and weight map
Generation of the pseudo-CT estimate and variance image was performed through Monte Carlo Dropout [47] with the BCNN described above. The Monte Carlo Dropout inference is outlined in Fig. 1. A total of N = 243 Monte Carlo samples were performed to generate a pseudo-CT estimate and a variance map:

$$\hat{y}(\vec{r}) = \frac{1}{N} \sum_{i=1}^{N} f_{i}(x)(\vec{r}), \qquad \sigma^{2}(\vec{r}) = \frac{1}{N} \sum_{i=1}^{N} \left[ f_{i}(x)(\vec{r}) - \hat{y}(\vec{r}) \right]^{2} \tag{2}$$

where f_i is a sample of the BCNN with Dropout, x is the input Dixon MRI, and N is the number of Monte Carlo samples. Inference took approximately 40 minutes per patient on 8 NVIDIA K80 graphics processing units. We include a detailed description of the sources of uncertainties and variations in the Supplementary Material.
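A minimal sketch of this inference loop is shown below; it assumes the network contains Dropout as its only mode-dependent layer (no BatchNorm), so calling model.train() keeps the sampling stochastic while the weights stay fixed.

```python
import torch

def mc_dropout_predict(model, x, n_samples=243):
    """Monte Carlo Dropout inference: keep Dropout active at test time,
    run the network n_samples times, and return the per-voxel mean
    (pseudo-CT estimate) and variance map, as in eq. (2)."""
    model.train()  # keeps Dropout layers stochastic (assumes no BatchNorm)
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)], dim=0)
    return samples.mean(dim=0), samples.var(dim=0, unbiased=False)
```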
The pseudo-CT estimate was converted to a μ-map with a bilinear model [1] and the variance map was converted to a weight map with a range of 0.0 to 1.0 with the following empirical sigmoidal transformation:

$$w(\vec{r}) = \frac{1}{1 + \exp\left( k \left[ \sigma^{2}(\vec{r}) - s_{0} \right] \right)} \tag{3}$$

where σ²(r) is the variance at voxel position r, and k and s₀ are empirically chosen constants. The sigmoidal transformation was calibrated by inspecting the resulting variance maps. It was designed such that the transition band of the sigmoid covers the range of variances in the body and finally saturates at the uncertainty values of bowel air and metal artifact regions. With the constants chosen, the transition band of the sigmoid corresponds to variances of 0 to ~100,000 HU² (standard deviations of 0 to ~300 HU). The weight map was then linearly scaled to have a range of 1 × 10³ to 5 × 10⁶, called β. Low values of β correspond to regions with high uncertainty, so the estimation in these regions is dominated by the emission data.
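The transformation could look like the following sketch, in which the constants k and s0 are illustrative choices that place the logistic transition band roughly over 0 to 1e5 HU², as described above; they are not the calibrated values from this study.

```python
import numpy as np

def variance_to_weight(var_map, k=1e-4, s0=5e4, w_min=1e3, w_max=5e6):
    """Map per-voxel variance (HU^2) to an MLAA prior weight.

    Logistic fall-off (eq. (3)) followed by linear scaling to the range
    [1e3, 5e6]; low weights correspond to high uncertainty, where the
    emission data should dominate the attenuation estimate.
    """
    w = 1.0 / (1.0 + np.exp(k * (var_map - s0)))  # ~1 at low var, ~0 at high var
    return w_min + (w_max - w_min) * w

# Weights for normal anatomy, the upper body range, and a metal artifact:
print(variance_to_weight(np.array([0.0, 2.5e4, 1.0e5, 1.0e6])))
```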
The weight map was additionally processed to set weights outside the body (e.g., air voxels) to 0.0 so that these voxels were not included in the MLAA reconstruction. A body mask was generated by thresholding (> -400 HU) the pseudo-CT estimate. The initial body mask was morphologically eroded by a 1-voxel-radius sphere. Holes in the body were then filled in with the imfill function (Image Processing Toolbox, MATLAB 2014b) at each axial slice. The body masks were then further refined by removing arms as in our previous work [14].
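A sketch of this masking pipeline using SciPy (standing in for the MATLAB imfill step, and omitting the arm-removal refinement) might look as follows; the axial axis is assumed to be the first array axis.

```python
import numpy as np
from scipy import ndimage

def body_mask_from_pseudo_ct(pct_hu):
    """Body mask for the weight map: threshold above -400 HU, erode with
    a 6-connected structuring element (approximating a 1-voxel-radius
    sphere), then fill interior holes slice by slice."""
    mask = pct_hu > -400.0
    sphere = ndimage.generate_binary_structure(3, 1)
    mask = ndimage.binary_erosion(mask, structure=sphere)
    for z in range(mask.shape[0]):  # per-axial-slice hole fill
        mask[z] = ndimage.binary_fill_holes(mask[z])
    return mask
```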
C. Uncertainty estimation and pseudo-CT prior for robust Maximum Likelihood estimation of Activity and Attenuation (UpCT-MLAA)
UpCT-MLAA is a combination of the outputs of the BCNN and regularized MLAA. The process is depicted in Fig. 1. MRI and CT images of patients without metal implants were used to train the BCNN.
We explicitly trained the network only on patients without metal implants to force the BCNN to extrapolate in the voxel regions containing metal implants (i.e., "out-of-distribution" features), maximizing the uncertainty in these regions.
Thus, with the uncertainty estimation, a high variance (≥ ~1 × 10⁵ HU²) emerged in implant regions compared to a low variance in normal anatomy (0 to ~2.5 × 10⁴ HU²), as can be seen in Fig. 1. The μ-map estimate and the weight map were then provided to the regularized MLAA [37] to perform PET reconstruction (5 iterations with 28 subsets; each iteration consists of 1 TOF-OSEM iteration and 5 ordered subsets transmission (OSTR) iterations, as described above, with a fixed regularization scale of 2 × 10⁴). Specifically, the MR-based regularization term in MLAA is:

$$R(\mu) = \sum_{j} \beta_{j} \left( \mu_{j} - \mu_{j}^{\mathrm{prior}} \right)^{2} \tag{5}$$

where j indexes over each voxel in the volume, μ_j^prior is determined from the mean pseudo-CT image, and β_j is determined from the variance image through the weight map transformation. The formulation in eq. 5 is slightly different than in Section 2.3.2 of [37] but has the same effect.
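In code, the penalty and the gradient it contributes to each attenuation update could be written as in this minimal sketch (the surrounding TOF-OSEM/OSTR machinery is omitted):

```python
import numpy as np

def mr_prior_penalty(mu, mu_prior, beta):
    """MR-based regularization term of eq. (5): a weighted quadratic
    penalty tying each voxel's attenuation coefficient to the pseudo-CT
    prior, with per-voxel weights beta from the uncertainty map."""
    return np.sum(beta * (mu - mu_prior) ** 2)

def mr_prior_gradient(mu, mu_prior, beta):
    """Gradient of the penalty, as it would enter each OSTR mu-update;
    where beta is large (low uncertainty) the update is pulled strongly
    toward the prior, and where beta is small the emission data dominate."""
    return 2.0 * beta * (mu - mu_prior)
```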
III. PATIENT STUDIES
The study was approved by the local Institutional Review Board (IRB). Patients who were imaged with PSMA-11 signed a written informed consent form, while the IRB waived the requirement for informed consent for FDG and DOTATATE studies.
Patients with pelvic lesions were scanned using an integrated 3 Tesla time-of-flight PET/MRI system [53] (SIGNA PET/MR, GE Healthcare, Chicago, IL, USA). The patient population consisted of 29 patients (age = 58.7 ± 13.9 years, 16 males, 13 females): 10 patients without implants were used for model training, 16 patients without implants were used for evaluation with a CT reference, and three patients with implants were used for evaluation in the presence of metal artifacts.
A. PET/MRI Acquisition
The PET acquisition on the evaluation set was performed with different radiotracers: 18F-FDG, 68Ga-PSMA-11, and 68Ga-DOTATATE.
Pre-processing consisted of filling in bowel air with soft-tissue HU values and copying arms from the Dixon-derived pseudo-CT, due to the differences in bowel air distribution and the CT scan being acquired with arms up, respectively [14].
MRI and CT image pairs were co-registered using the ANTS [54] registration package and the SyN diffeomorphic deformation model with combined mutual information and cross-correlation metrics [14], [21], [50].
D. Data Analysis
Image error analysis and lesion-based analysis were performed for patients without metal implants: the average (µ) and standard deviation (σ) of the error, mean-absolute-error (MAE), and root-mean-squared-error (RMSE) were computed over voxels that met a minimum signal amplitude and/or signal-to-noise criterion [21]. Global HU and PET SUV comparisons were only performed in voxels with amplitudes > -950 HU in the ground-truth CT to exclude air, and a similar threshold of > 0.01 cm⁻¹ attenuation in the CTAC was used for comparison of AC maps. Bone and soft-tissue lesions were identified by a board-certified radiologist. Bone lesions are defined as lesions inside bone or with lesion boundaries within 10 mm of bone [56]. A Wilcoxon signed-rank test was used to compare the SUVmax biases of individual lesions relative to CTAC.
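A sketch of the voxel-masked error metrics used here, assuming NumPy arrays of HU values on a common grid:

```python
import numpy as np

def error_metrics(estimate, reference, threshold_hu=-950.0):
    """Mean error, standard deviation, MAE, and RMSE over voxels above an
    amplitude threshold in the reference CT (> -950 HU to exclude air)."""
    sel = reference > threshold_hu
    err = estimate[sel] - reference[sel]
    return {
        "mean": err.mean(),
        "std": err.std(),
        "mae": np.abs(err).mean(),
        "rmse": np.sqrt((err ** 2).mean()),
    }
```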
In the cases where a metal implant was present, we qualitatively examined the resulting AC maps of the different reconstruction methods.
IV. RESULTS
A. Monte Carlo Dropout
Representative images of the output of the BCNN with Monte Carlo Dropout are shown in Fig. 3. The same mask used for the weight maps was used to remove voxels outside the body. The pseudo-CT images visually resemble the ground-truth CT images for patients without implants, while in patients with implants the metal artifact region in the MRI was assigned air HU values. Nonetheless, the associated standard deviation maps highlighted image structures for which the network had high predictive uncertainty. The most important of these are air pockets and the metal implant. The BCNN highlighted these regions and structures in the standard deviation image without being explicitly trained to do so.
An additional example of the uncertainty estimation is provided in Supp. Fig. 1. The input MRI had motion artifacts due to breathing and arm truncation due to inhomogeneity at the edge of the FOV. Like the metal implants, the BCNN highlighted the motion artifact region and arm truncation in the variance image without being explicitly trained to do so.
B. Patients without implants
The PET reconstruction results for the patients without implants are summarized in Fig. 4. The RMSE is reported along with the average (µ) and standard deviation (σ) of the error as RMSE (µ ± σ). Additional results for the pseudo-CT, AC maps, and PET data are provided in Supp. Figs. 2 to 5.
1) Pseudo-CT results
The total RMSE for the pseudo-CT compared to gold-standard CT across all volumes was 98 HU (−13 ± 97 HU) for ZeDD-CT and 95 HU (−6.5 ± 94 HU) for BpCT. The BpCT is the same pseudo-CT image used in UpCT-MLAA.
4) Lesion uptake and SUVmax.
The results of the lesion analysis for patients without implants are shown in Fig. 4. There were 30 bone lesions and 60 soft tissue lesions across the 16 patient datasets. The RMSE w.r.t. CTAC PET SUV and SUVmax are summarized in Table I. For SUVmax of bone lesions, no significant difference was found between ZeDD PET and BpCT-AC PET (p = 0.116), while ZeDD PET and UpCT-MLAA PET were significantly different (p = 0.037). For SUVmax of soft tissue lesions, ZeDD PET and BpCT-AC PET were significantly different (p < 0.001), while no significant difference was found between ZeDD PET and UpCT-MLAA PET (p = 0.16).
C. Patients with metal implants
Figs. 5 and 6 show the different AC maps generated with the different reconstruction processes and the associated PET image reconstructions for two different radiotracers (18F-FDG and 68Ga-PSMA), and Fig. 7 shows the summary of the SUVmax results. Additional results for pseudo-CT, AC maps, and PET images are provided in Supp. Figs. 6 to 11.
1) Metal implant recovery.
Figs. 5b (1st and 2nd columns) and 6b (1st and 2nd columns) show the AC map estimation results.
BpCT-AC filled in the metal implant region with air since the metal artifact in MRI appears as a signal void. Although reconstruction using naive MLAA recovers the metal implant, the AC map was noisy and anatomical structures were difficult to depict. The addition of regularization (increasing β) reduces the noise; however, over-regularization eliminates the presence of the metal implant. The use of a different radiotracer also influenced reconstruction performance: the MLAA-based methods performed worse when the tracer was 68Ga-PSMA compared to 18F-FDG with low regularization. In contrast, UpCT-MLAA-AC recovered the metal implant while maintaining high-SNR depiction of anatomical structures outside the implant region for both radiotracers. The high attenuation coefficients were constrained to the regions where high variance was measured (or where the metal artifact was present on the BpCT AC maps).
2) PET image reconstruction
Qualitatively, the MLAA-based methods (UpCT-MLAA and standard MLAA) show uptake around the implant, whereas BpCT-AC PET and CTAC PET show the implant region without any uptake. When compared to the NAC PET, the MLAA-based methods better match what is depicted within the implant region. Quantitatively, Table I summarizes the SUV results for voxels in-plane and out-plane of the metal implant.
3) SUVmax quantification. Fig. 7 shows the comparisons of SUVmax of lesions in-plane and out-plane of the metal implant, and Tables II and III list the RMSE values for SUV and SUVmax. There were 6 lesions in-plane and 15 lesions out-plane of the metal implants across the 3 patients with implants. Only UpCT-MLAA provided relatively low SUVmax quantification errors for lesions both in-plane and out-plane of the metal implant.
For lesions in-plane of the metal implant, BpCT-AC PET had a large underestimation of SUVmax; naive MLAA PET had better mean estimation of SUVmax but a large standard deviation. The addition of light regularization to MLAA improves the RMSE by decreasing the standard deviation at the cost of increased mean error. Increasing regularization further increases RMSE but reduces the bias error with increased standard deviation. UpCT-MLAA PET had the best agreement with CTAC PET. Only naive MLAA and UpCT-MLAA had results where a significant difference could not be found when compared to CTAC (p > 0.05).
For lesions out-plane of the metal implant, the trend is reversed for BpCT-AC PET and the MLAA methods. BpCT-AC PET had the best agreement with CTAC PET and the MLAA methods showed decreasing RMSE with increasing regularization. UpCT-MLAA had the second-best agreement with CTAC PET. No significant difference could be found for any method when compared to CTAC (p > 0.05).
V. DISCUSSION
This paper presents the use of a Bayesian deep convolutional neural network to enhance MLAA by providing an accurate pseudo-CT prior alongside predictive uncertainty estimates that automatically modulate the strength of the priors (UpCT-MLAA). The method was evaluated in patients with pelvic lesions, both without and with implants. The performance for metal implant recovery and uptake estimation in pelvic lesions in patients with metal implants was characterized. This is the first work to demonstrate an MLAA algorithm for PET/MRI that is able to recover metal implants while also accurately depicting detailed anatomic structures in the pelvis. This is also the first work to synergistically combine supervised Bayesian deep learning and MLAA in a coherent framework for simultaneous PET/MRI reconstruction in the pelvis. The UpCT-MLAA method demonstrated quantitative uptake estimation of pelvic lesions similar to that of a state-of-the-art attenuation correction method (ZeDD-CT) while additionally providing the capability to perform reasonable PET reconstruction in the presence of metal implants and removing the need for a specialized MR pulse sequence.
One of the major advantages of using MLAA is that it uses the PET emission data to estimate the attenuation coefficients alongside the emission activity. This gives MLAA the capability to truly capture the underlying imaging conditions that the PET photons undergo. This is especially important in simultaneous PET/MRI, where true ground-truth attenuation maps cannot be derived. Currently, the most successful methods for obtaining attenuation maps are deep learning-based methods [20]-[28]. However, these methods are inherently supervised model-based techniques and have limited capacity to capture imaging conditions that were not present in the training set or conditions that cannot be reliably modeled, such as the movement and mismatch of bowel air and the presence of metal artifacts. Since MLAA derives the attenuation maps from the PET emission data, MLAA can capture actual imaging conditions that supervised model-based techniques are unable to. Furthermore, this eliminates the need for a specialized MR pulse sequence (such as ZTE for bone) since the bone AC would be estimated by MLAA instead. This allows for more accurate and precise uptake quantification in simultaneous PET/MRI.
To the best of our knowledge, only a few other methods combine MLAA with deep learning [39]-[42]. These methods apply deep learning to denoise an MLAA reconstruction by training a deep convolutional neural network to produce an equivalent CTAC from MLAA estimates of activity and attenuation maps. This approach inherently requires ground-truth CTAC maps to train the deep convolutional neural network and is thus affected by the same limitations as supervised deep learning and model-based methods. Unlike their method, our method (UpCT-MLAA) preserves the underlying MLAA reconstruction while still providing the same reduction of crosstalk artifacts and noise.
Our approach is different from all other approaches because we leverage supervised Bayesian deep learning uncertainty estimation to detect rare and previously unseen structures in pseudo-CT estimation. There are only a few previous works that estimate uncertainty on pseudo-CT generation [57], [58]. Klages et al [57] utilized a standard deep learning approach and extracted patch uncertainty but did not assess their method on cases with artifacts or implants. Hemsley et al [58] utilized a Bayesian deep learning approach to estimate total predictive uncertainty and similarly demonstrated high uncertainty on metal artifacts. Both approaches were intended for radiotherapy planning, and our work is the first to apply uncertainty estimation towards PET/MRI attenuation correction.
High uncertainty was present in many different regions. Metal artifact regions had high uncertainty because they were explicitly excluded from the training process, i.e., an out-of-distribution structure. Air pockets had high uncertainty likely because of the inconsistent correspondence of air between MRI and CT, i.e., intrinsic dataset errors. Other image artifacts (such as motion due to breathing) had high uncertainty likely due to the rare occurrence of these features in the training dataset and their inconsistency with the corresponding CT images. Bone had high uncertainty since there is practically no bone signal in the Dixon MRI. Thus, the CNN likely learned to derive bone values based on the surrounding structure, and the variance image shows the intrinsic uncertainty and limitations of estimating bone HU values from Dixon MRI. Again, these regions were highlighted by being assigned high uncertainty without the network being explicitly trained to identify them.
On evaluation with patients without implants, we demonstrated that BpCT was a sufficient surrogate for ZeDD-CT for attenuation correction across all lesion types: BpCT provided comparable SUV estimation on bone lesions and improved SUV estimation on soft tissue lesions. However, the BpCT images lacked accurate estimation of bone HU values, which resulted in an average underestimation of bone lesion SUV values (-0.9%). The average underestimation was reduced with UpCT-MLAA (-0.3%). Although the mean underestimation improved, the RMSE of UpCT-MLAA was higher than that of BpCT-AC (3.6% vs. 3.2%, respectively) due to the increase in standard deviation (3.6% vs. 3.1%, respectively). This trend was more apparent for soft tissue lesions, where the RMSE, mean error, and standard deviation were all worse for UpCT-MLAA vs. BpCT. Since the PET/MRI and CT were acquired in separate sessions, possibly months apart, there may be significant changes in tissue distribution. This could explain the increase in errors of BpCT-AC under UpCT-MLAA.
On the patients with metal implants, UpCT-MLAA was the most comparable to CTAC across all lesion types. Notably, there was an opposing trend in the PET SUVmax results for lesions in/out-plane of the metal implant between BpCT-AC and the MLAA methods. These trends were likely due to the sources of data used for reconstruction: BpCT-AC has attenuation coefficients estimated only from the MRI, whereas naive MLAA has attenuation coefficients estimated only from the PET emission data. The input MR images were affected by large metal artifacts due to the metal implants, which make the affected regions appear to be large pockets of air. Thus, in BpCT-AC, the attenuation coefficients of air were assigned to the metal artifact region. For lesions in-plane of the implant, this led to a large bias due to the bulk error in attenuation coefficients and a large variance due to the large range of attenuation coefficients with BpCT-AC, while this is resolved with MLAA. For lesions out-plane of the implant, the opposite trend arises: for MLAA the variance is large due to the noise in the attenuation coefficient estimates. This is resolved in BpCT-AC since the attenuation coefficients are learned for normal anatomical structures that are unaffected by metal artifacts. The combination of BpCT with MLAA through UpCT-MLAA resolved these disparities.
A major challenge in evaluating PET reconstructions in the presence of metal implants is that typical CT protocols for CTAC produce metal implant artifacts that may cause overestimation of uptake and thus do not serve as a true reference. Since our method relies on time-of-flight MLAA, we believe that it would produce a more accurate AC map, and therefore a more accurate SUV map. This is demonstrated by the lower SUVmax estimates of UpCT-MLAA compared with CTAC PET. However, for a precise evaluation, a potential approach would be to use metal artifact reduction techniques on the CT acquisition [43] or to acquire transmission PET images [59].
Accurate co-registration of CT and MRI with metal implant artifacts was a limitation, since the artifacts present differently in the two modalities. Furthermore, the CT and MRI images were acquired in separate sessions. These issues can be mitigated by acquiring images sequentially in a tri-modality system [60].
Another limitation of this study was the small study population. A larger population would allow evaluation of a larger variety of implant configurations and radiotracers and validation of the robustness of the attenuation correction strategy.
Finally, the performance of the algorithm can be further improved. In this study, we only sought to demonstrate the utility of uncertainty estimation with a Bayesian deep learning regime for attenuation correction in the presence of metal implants: the structure of the anatomy is preserved and implants can be recovered while still providing similar PET uptake estimation performance in pelvic lesions. Our proposed UpCT-MLAA was based on MLAA regularized with MR-based priors [27], which can be viewed as uni-modal Gaussian priors. We speculate that this could be further improved by using Gaussian mixture priors for MLAA as in [36]. The major task in combining these methods would be to learn the Gaussian mixture model parameters from patients with implants. With additional tuning of the algorithm and optimization of the BCNN, UpCT-MLAA can potentially produce the most accurate and precise attenuation coefficients in all tissues and in any imaging conditions.
VI. CONCLUSION
We have developed and evaluated an algorithm that utilizes a Bayesian deep convolutional neural network to provide accurate pseudo-CT priors with uncertainty estimation to enhance MLAA PET reconstruction. The uncertainty estimation allows for the detection of "out-of-distribution" pseudo-CT estimates that MLAA can subsequently correct. We demonstrated quantitative accuracy in pelvic lesions and recovery of metal implants in pelvis PET/MRI.
Sources of uncertainty and variations
Three different predictive uncertainties are utilized in our work: total voxel uncertainty and its two components, patch uncertainty and voxel-wise uncertainty.
Total voxel uncertainty is the combination of patch uncertainty (uncertainty due to changes in the input patch) and the uncertainty of each voxel for the same input patch (uncertainty due to changes in the model). These can be decoupled and independently estimated.
Patch uncertainty comes from variations in the response of the CNN due to changes in the input data, whereas voxel uncertainty (for the same input patch) comes from variations of the network parameters with respect to the same input. Mathematically, the predictive likelihood for a single voxel can be written out completely, as shown in the equations below. Patch uncertainty and (patch-specific) voxel uncertainty can thus be independently obtained, but they are tightly coupled when calculating total voxel uncertainty. In the final prediction for this work, we utilize total voxel uncertainty, which incorporates both patch uncertainty and (patch-specific) voxel uncertainty.
Fig. 1. Schematic flow of UpCT-MLAA. Monte Carlo Dropout is first performed with the BCNN, then the outputs are provided as inputs to PET reconstruction with regularized MLAA.
Fig. 2. Deep convolutional neural network architecture used in this work.
Fig. 3. Representative intermediate image outputs of the BCNN with Monte Carlo Dropout compared to the reference CT images for patients without metal implants (columns 1 and 2) and patients with metal implants (columns 3 and 4). The voxel-wise standard deviation map is shown instead of variance for better visual depiction. Regions with high standard deviation correspond to bone, bowel air, skin boundary, implants, blood vessels, and regions with likely modeling error (e.g., around the bladder in the standard deviation map in the rightmost column).

The radiotracers used were 18F-FDG (11 patients), 68Ga-PSMA-11 (7 patients), and 68Ga-DOTATATE (1 patient). The PET scan had a 600 mm transaxial field-of-view (FOV) and a 25 cm axial FOV, with a time-of-flight timing resolution of approximately 400 ps. The imaging protocol included a six bed-position whole-body PET/MRI and a dedicated pelvic PET/MRI acquisition. The PET data were acquired for 15-20 min during the dedicated pelvis acquisition, during which clinical MRI sequences and the following MRAC sequences were acquired: Dixon (FOV = 500 × 500 × 312 mm, resolution = 1.95 × 1.95 mm, slice thickness = 5.2 mm, slice spacing = 2.6 mm, scan time = 18 s) and ZTE MR (cubical FOV = 340 × 340 × 340 mm, isotropic resolution = 2 × 2 × 2 mm, 1.36 ms readout duration, FA = 0.6°, 4 µs hard RF pulse, scan time = 123 s).
Fig. 4. Representative images of bone and soft tissue lesions for patients without implants (A, reproduced from (20)), scatter plots of SUV in every lesion voxel (B), and box plots of the SUVmax in each lesion (C). This shows that BpCT-AC and UpCT-MLAA-AC are near equivalent to ZeDD-CTAC in patients without implants when compared with CTAC.
Fig. 5. Representative images from metal implant patient #3 imaged with 18F-FDG. Shown are the CT, Dixon in-phase, and NAC PET images (a), AC maps (b, first and second columns), and associated PET reconstructions (b, third column). The AC maps are shown at two different window levels to highlight bone and soft tissue (b, first column) and the metal implant (b, second column).
Fig. 6. Representative images from metal implant patient #1 imaged with 68Ga-PSMA. Shown are the CT, Dixon in-phase, and NAC PET images (a), AC maps (b, first and second columns), and associated PET reconstructions (b, third column). The AC maps are shown at two different window levels to highlight bone and soft tissue (b, first column) and the metal implant (b, second column).
Fig. 7. Box plot summarizing the results compared to CTAC PET for patients with implants. The red crosses denote outliers.
The predictive likelihood for a single voxel is

$$p(y_i^* \mid X_i^*, X, Y) = \frac{1}{N} \sum_{x^* \in X_i^*} \int p(y_i^* \mid x^*, \theta)\, p(\theta \mid X, Y)\, d\theta,$$

where $y_i^*$ is the predicted value at the $i$-th voxel, $X_i^*$ is the set of neighboring and overlapping input patches, $N$ is the number of patches used to predict the value of a single voxel, $\theta$ are the network parameters, $X, Y$ are the training input/output pairs, and $p(\theta \mid X, Y)$ is the posterior distribution of the network parameters given the training pairs that is learned during model training. The final predicted value is obtained by taking the expectation of the model predictions over the predictive likelihood,

$$\hat{y}_i^* = \frac{1}{NM} \sum_{x^* \in X_i^*} \sum_{m=1}^{M} f(x^*, \theta_m),$$

with the associated variance

$$\hat{\sigma}_i^2 = \frac{1}{NM} \sum_{x^* \in X_i^*} \sum_{m=1}^{M} \left( f(x^*, \theta_m) - \hat{y}_i^* \right)^2,$$

where $M$ is the number of Monte Carlo samples used in inference and $f(x^*, \theta_m)$ denotes the network output for patch $x^*$ with sampled parameters $\theta_m$. Voxel uncertainty corresponds to the term

$$\int p(y_i^* \mid x^*, \theta)\, p(\theta \mid X, Y)\, d\theta$$

in the predictive likelihood, and to the following summations in the prediction and variance:

$$\sum_{m=1}^{M} f(x^*, \theta_m), \qquad \sum_{m=1}^{M} \left( f(x^*, \theta_m) - \hat{y}_i^* \right)^2.$$

The patch uncertainty comes from averaging the predictions of different input patches for each single voxel and corresponds to the summations over patches:

$$\sum_{x^* \in X_i^*} f(x^*, \theta), \qquad \sum_{x^* \in X_i^*} \left( f(x^*, \theta) - \hat{y}_i^* \right)^2.$$

Suppose that there is no model uncertainty and the network parameters are fixed at $\hat{\theta}$ (only one set of network parameters used in all inferences). The predictive likelihood then reduces to

$$p(y_i^* \mid X_i^*, X, Y) = \frac{1}{N} \sum_{x^* \in X_i^*} p(y_i^* \mid x^*, \hat{\theta}).$$

Suppose instead that we do not process overlapping patches and only extract voxel uncertainty. The predictive likelihood becomes

$$p(y_i^* \mid x_i^*, X, Y) = \int p(y_i^* \mid x_i^*, \theta)\, p(\theta \mid X, Y)\, d\theta,$$

the final predicted value is

$$\hat{y}_i^* = \frac{1}{M} \sum_{m=1}^{M} f(x_i^*, \theta_m),$$

and the variance is

$$\hat{\sigma}_i^2 = \frac{1}{M} \sum_{m=1}^{M} \left( f(x_i^*, \theta_m) - \hat{y}_i^* \right)^2.$$
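The decomposition can be illustrated numerically. The following sketch is our own illustration (not the authors' code): a toy predictor stands in for the stochastic forward pass f(x*, θ_m), and, with population variances, the total voxel variance splits exactly into the mean within-patch (voxel) variance plus the between-patch variance, mirroring the equations above.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 8, 32  # overlapping patches per voxel, Monte Carlo dropout samples

def f(patch_shift, rng):
    """Toy stand-in for one stochastic forward pass f(x*, theta_m)."""
    return 40.0 + patch_shift + rng.normal(0.0, 5.0)

patch_shifts = rng.normal(0.0, 3.0, size=N)          # inter-patch variation
preds = np.array([[f(s, rng) for _ in range(M)] for s in patch_shifts])

y_hat = preds.mean()                     # final prediction (1/NM double sum)
total_var = preds.var()                  # total voxel uncertainty
voxel_var = preds.var(axis=1).mean()     # mean MC spread within each patch
patch_var = preds.mean(axis=1).var()     # spread of the per-patch means
print(y_hat, total_var, voxel_var + patch_var)  # total = voxel + patch
```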
TABLE I. LESION SUV ERRORS OVER THE VOLUME COMPARED TO CTAC IN PATIENTS WITHOUT IMPLANTS

We evaluated the PET reconstructions and quantitatively compared SUVmax with reference CTAC PET. High uptake lesions and lesion-like objects were identified on the PET images reconstructed with UpCT-MLAA and separated into two categories: (1) in-plane with the metal implant, and (2) out-of-plane of the metal implant. A Wilcoxon signed-rank test was used to compare the SUV and SUVmax values between the different reconstruction methods and CTAC PET.
"year": 2021,
"sha1": "9dd348e8227d28e7c6f6e09dde53c391beb4b99c",
"oa_license": "CCBY",
"oa_url": "https://ieeexplore.ieee.org/ielx7/7433213/9812930/09560134.pdf",
"oa_status": "HYBRID",
"pdf_src": "ArXiv",
"pdf_hash": "9dd348e8227d28e7c6f6e09dde53c391beb4b99c",
"s2fieldsofstudy": [
"Engineering",
"Medicine",
"Computer Science"
],
"extfieldsofstudy": [
"Medicine",
"Physics",
"Engineering"
]
} |
A systematic approach in rehabilitation of hemimandibulectomy: A case report
Loss of mandibular continuity results in deviation of the remaining mandibular segment toward the resected side, primarily because of the loss of tissue involved in the surgical resection. The success in rehabilitating a patient with hemimandibulectomy depends upon the nature and extent of the surgical defect, the treatment plan, the type of prosthesis, and patient co-operation. The earlier mandibular guidance therapy is initiated in the course of treatment, the more successful is the patient's definitive occlusal relationship. Prosthodontic treatment coupled with an exercise program helps in reducing mandibular deviation and improving masticatory efficiency. This case report describes the prosthodontic management of a patient who had undergone a hemimandibulectomy and was rehabilitated using a provisional guide flange prosthesis followed by definitive maxillary and mandibular cast partial dentures with precision attachments designed to fulfill the patient's needs and requirements.
The patient had been diagnosed with early squamous cell carcinoma involving the left buccal mucosa and mandibular alveolus, and a left-side hemimandibulectomy had therefore been performed 6 months earlier. Radiation therapy had been completed a month before. Extraoral examination revealed facial asymmetry, deviation of the lower third of the face, decreased mouth opening, significant deviation of the mandible to the left side on mouth opening, drooping of the left corner of the mouth, angular cheilitis, and absence of the left condyle and ramus on palpation. The patient could manually guide herself into occlusion. Intraoral examination revealed a left mandibular defect distal to the lateral incisor, a surgical skin graft on the resected side, and missing teeth 23-27, 34-37, 32-43, and 45-47. The maxillary and mandibular arches were partially edentulous, representing Kennedy's Class II and Class I conditions, respectively. Both ridges were smooth and round, with well-keratinized mucosa and sufficient height and width for support. Root pieces were present in the 46, 47 region. An orthopantomogram revealed the absence of the mandible distal to the mandibular left canine [Figure 1b]. The case was diagnosed as a Cantor and Curtis Class II mandibular defect. The treatment plan was a mandibular guide flange prosthesis to aid in correction of the mandibular deviation, followed by a definitive prosthesis consisting of a maxillary cast partial denture with a double row of teeth on the non-resected side and a mandibular cast partial denture retained by precision attachments with a buccal guiding flange.
Preliminary impressions were made in addition silicone putty (Ad-Sil Putty, Prime Dental Pvt. Ltd., Mumbai, Maharashtra, India) in an adhesive-coated custom tray. Due to the limited mouth opening, a satisfactory impression could not be made in a stock tray. Custom trays were fabricated in autopolymerizing acrylic resin (DPI autopolymerized acrylic resin, Mumbai, Maharashtra, India) on primary casts of another patient having a closely resembling arch form [Figure 2a]. The maxillary impression was made in two parts, held together by orientation blocks made on the polished surface of the custom tray [Figure 2a and b]. Casts were poured in Type III dental stone (Dutt Stone, Dutt Industries, Mumbai, Maharashtra, India). The denture base was fabricated in autopolymerizing acrylic resin, occlusal rims were fabricated in modeling wax (Maarc, Shiva Product, Mumbai, Maharashtra, India), and the jaw relation was recorded. The patient's tactile sense, or sense of comfort, was used to assess the vertical dimension of occlusion. The patient was advised to move the mandible as far as possible to the untreated side manually and then gently close the jaw into position to record a functional maxillomandibular relationship. The maxillary cast was mounted using a facebow record (Hanau Spring bow; Whipmix Corporation, Louisville, KY, USA) on a semi-adjustable articulator (Hanau Wide-Vue; Whipmix Corporation, Louisville, KY, USA), and the mandibular cast was mounted with reference to the recorded jaw relation. The prosthesis was designed with a buccal guiding flange and a supporting flange on the lingual side [Figure 3a]. Retention was provided by retentive clasps made from 19-gauge round stainless steel orthodontic wire (KC Smith and CO, Monmouth, UK). The guide flange extended superiorly on the buccal surface of the maxillary premolars, allowing the determined occlusal closure. The guide flange was sufficiently relieved to avoid trauma to the maxillary teeth and gingiva during functional movements. Acrylization was done using heat-cure acrylic resin (DPI heat-polymerized acrylic resin, Mumbai, Maharashtra, India), with clear acrylic (DPI heat-polymerized clear acrylic resin) used for the flange for esthetic purposes. The prosthesis was finished, polished, and inserted intraorally [Figure 3b]. The patient wore the guiding flange for 4 months, after which the root pieces in the 46, 47 region were extracted.
The definitive prosthesis was then fabricated, consisting of maxillary and mandibular cast partial dentures. Crown preparation was done for 33 and 44, and the final impression was made in addition silicone (Ad-Sil light body, Prime Dental Pvt. Ltd., Mumbai, Maharashtra, India). The cast was poured in Type IV die stone and a wax pattern was made. An extracoronal attachment, OT Strategy (Rhein 83, USA), was attached to the pattern such that it was directed toward the center of the ridge, and undercuts were blocked out. Casts were duplicated in refractory material (Wirovest, Bego, Germany) using agar (Wirogel M, Bego, Germany). A wax pattern was made and casting was done to obtain the cast partial denture framework [Figure 4b].
A framework trial was done, followed by recording of the jaw relation. Teeth arrangement (Acryrock, Ruthinium Dental Products Pvt. Ltd., India) was done [Figure 5a-c] and the trial denture was evaluated. Acrylization was done in heat-cure acrylic resin (Lucitone 199, Dentsply, York Division, USA) [Figure 6a]. The denture was finished, polished, and inserted in the patient's mouth [Figure 6b and c]. The patient wore the denture for 10 days to acclimatize, and the guiding flange was then cut off. A significant reduction in mandibular deviation was observed, and maximum intercuspation could be achieved owing to the guidance from the twin row of teeth. The patient was very satisfied with the functional and esthetic performance of the prosthesis and has been on periodic recall for 4 years.
DISCUSSION
Loss of mandibular continuity results in deviation of the remaining mandibular segment toward the resected side, primarily because of the loss of tissue involved in the surgical resection. It also causes inferior rotation of the mandibular occlusal plane on the defect side. The pull of the suprahyoid muscles on the residual mandibular fragment causes inferior displacement and rotation around the fulcrum of the remaining condyle, giving a tendency toward an anterior open bite. [3] The greater the loss of tissue, the greater the deviation of the mandible to the resected side, thus compromising the prognosis of treatment. [4,5] The techniques described to reduce mandibular deviation by retraining the patient's neuromuscular system include exercise programs, removable partial denture prostheses for dentulous patients and complete denture prostheses for edentulous patients, together with modification of the occlusal scheme to compensate for the deviation. [6] This article describes the functional rehabilitation of a hemimandibulectomy patient who underwent resection without reconstruction. A guide flange helps in such cases to prevent deviation of the mandible and to improve masticatory function and esthetics. This therapy is most successful in patients for whom the resection involves only bony structures, with minimal sacrifice of tongue, floor of the mouth, and adjacent soft tissues. [4] The exercise suggested by Beumer et al. [7] was recommended to the patient: following maximum opening, the patient manipulates the mandible by grasping the chin and moving the mandible away from the surgical side. These movements tend to loosen scar contracture, reduce trismus, and improve maxillomandibular relationships. The guide flange was used for a period of 4 months, until the patient experienced a considerable decrease in deviation (improvement was observed after 4 weeks of insertion). The success in rehabilitating a patient with hemimandibulectomy depends upon the nature and extent of the surgical defect, the treatment plan, the type of prosthesis, and patient co-operation. The earlier mandibular guidance therapy is initiated in the course of treatment, the more successful is the patient's definitive occlusal relationship. [8] Any delay in the initiation of mandibular guidance appliance therapy, due to problems such as extensive tissue loss, radiation therapy, radical neck dissection, flap necrosis, and other postsurgical morbidities, may result in an inability to achieve a normal maxillomandibular relationship. [9] The root pieces were extracted later, during the stage of the definitive prosthesis: since the patient had a history of radiation therapy, the extraction was delayed to avoid osteoradionecrosis.
Definitive treatment involved fabrication of a maxillary cast partial denture with two rows of teeth. This arrangement helped achieve better intercuspation and thus improved mastication. The palatal row of teeth provided a favorable occlusal relationship, and the buccal row of teeth supported the cheeks. A functional occlusal record was obtained in wax placed lingual to the maxillary posterior teeth and used as an index to arrange the palatal row of teeth. To obtain stable occlusal intercuspation, the mandibular teeth on the unresected side were arranged buccal to the crest of the ridge and the teeth on the resected side more lingually. The guide flange was cut off from the mandibular cast partial denture once the patient had acclimatized to the new prosthesis. The twin row of teeth helped maintain intercuspation thereafter. Mastication was confined to the nonresected side only, and the teeth on the resected side provided bilateral occlusion and thus stabilization of the prosthesis. [2] Recalls were carried out over a period of 4 years, and the patient reported an increase in masticatory efficiency and seemed happy with the treatment.
An attachment-retained prosthesis is valuable in such cases because of its stress-breaking effect. Esthetics is greatly improved, without any metal display. The retention provided by the attachment can be increased with various retentive caps according to the patient's comfort. In this case, an extracoronal attachment was used on a single tooth on either side, so special attention was given to maintaining a favorable crown-to-root ratio. Mesial rests coupled with precision attachments were used for effective stress distribution. Both teeth exhibited sufficient root length and bone support. The teeth were evaluated during the periodic recalls, and a healthy periodontal status was maintained. Considerable improvement in the facial profile of the patient was observed post-treatment [Figure 7a and b], and further improvement was seen during recall visits.
Adell et al. [10] carried out a retrospective evaluation of the possibility of providing every patient with dental rehabilitation after segmental resections and primary jaw reconstructions. Osseointegrated implants are the more recent and advanced treatment modality for craniofacial reconstruction. However, they require an extensive period for healing and acceptance of the graft and are expensive. Thus, more immediate and economical means of prosthetic rehabilitation are preferred by most patients. [4]
CONCLUSION
The prognosis of the prosthesis in the functional rehabilitation of a hemimandibulectomy patient who has undergone resection without reconstruction is guarded. A guide flange prosthesis is the most common treatment modality. However, in cases where a sufficient number of abutment teeth are not present and where the deviation is massive, providing twin occlusion rehabilitates the patient functionally. Surgical reconstruction with implants and grafts of various types is the ideal treatment when feasible. However, since it is not feasible in every patient, an alternative prosthodontic approach has to be considered to restore esthetics and function in such patients.
Declaration of patient consent
The authors certify that they have obtained all appropriate patient consent forms. In the form the patient(s) has/have given his/her/their consent for his/her/their images and other clinical information to be reported in the journal. The patients understand that their names and initials will not be published and due efforts will be made to conceal their identity, but anonymity cannot be guaranteed.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.
"year": 2016,
"sha1": "4f98dcfb9f45f63101612081c2d6f4b0113d6162",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/0972-4052.164914",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ef6dd528bc23d22fbfc96d93445a29a4264de651",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Orbital angular momentum light frequency conversion and interference with quasi-phase matching crystals
Light with helical phase structure, carrying quantized orbital angular momentum (OAM), has many applications in both classical and quantum optics, such as high-capacity optical communications and quantum information processing. Frequency conversion is a basic technique for expanding the frequency range of fundamental light. The frequency conversion of OAM-carrying light gives rise to new physics and applications such as up-conversion detection of images and high-dimensional OAM entanglement. Quasi-phase matching (QPM) nonlinear crystals are good candidates for frequency conversion, particularly for their high-valued effective nonlinear coefficients and absence of walk-off effect. Here we report the first experimental second-harmonic generation (SHG) of OAM light with a QPM crystal, in which UV light with an OAM of 100 is generated. OAM conservation is verified using a specially designed interferometer. With a pump beam carrying a superposition of OAM states of opposite sign, we observed interesting interference phenomena in the SHG light; specifically, a photonic gear-like structure is obtained that gives direct evidence of OAM conservation and will be very useful for ultra-sensitive angular measurements. We also develop a theory to reveal the underlying physics of the phenomena. The methods and theoretical analysis shown here are also applicable to other frequency conversion processes, such as sum frequency generation and difference-frequency generation, and may also be generalized to the quantum regime for single photons.
Allen [20,21] demonstrated OAM transformation and conservation during frequency conversion in an LBO crystal. Zeilinger's group [12] has realized high-dimensional OAM entanglement in spontaneous parametric down-conversion processes. In all these nonlinear interaction processes, the total OAM conservation of light plays a very important role. The frequency conversion of OAM light will be very useful for up-conversion detection of images [26] and for generating OAM light at special wavelengths (in the UV or mid-infrared frequency domains) that are hard to produce with traditional methods. For nonlinear processes with crystals, the benefits of quasi-phase matching (QPM) compared with birefringence phase matching make QPM crystals good candidates for frequency conversion of OAM light, particularly for their high-valued effective nonlinear coefficients and absence of walk-off effect. Some important questions then arise naturally: can we use QPM crystals for nonlinear frequency conversion of OAM light? Is the total OAM of light conserved in such nonlinear processes? Is frequency conversion of an OAM superposition state possible? So far, no experimental work has been reported on such frequency conversion processes, although one theoretical study has appeared on the processes of sum frequency generation (SFG) and second-harmonic generation (SHG) [27].
In this work, the previously posed questions are answered: we report the first experimental generation of OAM-carrying UV light by SHG with a QPM type-I PPKTP crystal. We demonstrate the conservation of OAM in the SHG process, which concurs with the theory of Ref. 27. Moreover, we observe a very interesting interference phenomenon by transforming the pump light into a hyper-superposition of polarization and OAM states. We directly see a photonic gear-like structure that has never before been observed or discussed in three-wave mixing processes. This phenomenon can be regarded as direct evidence of OAM conservation. The photonic gear can be rotated by rotating the pump-beam polarization, an effect that can be used for ultra-sensitive angular measurements. These observations are well explained by the theory we have developed. The method we demonstrate here provides a new way to generate OAM light via frequency conversion in QPM crystals. Moreover, because of its low diffraction, UV light can enhance the resolution of OAM light-based imaging. Using the SHG process in OAM light-based ultra-sensitive angular measurements [18], resolutions can be further enhanced by a factor of 2. Our approach may also be used in sum frequency generation (SFG) or difference-frequency generation (DFG) [28] at the single-photon level. This will be very useful for quantum information processing using the OAM degrees of freedom of photons.
We first demonstrate OAM conservation in the SHG process. Figure 1 shows the different blocks used in our experiments; blocks a, b, and e are used in the conservation demonstration. Block-a is used to generate OAM light with the proper polarization using vortex phase plates (VPPs, from RPC Photonics). Block-b performs the frequency conversion and comprises two lenses (both with the same focal length of 125 mm), a type-I PPKTP crystal, and a UV filter used to remove the pump light. The 1 mm × 2 mm × 10 mm PPKTP crystal, supplied by Raicol Crystals, was designed for SHG from 795 nm to 397.5 nm. Both end faces are anti-reflection coated for these two wavelengths. The measured nonlinear conversion efficiency of the PPKTP crystal is 1%/W for a Gaussian pump mode.
In our experiments, the pump power was 10 mW, which produces an SHG light power of around 1 μW.
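These values are consistent with the quadratic pump-power scaling expected for CW SHG. A quick check (our own arithmetic, assuming P_SHG = ηP_pump² with the quoted η = 1%/W):

```python
eta = 0.01      # conversion efficiency, W^-1 (1%/W for a Gaussian pump mode)
p_pump = 10e-3  # pump power, W
p_shg = eta * p_pump ** 2
print(p_shg)    # 1e-06 W, i.e. about 1 uW, matching the measured SHG power
```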
The laser light we used was from a continuous-wave Ti:sapphire laser (Coherent MBR 110, less than 100 kHz linewidth when locked). The measured phase-matching temperature of the crystal is 64.3°C.
The temperature of the crystal was controlled with a semiconductor Peltier cooler with a stability of ±2 mK. Block-c is used to transform the light into a superposition of OAM states; block-e is a specially designed balanced interferometer used to determine the OAM value of the input light. Block-f has the same function as block-c, but uses an SLM instead of a VPP. When we use pump-beam light in a hyper-superposition of polarization and OAM states for SHG, a photonic gear-like structure is obtained. Before showing the experimental results, we first give a detailed theoretical description. We use quantum mechanics to describe the transformation of light in block-c or -f; a configuration similar to that presented in Refs. 17, 29, and 30 is used in our experiment. We assume that the input beam of the interferometer is in a Gaussian mode and is polarized in the horizontal direction. The input state can be expressed as

$$|\psi_{in}\rangle = |H\rangle \otimes |0\rangle,$$

where $H$ denotes the polarization degree of freedom and $0$ represents the OAM degree of freedom. After passing through block-c (or block-f), the light is transformed into the state

$$|\psi\rangle = \frac{1}{\sqrt{2}}\left[e^{i2\theta}\left(\cos 2\delta\,|H\rangle + \sin 2\delta\,|V\rangle\right)|l\rangle + e^{-i2\theta}\left(\sin 2\delta\,|H\rangle - \cos 2\delta\,|V\rangle\right)|{-l}\rangle\right],$$

where θ and δ are the angles of the fast axis of the half-wave plate (HWP) with respect to the vertical axis at the respective input and output ports of the block, and l is the OAM quantum number imprinted on the two counter-propagating beams in the interferometer. The output SHG light then takes the form (see Supplementary Information for details)

$$|\psi_{SHG}\rangle = \Gamma\left[e^{i4\theta}\cos^2 2\delta\,|2l\rangle + e^{-i4\theta}\sin^2 2\delta\,|{-2l}\rangle + \sin 4\delta\,|0\rangle\right],$$

where Γ is a renormalization constant. This expression shows that the output SHG light is a superposition of the OAM states 2l, −2l, and 0, with weights that depend on the angle δ. We now focus on the case δ = π/8: apart from a relative phase of 8θ, the first two terms have the same amplitude. As mentioned before, an interference pattern with 4l maxima is then generated in the intensity distribution of the outer ring, giving direct evidence of OAM conservation in the SHG process. More interestingly, the interference pattern rotates when the phase θ is changed, indicating that the total phase of the pump beam is preserved in the SHG process. This behaviour is similar to a mechanical gear: when θ changes by π/4, the pattern rotates through an angle of π/(2l), and this can be exploited for ultra-sensitive measurements of angles. Furthermore, by changing δ, we can switch easily between the LG modes with OAM 2l and −2l. For small OAM, the diffraction of the LG mode is blurred and hard to see; only a dim point can be distinguished at the centre, and hence we cannot observe the multi-ring structure. By rotating the angle of the HWP at the input port of the interferometer in block-c, a rotation of the output image is observed. We also find that the image of the SHG light is clearer than that of the input; this is because waves of shorter wavelength are diffracted less. With the SLM in block-f we prepared two kinds of pump states: the first is the same as that prepared using VPPs, the second is an asymmetrical state. Using this configuration, the two counter-propagated beams have the same optical length with an intrinsically stable phase between them. The results are shown in Figure 4. The first image in each row is the phase diagram of the SLM for generating LG modes with a specific l value; the other images are arranged as in Figure 3. For each l, the number of maxima in the intensity profile of the outer ring is the same as theoretically predicted.
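The petal counting used in these comparisons can be reproduced numerically. The sketch below is our own illustration (with arbitrarily chosen l and θ): it evaluates the azimuthal intensity of the two equal-amplitude ±2l components at δ = π/8 and counts the 4l maxima; shifting θ rotates the pattern by 2θ/l, as stated above.

```python
import numpy as np

l, theta = 3, np.pi / 16                  # example values (assumed)
phi = np.linspace(0, 2 * np.pi, 4096, endpoint=False)

# Outer-ring field at delta = pi/8: equal-weight superposition of
# exp(i(2*l*phi + 4*theta)) and exp(-i(2*l*phi + 4*theta)).
field = (np.exp(1j * (2 * l * phi + 4 * theta))
         + np.exp(-1j * (2 * l * phi + 4 * theta)))
intensity = np.abs(field) ** 2            # = 2 * (1 + cos(4*l*phi + 8*theta))

peaks = np.sum((intensity > np.roll(intensity, 1))
               & (intensity > np.roll(intensity, -1)))
print(peaks)                              # 4*l = 12 petals in the outer ring
```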
For large l, there is additional SHG light in the central region (rows e and f), arising from limitations in creating the mode with the SLM (high-order LG modes with the same OAM and unmodulated light, respectively). There would be no such artefact if high-quality VPPs were used (see row b in Figure 3 for comparison). In row e, the OAM of the UV light is 100, corresponding to 200 maxima in its intensity profile. We could not increase the OAM further because the SLM cannot operate at high powers and our CCD camera has a limited resolution.
For the asymmetrical state, a superposition of two LG modes with different absolute values of l, the interference pattern of the pump has 15 maxima, whereas the SHG light has 30 maxima. The pattern is not sufficiently clear because LG modes with different absolute values of l have different diffraction properties; hence the two modes do not completely overlap in the far field. The first image in each row is the phase diagram of the SLM for generating a specific OAM-carrying light. The second and fourth images are the respective interference patterns for the pump light, projected onto the diagonal polarization direction, and the SHG light, directly observed after block-b using the CCD camera. The third and fifth images are the corresponding theoretical patterns.
In summary, two experiments using the type-I QPM PPKTP crystal have been conducted to investigate OAM transformation and conservation in the SHG process. In the first, we verified that OAM is conserved in the SHG by directing the pump and SHG OAM light into a specially designed balanced interferometer. The conservation law was confirmed by counting the maxima in the interference intensity profile. As the QPM crystal has a high-valued effective nonlinear coefficient and no walk-off effect, this provides a new method to generate OAM light by frequency conversion in QPM crystals. The image resolution depends on the wavelength of light used; shorter wavelengths yield better image resolutions, so UV OAM light would be suitable for OAM light-based phase imaging. In the second experiment, we observed a very interesting interference phenomenon when pumping the PPKTP crystal with a superposition of two OAM states of opposite sign. The output SHG light intensity profile depended on the polarization of the pump light. A photonic gear-like structure was observed that can be rotated when the pump polarization is rotated. This effect can be used for remote sensing, OAM light-based ultra-sensitive angular measurements, and detection of spinning objects [31]. This interference effect can also be used for optical switching between different SHG patterns by controlling the polarization of the pump beam. We also gave analytical expressions for the propagation of the SHG light in the tight-focus approximation. All experimental phenomena are well explained by the theory we have developed. For SFG and DFG conversions, the method is not limited to the classical regime and can be extended into the quantum regime for single photons.

We use the language of quantum mechanics to describe the transformation of light via blocks c or f, with a configuration similar to that described in Refs. 17, 29, and 30. We assume a Gaussian spatial mode polarized in the horizontal direction for the input beam of the interferometer. The input state can be expressed in the tensor product form $|H\rangle \otimes |0\rangle$, in which the first ket gives the polarization degrees of freedom and the second the OAM degrees of freedom. The function of the half- and quarter-wave plates is to apply a unitary rotation to the polarization degrees of freedom. We use the Jones calculus notation, with the convention $|H\rangle = \binom{1}{0}$ and $|V\rangle = \binom{0}{1}$. The quarter- and half-wave plates, whose fast axes are at angles ϕ and θ with respect to the vertical axis, are represented by the respective 2 × 2 matrices

$$Q(\phi) = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 + i\cos 2\phi & i\sin 2\phi \\ i\sin 2\phi & 1 - i\cos 2\phi \end{pmatrix}, \qquad H(\theta) = \begin{pmatrix} \cos 2\theta & \sin 2\theta \\ \sin 2\theta & -\cos 2\theta \end{pmatrix}.$$

After passing through the plates, the polarization of the beam becomes $H(\theta)\,Q(\phi)\,|H\rangle$.
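The wave-plate algebra can be checked numerically. The sketch below is ours (standard Jones calculus, with the angle convention assumed as above); it verifies that the matrices are unitary by applying them to a horizontally polarized input, as in block-c.

```python
import numpy as np

def hwp(t):
    """Half-wave plate, fast axis at angle t (Jones matrix above)."""
    return np.array([[np.cos(2 * t),  np.sin(2 * t)],
                     [np.sin(2 * t), -np.cos(2 * t)]])

def qwp(p):
    """Quarter-wave plate, fast axis at angle p (Jones matrix above)."""
    return np.array([[1 + 1j * np.cos(2 * p), 1j * np.sin(2 * p)],
                     [1j * np.sin(2 * p), 1 - 1j * np.cos(2 * p)]]) / np.sqrt(2)

H = np.array([1.0, 0.0])                    # horizontal polarization
out = hwp(np.pi / 8) @ qwp(np.pi / 4) @ H   # QWP then HWP, example angles
print(np.allclose(np.vdot(out, out), 1.0))  # True: norm preserved (unitary)
```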
"year": 2014,
"sha1": "de672c9ed4fc4d4ad0f33e5bfc017ee0604e0879",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1364/oe.22.020298",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "de672c9ed4fc4d4ad0f33e5bfc017ee0604e0879",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
Glutathione Transferases Sequester Toxic Dinitrosyl-Iron Complexes in Cells
It is now well established that exposure of cells and tissues to nitric oxide leads to the formation of a dinitrosyl-iron complex bound to intracellular proteins, but little is known about how the complex is formed, the identity of the proteins, and the physiological role of this process. By using EPR spectroscopy and enzyme activity measurements to study the mechanism in hepatocytes, we here identify the complex as a dinitrosyl-diglutathionyl-iron complex (DNDGIC) bound to Alpha class glutathione S-transferases (GSTs) with extraordinarily high affinity (K_D = 10⁻¹⁰ M). This complex is formed spontaneously through NO-mediated extraction of iron from ferritin and transferrin, in a reaction that requires only glutathione. In hepatocytes, DNDGIC may reach concentrations of 0.19 mM, apparently entirely bound to Alpha class GSTs, which are present in the cytosol at a concentration of about 0.3 mM. Surprisingly, about 20% of the dinitrosyl-glutathionyl-iron complex-GST is found to be associated with subcellular components, mainly the nucleus, as demonstrated in the accompanying paper (Stella, L., Pallottini, V., Moreno, S., Leoni, S., De Maria, F., Turella, P., Federici, G., Fabrini, R., Dawood, K. F., Lo Bello, M., Pedersen, J. Z., and Ricci, G. (2007) J. Biol. Chem. 282, 6372-6379). DNDGIC is a potent irreversible inhibitor of glutathione reductase, but the strong complex-GST interaction ensures full protection of glutathione reductase activity in the cells, and in vitro experiments show that damage to the reductase only occurs when the DNDGIC concentration exceeds the binding capacity of the intracellular GST pool. Because Pi class GSTs may exert a similar role in other cell types, we suggest that specific sequestering of DNDGIC by GSTs is a physiological protective mechanism operating in conditions of excessive levels of nitric oxide.
More than 30 years ago it was discovered that paramagnetic dinitrosyl iron complexes (DNICs) can be formed in biological systems (1). These compounds can be observed in isolated cells or tissues incubated or perfused with NO or NO-generating systems (1-5), but traces are also present in tissues under physiological conditions (4). Such complexes, in which a ferrous ion coordinates two nitric oxide molecules together with two other ligands, have characteristic EPR spectra centered at about g = 2.03, which made possible their discovery in cells or tissues. Indeed, this is the only way in which NO can be observed directly in living systems. Although the natural occurrence of DNICs has been demonstrated unequivocally, their chemical identity in vivo is still ambiguous. In fact, even if DNICs exist as free low molecular mass complexes of the general formula (NO)₂(RS)₂Fe, e.g. the dinitrosyl-diglutathionyl iron complex (DNDGIC) and the dinitrosyl-dicysteinyl iron complex, the existence of such free complexes in vivo has never been demonstrated; they always appear bound to unknown proteins (1). The binding to proteins is possible after replacing one thiol ligand of the free complex with a protein serine, tyrosine, or cysteine to complete the coordination shell of the iron. All these paramagnetic species show very similar EPR spectra centered around g = 2.03, and this technique is unable to define their precise chemical composition (6). The physiological role of DNICs is also controversial; it has been suggested that they function as more stable natural NO carriers, but they are also known to have toxic effects in biological systems (1). In particular, DNDGIC at micromolar concentrations is a potent and irreversible inhibitor of glutathione reductase (7,8).
We recently proposed that glutathione transferases (GSTs) could be involved in DNIC binding, storage, and detoxification in living systems (9-11). GSTs represent a group of enzymes ubiquitously distributed in all organisms and devoted to cell defense. The mammalian GSTs have been grouped into at least eight classes, termed Alpha, Kappa, Mu, Omega, Pi, Sigma, Theta, and Zeta (12-19). These enzymes catalyze the conjugation of GSH to the electrophilic center of many toxic compounds and also promote GSH-mediated reduction of organic hydroperoxides. In addition, GSTs may act as ligandins for xenobiotics (20) and also as antiapoptotic proteins through protein-protein interaction with Jun kinase (21). Recently, we demonstrated that Alpha, Pi, and Mu class GSTs, which represent 90-95% of all mammalian GSTs, bind the dinitrosyl-diglutathionyl iron complex with extraordinarily high affinity, showing K_D values of 10⁻¹⁰-10⁻⁹ M (9-11). The association of DNDGIC with GSTs has been thoroughly investigated, revealing that one of the glutathiones in the iron complex binds to the enzyme G-site, whereas the other GSH molecule is lost and is replaced by a tyrosine phenolate in the coordination of the ferrous ion (11). Thus, strictly speaking, the bound complex is a monoglutathionyl species (DNGIC). The x-ray crystallographic structure of DNGIC bound to GSTP1-1 has been solved recently, confirming the structure proposed on the basis of molecular modeling studies (22). Binding of DNGIC to the first subunit of the dimeric Alpha, Pi, and Mu GSTs also triggers a peculiar intersubunit communication, which lowers the affinity of the second subunit (11).
We found evidence that in crude liver homogenates one target of DNICs could be the pool of GSTs (10), which thus could represent a significant part of the "unknown" proteins that apparently bind DNICs. However, no previous studies have assessed the occurrence of such a complex-enzyme association in living cells, nor has any physiological role of this phenomenon been defined. Furthermore, the intracellular iron source for DNIC formation has never been determined. This study demonstrates that DNDGIC is formed spontaneously in intact rat hepatocytes after exposure to GSNO; this complex is never detected as a free species but is always bound to GSTs. The preferential binding proteins in rat hepatocytes are the Alpha class GSTs, which stabilize the complex for many hours. Ferritin is the likely iron source for DNDGIC, but the amount of complex formed never exceeds the buffer capacity of the endogenous pool of GSTs. Evidence is also given that this highly specific interaction is essential to protect glutathione reductase against irreversible inactivation by DNDGIC.
Preparation of Rat Liver Homogenate-Rat liver homogenate was prepared starting from 10 g of Sprague-Dawley male rat liver washed twice with 200 ml of phosphate-buffered saline. The tissue was homogenized in 100 ml of 0.25 M sucrose and centrifuged at 1000 × g to remove the nuclear fraction. The estimated concentration of the GST pool was 18 µM. Alternatively, the rat liver was homogenized in 30 ml of 0.25 M sucrose to obtain a more concentrated GST medium (56 µM).
Hepatocytes were isolated from male Wistar rats (2 months old, 100-120 g) as reported previously (26). Rats were anesthetized with pentobarbital (50 mg/kg body weight, injected intraperitoneally) before rapid killing by cervical dislocation and subsequent liver dissection. Experiments were carried out in accordance with the ethical guidelines for animal research (Italian Ministry of Health).
Preparation of Subcellular Fractions-After perfusion with 0.25 M sucrose and heparin to remove blood, livers from male rats (about 10 g) were excised, minced, and homogenized in a Potter-Elvehjem homogenizer in 0.25 M sucrose and 10 mM potassium phosphate buffer, pH 7.4 (50 ml per 5 g of liver). After a brief centrifugation to remove unbroken cells, the homogenate was incubated with 1 mM GSNO for 2 h and then centrifuged at 1000 × g for 10 min to isolate the nuclear fraction. The nuclear pellet was washed three times with 20 ml of 0.25 M sucrose and 10 mM potassium phosphate buffer, pH 7.4. The collected supernatants were centrifuged at 3,300 × g for 10 min to isolate the mitochondrial fraction. With similar procedures the lysosomal fraction (16,300 × g for 20 min) and the microsomal pellet (105,000 × g for 30 min) were isolated. Each fraction was washed three times with 10 volumes of 0.25 M sucrose in 10 mM potassium phosphate buffer, pH 7.4. Each fraction was tested for purity through measurement of the activities of several marker enzymes typically located in separate cellular compartments: glucose-6-phosphate dehydrogenase for the cytosol, cytochrome oxidase for mitochondria, acid lipase for lysosomes, and glucose-6-phosphatase for microsomes. In addition, the quality of the isolated nuclei was examined using electron microscopy (not shown). Cross-contamination in each fraction was below 10%. The nuclear fraction showed less than 2% cytosol contamination; the mitochondrial fraction contained less than 1% nuclei as judged by DNA content.
Glutathione Reductase Activity-Glutathione reductase activity was assayed at 25°C using a solution of 1 mM GSSG and 0.1 mM NADPH in 1 ml (final volume) of 0.1 M potassium phosphate buffer, pH 7.4. The activity was followed spectrophotometrically at 340 nm.
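For reference, the slope of the A₃₄₀ trace converts to enzyme units via the standard NADPH extinction coefficient at 340 nm (6.22 mM⁻¹ cm⁻¹). The sketch below is our own illustration; the sample volume and slope are assumed example values, not those of the paper.

```python
def gr_units_per_ml(dA340_per_min, assay_vol_ml=1.0, sample_vol_ml=0.05,
                    path_cm=1.0, eps_mM=6.22):
    """GR activity (umol NADPH oxidized/min = units) per ml of sample."""
    rate_mM_per_min = dA340_per_min / (eps_mM * path_cm)  # mM/min in cuvette
    return rate_mM_per_min * assay_vol_ml / sample_vol_ml

print(gr_units_per_ml(0.06))  # e.g. 0.06 A/min with a 50 ul sample -> ~0.19 U/ml
```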
EPR Analysis-Samples for EPR experiments were usually prepared using hepatocytes in phosphate-buffered saline or rat liver homogenate in 0.25 M sucrose, with DNDGIC added from a freshly made stock solution. EPR measurements were carried out at room temperature with a Bruker ESP300 X-band instrument (Bruker, Karlsruhe, Germany) equipped with a high sensitivity TM₁₁₀-mode cavity. To optimize instrument sensitivity, spectra were recorded using samples of 80 µl contained in flat glass capillaries (inner cross-section 5 × 0.3 mm) (27). Unless otherwise stated, spectra were measured over a 200-G range using 20 milliwatts power, 2.0 G modulation, and a scan time of 42 s; typically 4-40 single scans were accumulated to improve the signal-to-noise ratio. The EPR signal was quantified by comparison with standard samples containing known concentrations of DNDGIC and GST, as described previously (11). The limit of detection was ~2 µM, and the range was linear up to at least 50 µM DNGIC-GST.
Calculation of Intracellular DNIC Concentrations-DNDGIC and DNGIC-GST were determined on the basis of EPR spectra. Calculations of the cytosolic concentrations of both DNGIC-GST and GSTs in rat hepatocytes and in rat liver homogenates were made assuming a hepatocyte volume of 8 × 10⁻¹² liters and a cytosol volume corresponding to 56% of the cell volume. The volume of the cytosol is 0.28 ml per g of fresh liver (28). The concentration of the cytosolic GSTs was 0.7 mM.
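This conversion can be made explicit. In the sketch below, the hepatocyte volume and cytosol fraction are the values quoted above; the cell volume fraction of the suspension is our illustrative assumption, chosen so that the 12 µM suspension value reported under "Results" reproduces the quoted ~0.19 mM intracellular concentration.

```python
hepatocyte_volume = 8e-12     # liters per cell (quoted above)
cytosol_fraction = 0.56       # cytosol as fraction of cell volume (quoted above)

measured_uM = 12.0            # DNGIC-GST measured in the whole suspension
cell_volume_fraction = 0.063  # assumed: cells occupy ~6.3% of the sample

intracellular_mM = measured_uM / cell_volume_fraction / 1000
print(round(intracellular_mM, 2))   # ~0.19 mM intracellular

alpha_gst_mM = 0.43 * 0.7     # Alpha class share of the 0.7 mM cytosolic GSTs
print(round(alpha_gst_mM, 2))  # ~0.3 mM, the Alpha GST level quoted in the abstract
```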
Theoretical Inhibition of the Cytosolic GSTs Because of DNDGIC Binding-An inhibition simulation algorithm was developed based on the following assumptions. (a) In the male rat liver, Alpha and Mu GSTs represent 43 and 56% of the pool, respectively (29,30). These values were confirmed for our male rat liver preparations by means of high pressure liquid chromatography (31). (b) The specific activities of Alpha and Mu GSTs are 16 and 22 units/mg, respectively. These values are the weighted averages of the specific activities of the three major Alpha isoenzymes, i.e. GSTA1-1 (18 units/mg), GSTA2-2 (18 units/mg), and GSTA3-3 (14 units/mg), and of the two major Mu isoenzymes, i.e. GSTM1-1 (29 units/mg) and GSTM2-2 (15 units/mg) (32). (c) The K_D values for the high and low affinity binding sites of Alpha and Mu GSTs were reported previously (11). (d) Half-site inhibition is operative for the Alpha GSTs, i.e. 95% inhibition when the enzyme is half-saturated (11).
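One way to realize this algorithm in code is sketched below. It is our reading of assumptions (a)-(d), not the authors' program: DNDGIC fills the high-affinity Alpha sites first (one per dimer, with ~95% activity loss at saturation of those sites, per the half-site assumption) and then the Mu sites, and the residual activity is weighted by class abundance times specific activity.

```python
def gst_activity_fraction(dndgic_uM, gst_total_uM=56.0):
    """Residual GST activity after stoichiometric DNDGIC binding (sketch)."""
    alpha_uM = 0.43 * gst_total_uM          # Alpha dimers, 43% of the pool
    mu_uM = 0.56 * gst_total_uM             # Mu dimers, 56% of the pool
    w_alpha, w_mu = 0.43 * 16, 0.56 * 22    # abundance x specific activity

    bound_alpha = min(dndgic_uM, alpha_uM)  # high-affinity Alpha sites first
    bound_mu = min(max(dndgic_uM - alpha_uM, 0.0), mu_uM)
    residual = (w_alpha * (1 - 0.95 * bound_alpha / alpha_uM)  # half-site, ~95%
                + w_mu * (1 - bound_mu / mu_uM))
    return residual / (w_alpha + w_mu)

for c in (0.0, 14.0, 28.0, 56.0):           # uM DNDGIC added
    print(c, round(gst_activity_fraction(c), 2))
```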
Statistics-Results are shown as the mean ± S.D. of at least three experiments.
DNDGIC-GST Interaction in Rat Liver Homogenate-In a first approach, we verified that in rat liver GSTs represent almost exclusively the binding proteins for DNDGIC among all the cytosolic protein components present. Kinetics and EPR experiments were used for this purpose. Incubation of variable amounts of DNDGIC in a liver homogenate (56 µM total GSTs) caused instantaneous and concentration-dependent loss of GST activity. By considering the relative levels of Alpha and Mu GSTs, their different affinities for the complex (K_D = 10⁻¹⁰ and 10⁻⁹ M for Alpha and Mu GSTs, respectively (11)), and their different specific activities (see "Experimental Procedures"), it is possible to calculate the extent of this inhibition in case DNDGIC binds stoichiometrically and exclusively to GSTs, assuming that the isoenzyme with higher affinity (Alpha GST) is involved first. As shown in Fig. 1a, the calculated inhibition corresponds well to that found experimentally. The inhibition pattern of the purified pool of liver GSTs is also very similar (Fig. 1a). Somewhat less inhibition was observed using more diluted samples (5 µM GSTs; data not shown), probably because of incomplete saturation of the low affinity sites of GSTs and the instability of free DNDGIC at very low concentrations (10). The inhibition pattern observed using Alpha class specific co-substrates, like cumene hydroperoxide and 7-chloro-4-nitrobenzo-2-oxa-1,3-diazole, gave further indication that Alpha GST, but not Mu GST, is primarily involved in DNDGIC interaction (data not shown). As expected, EPR analysis of the homogenate after reaction with substoichiometric DNDGIC confirmed that all the complex is bound to protein (Fig. 2).
GSNO Forms DNDGIC in Rat Liver Homogenate-Incubation of 1 mM GSNO in rat liver homogenate (56 µM GSTs) depleted only of the nuclear fraction induces a time-dependent accumulation of DNIC that reaches an apparent plateau of ~18 µM after about 2 h of incubation (Fig. 1b). This is followed by a second phase with a very slow increase that ends only after 14-16 h, at a concentration of ~26 µM DNIC (not shown). The EPR spectra showed that the iron complex does not exist as a free species but is entirely bound to proteins (Fig. 3), and the spectrum is identical to that obtained after addition of authentic DNDGIC to the homogenate. The identity of DNGIC-GST is confirmed by the GST inhibition pattern, which is close to that expected assuming GSTs to be the sole target of this complex (Fig. 1b). Increasing the final concentration of GSH in the homogenate up to the physiological level in rat hepatocytes (10 mM) results in faster kinetics of the first phase of DNDGIC formation, but the final amount of complex formed is the same (not shown). The kinetics of DNDGIC formation also depends on the GSNO concentration (in the range from 0.2 to 5 mM), but the final concentration of DNGIC-GST does not change appreciably (Fig. 1c). Thus it appears that iron availability is the limiting factor for the final level of the complex. In our experimental conditions, DNDGIC never exceeds the amount of the endogenous GST pool, which is 56 µM. Only by adding 50 µM of exogenous ferrous ions to the homogenate can the typical EPR signal of unbound DNICs be seen, superimposed on a large GST-DNGIC signal (Fig. 3). In that case, the GST activity almost disappears, and the amount of the bound DNIC corresponds to the concentration of the entire pool of cytosolic GSTs.
DNDGIC Formation in Intact Hepatocytes-Exposure of rat hepatocytes to GSNO causes a time-dependent intracellular accumulation of a paramagnetic species with an EPR spectrum centered at g = 2.03, very similar to that obtained in the crude homogenate after incubation with GSNO and reasonably attributable to a DNGIC-GST complex (Fig. 3). Also in this case, the kinetics of DNIC formation is proportional to the GSNO concentration (between 0.5 and 2 mM), whereas the final level of the complex is almost independent of it (data not shown). After 2 h of incubation with 1 mM GSNO, DNGIC-GST reaches a plateau of 12 µM in the sample, corresponding to an intracellular concentration of about 0.19 mM (Fig. 1d). As in the homogenate, the EPR signal was stable for several hours; this stability might be due to a steady-state equilibrium between decomposition and re-synthesis of the complex in the presence of an excess of GSNO. However, after repeated washing of the cells, the EPR signal was still stable for hours, suggesting that true stabilization occurs in the cell. At fixed times, hepatocytes were sonicated and centrifuged at 105,000 × g. The cytosolic DNGIC-GST was measured by EPR spectroscopy and compared with the degree of GST inhibition. As observed in the homogenate, the inhibition pattern parallels DNDGIC formation, and it also approaches the inhibition curve calculated for exclusive binding of DNDGIC to the endogenous GSTs (Fig. 1d). Interestingly, DNGIC-GST never exceeds the concentration of the intracellular GST pool; it actually becomes similar to the concentration of the high affinity binding sites of Alpha GST (0.15 mM).
We noticed that the concentration of DNIC measured in the cytosol (0.16 mM) is about 20% lower than that observed in intact cells, suggesting that a non-negligible amount is retained by intracellular organelles or cell membranes. In fact, the 105,000 × g pellet showed the presence of a bound DNIC with an EPR spectrum very similar to that of the DNGIC-GST complex (Fig. 3). Further details were obtained by isolating the nuclear, mitochondrial, lysosomal, and microsomal fractions after 1 h of incubation of a rat liver homogenate with 1 mM GSNO. All subcellular fractions contain detectable amounts of the bound DNIC, but it is mainly localized in the nuclear fraction (Fig. 4). An identical distribution of bound DNICs was found by incubating each subcellular component separately with DNDGIC, indicating that the protein counterpart is constitutively bound to these fractions and not associated as a consequence of DNDGIC binding. As Alpha and Mu GSTs are considered cytosolic enzymes and the peculiar membrane-bound microsomal MGST1 is found to have scarce affinity for DNDGIC, these results might indicate the presence of unknown proteins associated with subcellular organelles, able to bind DNDGIC but different from cytosolic GSTs. Unexpectedly, after treatment with 10 mM KCN to displace the complex (9), we found considerable GST activity associated with these components (not shown). The accompanying paper (31) demonstrates that the EPR signal is entirely due to the DNGIC-GST species and that significant amounts of GSTs, mainly GSTA1-1 and GSTA2-2, are associated with the nucleus.
Ferritin Is the Likely Iron Source for DNDGIC in Hepatocytes-The amount of GST-DNGIC generated both in intact hepatocytes and in liver homogenates after exposure to GSNO requires mobilization of iron from the intracellular iron storage proteins. The cytosolic free iron pool is only 5 µM (33), a concentration 2 orders of magnitude lower than that of the DNDGIC formed in the cell after GSNO treatment. It has been reported previously that iron can be mobilized from ferritin by NO-generating systems (34). We confirm here that, in the presence of GSNO and GSH, iron is readily extracted from purified horse ferritin to produce free DNDGIC (Fig. 5a), and similar results were obtained using transferrin as the iron source (data not shown). Interestingly, the kinetics of DNDGIC formation from ferritin and its final concentration are independent of the presence of GST (data not shown), indicating that GST is not a kinetic or thermodynamic driving force for DNDGIC formation; the complex is formed at the same rate in the presence or absence of GST, the only difference being that in the first case the complex binds immediately to the transferase. Although the kinetics of DNDGIC formation depends directly on the GSNO and ferritin concentrations (Fig. 5, b and c), the final amount of DNDGIC is determined by the amount of ferritin available (Fig. 5a). Importantly, only a small fraction of the iron present in the ferritin protein can be mobilized by GSNO (about 0.3%). The mobilization of iron from horse ferritin also occurs in a complex milieu such as the crude homogenate. Addition of horse spleen ferritin to the rat liver homogenate in the presence of 1 mM GSNO and 10 mM GSH causes a net increase in the DNDGIC formed (Fig. 5d). This overproduction of DNDGIC corresponds to that calculated by assuming that the homogenate does not alter the reaction observed with the purified system. Interestingly, the amount of iron extractable from the endogenous rat liver ferritin appears 10-fold higher than that from the purified horse spleen protein. In fact, the total ferritin iron present in our homogenate is about 1 mM, whereas the final concentration of DNDGIC is 28 µM (about 3%). A higher propensity for iron mobilization from rat ferritin compared with the horse protein has been observed previously, in the case of iron extraction by superoxide ions (35).
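The mobilization percentages quoted above can be checked directly. In this small check (ours), the ~4 mM total iron for 1.4 mg/ml horse ferritin is the estimate given in the Fig. 5a legend, and the ~12 µM horse-ferritin DNDGIC value is our reading of the ~0.3% release from that iron pool:

```python
# Fraction of ferritin iron mobilized as DNDGIC (values quoted in the text
# and in the Fig. 5a legend; the 12 uM horse-ferritin value is assumed).
print(12 / 4000 * 100)   # horse spleen ferritin: ~0.3%
print(28 / 1000 * 100)   # endogenous rat liver ferritin: 2.8%, i.e. ~3%
```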
GSTs Protect Glutathione Reductase against Irreversible Inhibition by DNDGIC-It is known that DNDGIC irreversibly inactivates glutathione reductase (GR). This reaction was studied in detail by Boese et al. (7), and the x-ray crystal structure of the DNDGIC-inactivated enzyme has been solved by Karplus and co-workers (8). It has been clearly demonstrated in vitro that free DNDGIC at micromolar levels (IC₅₀ = 3-4 µM) irreversibly oxidizes the essential thiol group of Cys-63 to sulfinic acid (8). Therefore, we tested whether the complex bound to GST was still able to inactivate GR. Exposure of rat hepatocytes to 1 mM GSNO did not cause any detectable inhibition of GR even after 120 min of incubation (not shown), although the estimated cytosolic concentration of GST-DNGIC reached 0.16 mM. To prove the involvement of GSTs in this protection and to evaluate the maximal defense capacity of the cell, we compared the effects of increasing amounts of DNDGIC added to rat liver homogenate. Inactivation of GR is observed only when the GST activity is almost reduced to zero, i.e. when the "buffer" capacity of GST is exhausted (Fig. 6a). In a different experiment, a fixed quantity of DNDGIC, over-stoichiometric with respect to the endogenous GST pool, was incubated in homogenate previously supplemented with variable amounts of GSTA1-1. Also in this case, the activity of GR is unaffected as long as the fixed DNDGIC concentration remains under-stoichiometric to the total GST level (Fig. 6b). These results demonstrate that Alpha GST acts as a potent protection system and allow us to predict that DNDGIC in hepatocytes may in theory accumulate to a level of up to 0.6-0.7 mM without doing any significant damage to the cell.
DISCUSSION
This study gives a definitive demonstration of the profound interaction between the natural NO carrier DNDGIC and GSTs in intact hepatocytes and proposes a possible physiological significance. A first important finding is that this iron complex, when present at levels substoichiometric to GSTs, is almost exclusively sequestered by endogenous GSTs, even in a very complex protein milieu like a crude homogenate. In particular, in rat liver Alpha GSTs are the prime target of this interaction, whereas the Mu GSTs become effective only when the high affinity Alpha sites are saturated. This behavior could be predicted on the basis of the different dissociation constants for DNDGIC determined previously for each GST isoenzyme under purified conditions (11), but the present data demonstrate that the binding properties of these enzymes are unchanged in a complex protein system that approximates the in vivo conditions. Obviously, we cannot exclude that a small amount of DNDGIC may bind to other proteins, but we can conclude that more than 95% of the complex is bound to GST in a 1:1 stoichiometric interaction.
In addition, we show that DNDGIC is formed and subsequently stabilized by GSTs in a similar way both in a crude liver homogenate and in intact hepatocytes exposed to GSNO. The unique stoichiometric binding/inhibition pattern of the GST-complex interaction reveals that the DNIC species formed in the cells is indeed DNDGIC. This conclusion is important because the identity of intracellular DNIC species had never been established before. In hepatocytes, DNDGIC is found entirely bound to GST and is never observed as the free complex. Preliminary data from our laboratory indicate that DNDGIC is formed and binds to GSTs in other types of cells as well. Considering that GSTs are ubiquitous, and that Pi class GSTs also bind DNDGIC with high affinity, we propose that all the immobilized DNICs detected in biological systems through their characteristic EPR signal at g = 2.03 might be ascribed to intracellular DNDGIC bound to GSTs.
Because of the very high amounts of GSTs in hepatocytes, the final level of DNDGIC is always substoichiometric to the GST pool. Inhibition data confirm that Alpha GST is primarily involved in this interaction in intact cells as well. Interestingly, in the liver homogenate, a free form of DNIC produced by GSNO can be observed only when exogenous iron is added in amounts exceeding the GST concentration. Thus iron availability seems to be a crucial factor for DNIC accumulation in these multicomponent systems. In fact, experiments performed with purified horse ferritin indicate that this protein is the likely iron source for DNIC formation, but the iron released is only 0.3% of its total iron content. Although rat liver ferritin displays a 10-fold higher propensity for iron mobilization (about 3% of its iron content), the level of DNDGIC never exceeds the GST concentration. In any case, iron extraction from either ferritin or transferrin and the formation of DNDGIC depend only on the presence of NO together with high levels of GSH; no other cellular component is required for the reaction. This means that DNDGIC is generated spontaneously, and its accumulation in hepatocytes exposed to a flux of NO simply cannot be avoided. It appears likely that NO-mediated mobilization of iron from ferritin to form DNDGIC could somehow be related to GST expression, ensuring that practically all DNDGIC is bound to GSTs. This may be critical for cell survival, as DNDGIC is a potent inhibitor of glutathione reductase, causing the irreversible oxidation of Cys-63, a residue essential for catalysis. As shown here, this inactivation occurs only when DNDGIC is present as the free compound, i.e., when its concentration exceeds the binding capacity of the GST pool (0.6-0.8 mM). Thus GSTs, and in particular the Alpha class enzymes, represent a strong defense system in case of NO overloading or insult. Inhibition of GR is not the sole detriment arising from free DNDGIC. This complex may also be extruded from some cells through MRP1 pumps, causing iron and GSH depletion. The NO cytotoxicity promoted by macrophages against tumor cells (MCF7-VP) has been proposed to be due to this extrusion (36). In hepatocytes, the high level of GSTs and the strong affinity of the Alpha GST seem to oppose the MRP1-mediated extrusion of DNDGIC efficiently, as also suggested by the prolonged persistence (hours) of the DNDGIC-GST complex inside the cells. Tumor cells that express lower levels of GSTs, and typically the Pi class GST with its lower affinity for DNDGIC, are likely less efficient in retaining the complex.

FIGURE 5. DNDGIC formation from horse ferritin, GSH, and GSNO. a, horse spleen ferritin was incubated in 1 ml of 10 mM GSH and 1 mM GSNO in 0.1 M potassium phosphate buffer, pH 7.4, at 25 °C. At various times, DNDGIC was measured by EPR. Ferritin was used at a final concentration of 1.4 mg/ml (estimated 4 mM total iron) or 2.8 mg/ml (estimated 8 mM total iron); the iron extracted by GSH and GSNO is about 0.3%. b, horse spleen ferritin (3 mg/ml) was incubated with variable amounts of GSNO and 10 mM GSH in 0.1 M potassium phosphate buffer, pH 7.4. At various times, the rate of DNDGIC formation was measured by EPR analysis or by the extent of GST inhibition, as described previously (10). c, variable amounts of horse spleen ferritin were incubated with 1 mM GSNO and 10 mM GSH in 0.1 M potassium phosphate buffer, pH 7.4. At various times, the rate of DNDGIC formation was measured by EPR analysis or by the extent of GST inhibition. d, rat liver homogenate was supplemented with 1.4 or 2.8 mg/ml horse spleen ferritin and incubated with 10 mM GSH and 1 mM GSNO in 0.1 M potassium phosphate buffer, pH 7.4. At various times, the amount of DNDGIC was measured by EPR spectroscopy.
The results shown in this study may also explain the beneficial effect of NO against iron-mediated oxidative stress observed previously in rat hepatocytes. Increased levels of labile iron (because of iron overload or ethanol exposure) make the cell more susceptible to oxidative stress. NO lowers the availability of labile iron through DNDGIC formation (4). We can now say that this benefit is possible only because GSTs protect GR against the otherwise lethal activity of DNDGIC and, at the same time, because GST binding prevents the extrusion of free DNDGIC that would cause iron depletion. Scheme 1 illustrates the basic principles of this protective mechanism. In this context it is interesting that preliminary results indicate that the sensitivity to NO of some parasites, such as Plasmodium falciparum, could be related to the prevalent expression in these organisms of GST classes with little or no affinity for DNDGIC. Overall, these results depict a scenario in which the cytotoxic effects of NO in the cell could be determined by the intracellular levels of GSTs and by their intrinsic affinity for DNDGIC. | 2018-04-03T03:18:25.675Z | 2007-03-02T00:00:00.000 | {
"year": 2007,
"sha1": "c8afdc98521e8192791ad37af09d366148053621",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/282/9/6364.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "65129a59364340774ca9ae1ac79c98183e1fa809",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
226251935 | pes2o/s2orc | v3-fos-license | Structural Insights into β-arrestin/CB1 Receptor Interaction: NMR and CD Studies on Model Peptides
Activation of the cannabinoid CB1 receptor induces different cellular signaling cascades through coupling to different effector proteins (G-proteins and β-arrestins), triggering numerous therapeutic effects. Conformational changes and rearrangements at the intracellular domain of this GPCR receptor that accompany ligand binding dictate the signaling pathways. The GPCR-binding interface for G proteins has been extensively studied, whereas β-arrestin/GPCR complexes are still poorly understood. To gain knowledge in this direction, we designed peptides that mimic the motifs involved in the putative interacting region: β-arrestin1 finger loop and the transmembrane helix 7-helix 8 (TMH7-H8) elbow located at the intracellular side of the CB1 receptor. According to circular dichroism and NMR data, these peptides form a native-like, helical conformation and interact with each other in aqueous solution, in the presence of trifluoroethanol, and using zwitterionic detergent micelles as membrane mimics. These results increase our understanding of the binding mode of β-arrestin and CB1 receptor and validate minimalist approaches to structurally comprehend complex protein systems.
Introduction
The therapeutic effects of cannabinoids have long been known; however, it was not until a few decades ago that their mechanism of action was elucidated. In the late 1980s, receptors targeted by phytocannabinoids were identified in rat brain [1]. Subsequent cloning of this G protein-coupled receptor (GPCR) consolidated the discovery of the first cannabinoid receptor, CB1 [2]. CB1 is highly expressed throughout the central nervous system, being one of the most abundant GPCRs in the human brain [3]. CB1 receptors are also found in the peripheral nervous system, as well as in other organs and tissues including endocrine glands, spleen, heart or the gastrointestinal tract. This expression pattern confers upon CB1 a relevant role in the modulation of numerous physiopathological processes including memory processing, pain regulation or neurodegeneration [3][4][5][6]. A growing body of research supports the notion that CB1 represents a promising target for the development of novel drugs for the treatment of diverse pathologies including neurodegenerative, cancer or metabolic disorders [7][8][9][10][11][12][13][14][15].
Concerning β-arrestin1, it has been reported that its finger loop region (FL, Figure 1) is a critical determinant of arrestin coupling to GPCRs [51-56]. The finger loop region was first identified by sequence alignment of several β-arrestins (Supplementary Table S1). Then, the potential effects of including the preceding and following residues on helical tendency and solubility were examined using the AGADIR and ProtParam webservers [57,58]. The sequence for the β-arrestin1 model peptide was selected as the shortest sequence having the highest helical tendency and being the most soluble at the neutral (or slightly acidic) pH values used in the NMR study (note that peptide solubility is usually minimal at the isoelectric point, pI; Supplementary Table S1). This β-arrestin1 model peptide (β-arr1 63-76) includes the residue preceding the finger loop motif and the three residues following it, as indicated in Figure 1. The β-arrestin finger loop is structurally diverse in the reported GPCR/β-arrestin complexes, adding interest to the study of the structure of this region by itself.
As shown in this figure, the general topology of GPCRs encompasses seven transmembrane helices (TMH) connected by intracellular and extracellular loops and a short cytoplasmic helical domain (H8) extending from TMH7. This helical segment is oriented in parallel to the membrane surface and perpendicularly to the TMH bundle.
The scarce structural knowledge available on GPCR/arrestin complexes indicates, as seen in the model of CB1/β-arrestin1 (Figure 1), that the β-arrestin1 finger loop may insert into the bundle, on the intracellular side, close to the TMH7-H8 elbow area [51,52,55,59]. Therefore, the sequence for the CB1 peptide encompasses the TMH7-H8 region, located at the intracellular side of the CB1 receptor. As in the case of β-arr1 63-76, the specific peptide sequence (CB1 391-409; Figure 1) was selected as the shortest peptide with the highest α-helical propensity and solubility upon analysis using the AGADIR and ProtParam webservers [57,58] (Supplementary Table S2).
To avoid effects of the ionisable amino and carboxylate groups, the N- and C-termini of the two peptides were acetylated and amidated, respectively.
Structural Behavior of the Free CB1 and β-Arrestin1 Peptides
The conformations of the CB1 TMH7-H8 and β-arrestin1 peptides were examined independently in aqueous solution, in the presence of 30% trifluoroethanol (TFE), a secondary structure enhancer [60], and in zwitterionic detergent micelles (dodecylphosphocholine, DPC) used as membrane mimics.
We first characterized the structural behavior of the two peptides using circular dichroism (CD). As depicted in Figure 2, the CD spectra of the two peptides in aqueous solution showed a minimum at about 197 nm, indicating that they were mainly random coils, whereas they tended to form helical conformations in the presence of TFE or DPC micelles, as shown by the observed maximum below 195 nm and the minima at 208 nm and 222 nm. The helix percentages estimated from the ellipticity at 222 nm ([θ]222nm) are given in Table 1.
Notes to Table 1: [a] CD-estimated helix percentages are an average over all the peptide residues, whereas NMR-estimated helix percentages refer only to the residues within the helix. [b] Values measured at 5 °C. [c] Reported errors are standard deviations for the mean of the percentages obtained from the ΔδHα and ΔδCα values.

To gain further structural information, the peptides were characterized using NMR. The NMR spectra of the two peptides were fully assigned under the three experimental conditions, i.e., aqueous solution, in the presence of TFE, and in DPC micelles (chemical shifts are reported in the Supplementary Material: Tables S3-S8).
Most residues of the two peptides show negative ΔδHα and positive ΔδCα values (Figures 3 and S3), which are large in magnitude in TFE and DPC micelles and small in aqueous solution. In agreement with the CD data, this indicates that the peptides form helical structures in TFE and DPC and have only a low helical tendency in aqueous solution. A detailed examination of the profiles showed that CB1 391-409 presents two helical regions (P394-K402 and L404-F408) separated by residue D403, which shows positive ΔδHα values in TFE and DPC (Figure 3) and negative ΔδCα values in aqueous solution and in DPC (Figure S3). The helical region in the β-arr1 63-76 peptide extends from E66 to T74 in aqueous solution and in TFE, and from R65 to T74 in DPC. The percentages of helical populations estimated from ΔδHα and ΔδCα are given in Table 1.
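The helix populations quoted in Table 1 can be illustrated with a minimal sketch of the conformational-shift approach of ref. [68]: the mean Δδ over the helical residues is compared with a full-helix reference shift. The reference values and example shifts below are, respectively, common literature numbers and hypothetical data, not values from this work.

FULL_HELIX_D_HA = -0.39  # ppm, Halpha conformational shift assumed for a 100% helix
FULL_HELIX_D_CA = +3.1   # ppm, Calpha conformational shift assumed for a 100% helix

def helix_percent(delta_shifts, full_helix_value):
    """Mean conformational shift over the helical residues, expressed as a
    percentage of the assumed full-helix reference value."""
    return 100.0 * (sum(delta_shifts) / len(delta_shifts)) / full_helix_value

d_ha = [-0.12, -0.18, -0.20, -0.15, -0.10]  # hypothetical observed-minus-coil Halpha shifts
print(f"helix population ~ {helix_percent(d_ha, FULL_HELIX_D_HA):.0f}%")  # ~38%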
Further evidence for helix formation in the two peptides came from the sets of NOEs present in TFE and DPC, which included those characteristic of helical structures, i.e., sequential NN(i,i+1) and nonsequential αN(i,i+3) and αβ(i,i+3) NOEs. Examples of these NOEs are shown in Supplementary Figures S1 and S2.
The preferred structures of the two peptides were calculated on the basis of distance and angle restraints derived, respectively, from the NOEs and the chemical shifts measured in TFE and in DPC, using the program CYANA (see Materials and Methods). The quality of the resulting structures is good (see Ramachandran plots in Supplementary Figure S4) and they are well defined (see RMSD values in Table S12). Figures 4 and 5 show overlays of the 20 lowest-target-function conformers for the CB1 391-409 and β-arr1 63-76 peptides, as well as a representative conformer of each ensemble. In agreement with the qualitative analysis of the ΔδHα and ΔδCα profiles, CB1 391-409 in both TFE and DPC exhibits two helical regions, i.e., a long helix spanning residues P394 to K402 and a short one spanning residues L404 to F408 (Figures 3A and 4). The angle between the two helical regions shows some variability among the conformers within the structural ensembles, but they are approximately perpendicular to each other (94° ± 15° in TFE; 75° ± 30° in DPC; Figure 4), as in the crystal structure of free CB1 (97° in PDB ID: 5XRA). TMH7, which ends at residue L399 in the crystal structure of the full-length CB1 receptor, extends up to residue K402 in the CB1 391-409 peptide both in TFE and in DPC. This result agrees with the previously reported structure of another CB1-derived peptide containing the same region [61]. Tyukhtenko and coworkers studied the structure of the TMH7-H8 span (CB1 377-416), obtaining a long hydrophobic α-helical segment and a short amphipathic α-helix (H8) oriented orthogonally to TMH7.
Our structural studies demonstrated that the β-arr1 63-76 peptide also formed helical conformations in DPC and TFE (Figure 5). In agreement with our observations, various studies have indicated that in its activated state, the β-arrestin finger loop adopts helical conformations [55,56,62]. However, it is important to note that conformational plasticity of the finger loop was observed in previously reported GPCR/arrestin complexes [51-55]. While in the rhodopsin/arrestin complexes the finger loop forms a helical domain [54,55], in the recently solved muscarinic 2 receptor/arrestin complex [53] the finger loop adopts an extended loop configuration. This suggests that it can be ordered in different conformations or adopt diverse relative orientations in order to enable the recognition of a wide variety of GPCRs.
Characterization of the CB1 and β-arrestin1 Interface
In order to elucidate whether the CB1 391-409 and β-arr1 63-76 peptides are prone to interact, we acquired NMR spectra of the peptide mixture under the same conditions as for the isolated peptides. All the residues in the mixtures were unequivocally assigned (Supporting Information Tables S9-S11). As seen in the spectral regions shown in Figure 6 (see also Figures S5-S7), some cross-peaks are shifted in the spectra of the peptide mixture relative to those of the isolated peptides under the three experimental conditions examined. This result provides evidence that these two short peptides are able, by themselves, to interact with each other.
In aqueous solution, some cross-peaks belonging to CB1 391-409 showed significant differences in the mixture relative to the isolated peptide (the most affected residues being D403 and H406; Figure 7A), whereas those of β-arr1 63-76 were hardly affected (Figure 7A). This suggests that the interaction of these two peptides in aqueous solution requires some structural rearrangement in CB1 391-409, but not in β-arr1 63-76, whose conformational equilibrium remains essentially unaffected.
These results show that short model peptides encompassing residues belonging to the putative contact region in the model of the CB1/β-arrestin1 complex (Figure 1) are able to interact. Thus, these short sequences seem to contain enough information to recognize each other. How they interact, however, seems to depend on the environment. The conformational rearrangement in CB1 391-409 is likely similar in water and in TFE, since the affected residues are essentially the same. By contrast, upon interaction with CB1 391-409, β-arr1 63-76 undergoes some reorganization in the presence of TFE but hardly changes in water.
In the presence of TFE, chemical shift differences are observed in both the β-arr1 63-76 and CB1 391-409 moieties when comparing the independent peptides with the mixture (Figure 7B,C). These changes are most marked in residues D403 and H406 of CB1 391-409 (which are also affected in aqueous solution; Figure 7A) and in E66, D67 and D69 of β-arr1 63-76. In DPC, the mixtures showed weighted NMR chemical shift differences in residues R400 and K402 of CB1 391-409 and R65, E66, L68, D69 and L73 of β-arr1 63-76. Thus, in the presence of DPC micelles the two peptides might also experience some conformational rearrangements, albeit somewhat different from those in water and TFE. These conformational changes might play a role in CB1-mediated β-arrestin1 activation. Table S13 summarizes the residues whose chemical shifts are affected upon interaction under each experimental condition. As previously mentioned, structural rearrangements of the arrestin finger loop have already been observed depending on the environment, providing evidence for the plasticity needed to couple to diverse GPCRs [51,53,55]. Table S14 displays the sequence diversity at the interface region of the GPCRs elucidated in complex with arrestins compared to CB1. This illustrates the ability of the finger loop domain to adapt conformationally to the interacting partner.
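The weighted chemical shift differences mentioned above combine 1H and 13C perturbations into a single value per residue. A minimal sketch of one common weighting scheme follows; the 13C scaling factor and the example shifts are assumptions for illustration, not the exact scheme used in this work.

import math

def weighted_csp(d_h_ppm, d_c_ppm, c_weight=0.25):
    """Combine 1H and scaled 13C shift changes into one perturbation value."""
    return math.sqrt(d_h_ppm**2 + (c_weight * d_c_ppm)**2)

# Hypothetical free-peptide vs. mixture shifts for one residue:
free_h, mix_h = 8.21, 8.27   # 1H shift (ppm)
free_c, mix_c = 56.3, 56.8   # 13C shift (ppm)
print(f"weighted CSP = {weighted_csp(mix_h - free_h, mix_c - free_c):.3f} ppm")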
To visualize how the peptides contact each other, and whether they reproduce the mode of interaction of the full-length proteins, we proceeded to model the complexes. For that purpose, we used the HADDOCK webserver, introducing the structures calculated for the isolated peptides in TFE and in DPC as input. This program requires the definition of interacting residues, defined as active, at the docking interface. These are the amino acids whose resonances show changes in the peptide mixture under each condition (Figure 7). Figure 8 depicts a representative model of the CB1 391-409/β-arr1 63-76 complex in each condition, selected from the cluster with the best HADDOCK docking score. These models exhibit the different arrangements of the peptides depending on the environment. While in DPC β-arr1 63-76 is almost parallel to the H8 portion of CB1 391-409, in TFE β-arr1 63-76 sits perpendicularly to both CB1 391-409 helical domains. The main interactions at the interface of the TFE complex model include hydrogen bonds between K402 and D69 and between H406 and E66, and the interaction formed by the E66 backbone with the D403 side chain (Figure 8A). The DPC complex model is mainly stabilized by hydrogen-bond interactions of R400 with E66 and D69, and of the L68 backbone with the S401 side chain (Figure 8B). In both conditions, there is also a reduction of the solvent-accessible surface area (ASA) of the β-arr1 residues E66 and D69 upon complex formation. These divergences in the peptide arrangement depending on the environment could be due to the conformational plasticity of the studied region. This is in agreement with the structural diversity observed at the GPCR/arrestin finger loop interface of the reported complexes [51-55].
It is worth noting that in the few GPCR-arrestin complexes reported thus far (none of them with CB1 receptors), residues in analogous positions of the GPCR and arrestin play a key role in their interface. For instance, residue D69 in activated β-arrestin1 was shown to directly engage with the elbow region of the β1-adrenergic receptor in a recently elucidated complex [51].
Peptide Synthesis

The designed peptides, with acetylated amino termini and amidated carboxyl termini, were synthesized on demand by CASLO ApS (Denmark). Solid-phase synthesis followed by reverse-phase HPLC purification yielded the desired peptides.
Peptide Numbering
The absolute sequence number of peptide residues was used throughout the article. The Ballesteros−Weinstein numbering system for GPCR amino acid residues is provided in Figure 1 to facilitate the identification of key GPCR positions [63].
CD Spectroscopy
CD spectra of the peptides were recorded using a J-815 spectropolarimeter (JASCO, Groß-Umstadt, Germany). Stock solutions of each peptide were prepared at a nominal concentration of 1 mg mL−1 in milliQ water. Samples in DPC micelles were prepared by dilution of a 30 mM DPC stock solution in milliQ water. In both conditions, the final peptide concentration was 50 µM. Measurements were recorded at 5 °C in quartz glass cells (Suprasil, Hellma, Müllheim, Germany) of 1 mm path length, between 260 and 190 nm at 0.1 nm intervals.
Isothermal spectra for these samples were acquired at a scan speed of 50 nm min−1 with a response time of 4 s and a 1 nm bandwidth. At least four scans were averaged for each sample and for the baseline of the corresponding peptide-free sample. Upon baseline correction, CD data were processed with the adaptive smoothing method integrated in the Jasco Spectra Analysis software. CD data are given in molar ellipticity units ([θ], deg cm2 dmol−1) for the isolated peptides and in ellipticity units (θ, mdeg) for the mixtures.
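The helix percentages in Table 1 were derived from [θ]222 via Equation (1), described next. As an illustration, the sketch below uses a commonly applied chain-length-corrected reference value for a 100% helix; the exact constants of Equation (1) may differ, so this particular formula is an assumption.

def helix_percent_from_cd(theta_222, n_residues):
    """Helix content from mean residue ellipticity at 222 nm, using the
    common reference [theta]_H = -40000 * (1 - 2.5/n) deg cm^2 dmol^-1."""
    theta_full_helix = -40000.0 * (1.0 - 2.5 / n_residues)
    return 100.0 * theta_222 / theta_full_helix

# Example: a 19-residue peptide (the length of CB1 391-409) with a
# hypothetical [theta]222 of -8000 deg cm^2 dmol^-1:
print(f"{helix_percent_from_cd(-8000.0, 19):.0f}% helix")  # ~23%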
Estimations of the helix percentages for the free peptides were obtained from the experimental [θ] value at 222 nm ([θ]222nm, deg cm2 dmol−1) by applying Equation (1).

NMR Sample Preparation

The pH was measured using a glass micro-electrode and adjusted to 5.5 by addition of NaOD or DCl. Samples were placed in 5 mm NMR tubes, and 2 µL of sodium 2,2-dimethyl-2-silapentane-5-sulfonate (DSS) was added as an internal reference for 1H chemical shifts.
Spectra Acquisition
A Bruker Avance-600 spectrometer (600 MHz) was used to record the NMR spectra. Standard techniques were used to acquire the 2D spectra: COSY (phase-sensitive correlated spectroscopy), TOCSY (total correlation spectroscopy), and NOESY (nuclear Overhauser enhancement spectroscopy). Water signal suppression was achieved by presaturation or WATERGATE [64]. Mixing times of 60 ms were used to record the TOCSY spectra and 150 ms for the NOESY spectra. 1H-13C HSQC (heteronuclear single quantum coherence) spectra were acquired at 13C natural abundance. The IUPAC-IUB recommended 1H/13C chemical shift ratio was employed to indirectly reference the 13C chemical shifts [65]. Depending on the experimental conditions, peptide samples were examined at 5 and/or 25 °C. Data processing was accomplished using the TOPSPIN software (Bruker Biospin, Karlsruhe, Germany).
Spectra Assignment
The well-established sequential assignment methodology based on homonuclear spectra [66] was used to assign the NMR spectra of each sample. This was done using the tools provided by the NMR assignment program SPARKY (NMRFAM-Sparky version 1.4) [67]. 13C resonances were assigned based on the cross-peaks observed in the 1H-13C HSQC spectra. The 1H and 13C chemical shifts are listed in the Supporting Tables S3-S11 and have been deposited at the BioMagResBank (http://www.bmrb.wisc.edu) with accession codes BMRB ID: 50372-50377 and 50382-50384.
Estimation of Helix Populations
Helix populations were obtained from the Hα and 13Cα chemical shifts as previously described [68]. The errors in the populations estimated from the Hα and 13Cα chemical shifts are approximately 3% and 7%, respectively, assuming experimental errors of 0.01 and 0.1 ppm in the measurement of the 1H and 13C chemical shifts.
Structure Calculation
Structure calculations for the studied peptides were performed using the iterative procedure for automatic NOE assignment integrated in the CYANA 3.97 program [69]. The CYANA algorithm uses an iterative process of seven cycles, in which NOEs are automatically assigned by a probabilistic treatment and structures are calculated from them. The program computes 100 conformers per cycle and retains the 20 structures with the lowest target function values.
The assigned chemical shifts, the integrated NOE cross-peaks (as observed in the NOESY spectra), and the ϕ and ψ dihedral angle restraints (obtained using the TALOS-N webserver [70]) were used as experimental input data for the structure calculations (Table S12).
The Maestro software, integrated in the Schrödinger 2018 package (Schrödinger Inc., Portland, OR, USA), and the MOLMOL program [68] were used to visualize and examine the final ensembles of the 20 lowest target function conformers. The protein preparation wizard implemented in Maestro was used to assess their quality and ensure structural correctness.
NMR-Driven Docking
A model of the CB1/β-arrestin1 interaction complex was built using the HADDOCK webserver (http://milou.science.uu.nl/services/HADDOCK2.2/) [71,72]. The PDB coordinates determined herein for the solution structure of each peptide were used as input. The active residues at the docking interface were those whose NMR signals differed significantly between the free peptides and the mixture. These active residues guide the search for the best mode of interaction between the two input molecules. HADDOCK performs a rigid-body energy minimization to cluster the complex models. The 200 complex models with the lowest energy values were clustered and then refined using semiflexible docking and explicit water solvation. Representative complexes were those showing the best HADDOCK docking scores.
Conclusions
In the search for improved therapeutics targeting CB1 receptors, biased ligands are currently a major hope, and a major challenge, for avoiding undesired effects while optimizing the beneficial outcome. The design of these compounds clearly depends on an in-depth structural understanding of the GPCR-effector mechanism.
Since the G-protein interaction with CB1 has already been extensively explored [49,50], in this work we aim to provide insights into the CB1/β-arrestin1 interface. This arrestin isoform was chosen because it can provoke G protein-independent activation of the ERK signaling pathway [27]. For this purpose, based on reported complexes of β-arrestin with other GPCRs, we identified a putative binding region for the β-arrestin1 finger loop in CB1. We characterized the structure of the CB1 TMH7-H8 elbow region and of the β-arrestin1 finger loop, as well as their interaction, using model peptides. The structural data obtained from CD and NMR studies indicated that both peptides have a slight tendency to be helical in aqueous solution, with the helical conformations being greatly stabilized in the presence of TFE and DPC micelles. It should be noted that TFE is a secondary structure enhancer, which has been shown to stabilize both helices and β-sheets [60,73], and that amphipathic structures, helical or not, seem to be favored in DPC micelles [74]. NMR characterization of CB1 391-409 confirmed the formation of two distinct, orthogonally oriented helical motifs mimicking the corresponding regions of TMH7 and H8. Therefore, this short peptide is able to maintain, at least partially, the structure of the full-length protein. Concerning the β-arr1 63-76 finger loop model peptide, it tended to adopt helical conformations, in agreement with some of the reported activated β-arrestins [54-56,62], but not with others in which the finger loop is not helical [51,53]. The fact that the helix stability of the β-arr1 63-76 finger loop is low might be related to the plasticity of this region, which must adopt diverse structures in order to adapt to its partner. Thus, this short peptide would be reproducing the structural behavior of the full-length protein.
More interestingly, as observed in the spectra of the peptide mixtures, residues at the TMH7-H8 elbow can interact with the β-arrestin1 finger loop domain. This structural information is in agreement with the few previously reported structures of β-arrestins in complex with other class A GPCRs, such as the rhodopsin or β1-adrenergic receptors [51-53,55]. Structural changes at this intracellular receptor region may suggest that the extracellular domain of the TMH1-2-7 region is involved in the binding of CB1 β-arrestin1-biased ligands. Therefore, this information may provide further insights for the design of novel CB1 ligands with optimized therapeutic outcomes.
In summary, our results show that short peptides encompassing the sequences of the TMH7-H8 intracellular domain and the β-arrestin1 finger loop tend to adopt the structural features of the full-length proteins and are able to interact with each other in a way that parallels the putative CB1/β-arrestin1 interface, as deduced from other GPCR/arrestin complexes. Apart from providing structural insights into CB1/β-arrestin1 recognition, our findings might open a way towards the selective blocking of the β-arrestin1 pathway. Further studies using CB1 391-409 and β-arr1 63-76 mutants and considering TMH6 and the intracellular loops will be carried out in order to fully unravel the key molecular features involved in CB1 recognition of the β-arrestin1 finger loop domain, which would evidently also be clarified if the structure of the whole CB1/β-arrestin1 complex is determined in the future.
Acknowledgments:
We acknowledge support of the publication fee by the CSIC Open Access Publication Support Initiative through its Unit of Information Resources for Research (URICI). The NMR experiments were performed in the "Manuel Rico" NMR laboratory, LMR, CSIC, a node of the Spanish Large-Scale National Facility ICTS R-LRB.
Conflicts of Interest:
The authors declare no conflict of interest. | 2020-11-05T09:08:26.035Z | 2020-10-30T00:00:00.000 | {
"year": 2020,
"sha1": "819d9f9d07f6dfa9b0302b3fa57368526ffb5606",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/21/21/8111/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b64012c6545261f42a2d0a339a809caabecd240a",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
14598745 | pes2o/s2orc | v3-fos-license | Word Sense Disambiguation Improves Statistical Machine Translation
Recent research presents conflicting evidence on whether word sense disambiguation (WSD) systems can help to improve the performance of statistical machine translation (MT) systems. In this paper, we successfully integrate a state-of-the-art WSD system into a state-of-the-art hierarchical phrase-based MT system, Hiero. We show for the first time that integrating a WSD system improves the performance of a state-of-the-art statistical MT system on an actual translation task. Furthermore, the improvement is statistically significant.
Introduction
Many words have multiple meanings, depending on the context in which they are used. Word sense disambiguation (WSD) is the task of determining the correct meaning or sense of a word in context. WSD is regarded as an important research problem and is assumed to be helpful for applications such as machine translation (MT) and information retrieval.
In translation, different senses of a word w in a source language may have different translations in a target language, depending on the particular meaning of w in context. Hence, the assumption is that in resolving sense ambiguity, a WSD system will be able to help an MT system to determine the correct translation for an ambiguous word. To determine the correct sense of a word, WSD systems typically use a wide array of features that are not limited to the local context of w, and some of these features may not be used by state-of-the-art statistical MT systems.
To perform translation, state-of-the-art MT systems use a statistical phrase-based approach (Marcu and Wong, 2002; Och and Ney, 2004), treating phrases as the basic units of translation. In this approach, a phrase can be any sequence of consecutive words and is not necessarily linguistically meaningful. Capitalizing on the strength of the phrase-based approach, Chiang (2005) introduced a hierarchical phrase-based statistical MT system, Hiero, which achieves significantly better translation performance than Pharaoh (Koehn, 2004a), a state-of-the-art phrase-based statistical MT system.
Recently, some researchers have investigated whether performing WSD can help to improve the performance of an MT system. Carpuat and Wu (2005) integrated the translation predictions from a Chinese WSD system (Carpuat et al., 2004) into a Chinese-English word-based statistical MT system using the ISI ReWrite decoder (Germann, 2003). Though they acknowledged that directly using English translations as word senses would be ideal, they instead predicted the HowNet sense of a word and then used the English gloss of that HowNet sense as the WSD model's predicted translation. They did not incorporate their WSD model or its predictions into their translation model; rather, they used the WSD predictions either to constrain the options available to their decoder or to postedit its output. Based on their experiments, they reported the negative result that WSD decreased the performance of MT.
In another work (Vickrey et al., 2005), the WSD problem was recast as a word translation task. The translation choices for a word w were defined as the set of words or phrases aligned to w, as gathered from a word-aligned parallel corpus. The authors showed that they were able to improve their model's accuracy on two simplified translation tasks: word translation and blank-filling.
Recently, Cabezas and Resnik (2005) experimented with incorporating WSD translations into Pharaoh, a state-of-the-art phrase-based MT system. Their WSD system provided additional translations to the phrase table of Pharaoh, which fired a new model feature, so that the decoder could weigh the additional alternative translations against its own. However, they could not automatically tune the weight of this feature in the same way as the other features. They obtained a relatively small improvement, and no statistical significance test was reported to determine whether the improvement was statistically significant.
Note that the experiments in (Carpuat and Wu, 2005) did not use a state-of-the-art MT system, while the experiments in (Vickrey et al., 2005) were not done using a full-fledged MT system and the evaluation was not on how well each source sentence was translated as a whole. The relatively small improvement reported by Cabezas and Resnik (2005) without a statistical significance test appears to be inconclusive. Considering the conflicting results reported by prior work, it is not clear whether a WSD system can help to improve the performance of a state-of-the-art statistical MT system.
In this paper, we successfully integrate a state-of-the-art WSD system into the state-of-the-art hierarchical phrase-based MT system, Hiero (Chiang, 2005). The integration is accomplished by introducing two additional features into the MT model which operate on the existing rules of the grammar, without introducing competing rules. These features are treated, both in feature-weight tuning and in decoding, on the same footing as the rest of the model, allowing the model to weigh the WSD predictions against other pieces of evidence so as to optimize translation accuracy (as measured by BLEU). The contribution of our work lies in showing for the first time that integrating a WSD system significantly improves the performance of a state-of-the-art statistical MT system on an actual translation task.
In the next section, we describe our WSD system. Then, in Section 3, we describe the Hiero MT system and introduce the two new features used to integrate the WSD system into Hiero. In Section 4, we describe the training data used by the WSD system. In Section 5, we describe how the WSD translations provided are used by the decoder of the MT system. In Section 6 and 7, we present and analyze our experimental results, before concluding in Section 8.
Word Sense Disambiguation
Prior research has shown that using Support Vector Machines (SVM) as the learning algorithm for WSD achieves good results (Lee and Ng, 2002). For our experiments, we use the SVM implementation of (Chang and Lin, 2001), as it is able to work on multi-class problems and to output the classification probability for each class. Our implemented WSD classifier uses the knowledge sources of local collocations, parts-of-speech (POS), and surrounding words, following the successful approach of (Lee and Ng, 2002). For local collocations, we use three features, w−1w+1, w−1, and w+1, where w−1 (w+1) is the token immediately to the left (right) of the current ambiguous word occurrence w. For parts-of-speech, we use three features, P−1, P0, and P+1, where P0 is the POS of w, and P−1 (P+1) is the POS of w−1 (w+1). For surrounding words, we consider all unigrams (single words) in the surrounding context of w. These unigrams can be in a different sentence from w. We perform feature selection on surrounding words by including a unigram only if it occurs three or more times in some sense of w in the training data.
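A minimal sketch of these knowledge sources is given below (Python; tokenization and POS tagging are assumed to be done elsewhere, and all names are illustrative rather than taken from the actual implementation).

from collections import Counter

def select_unigrams(training_examples, min_count=3):
    """Keep a surrounding-word unigram only if it occurs >= min_count
    times in some sense of the target word in the training data."""
    per_sense = {}
    for sense, context_tokens in training_examples:
        per_sense.setdefault(sense, Counter()).update(context_tokens)
    return {w for c in per_sense.values() for w, n in c.items() if n >= min_count}

def extract_features(tokens, pos_tags, i, kept_unigrams):
    """Feature dict for the ambiguous word at position i in its sentence."""
    w_prev = tokens[i - 1] if i > 0 else "<s>"
    w_next = tokens[i + 1] if i + 1 < len(tokens) else "</s>"
    p_prev = pos_tags[i - 1] if i > 0 else "<s>"
    p_next = pos_tags[i + 1] if i + 1 < len(tokens) else "</s>"
    feats = {f"COLL:{w_prev}_{w_next}": 1, f"COLL_L:{w_prev}": 1,
             f"COLL_R:{w_next}": 1, f"POS_-1:{p_prev}": 1,
             f"POS_0:{pos_tags[i]}": 1, f"POS_+1:{p_next}": 1}
    for w in tokens:            # surrounding-word unigrams (same sentence here;
        if w in kept_unigrams:  # the real system also uses neighboring sentences)
            feats[f"SURR:{w}"] = 1
    return feats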
To measure the accuracy of our WSD classifier, we evaluate it on the test data of SENSEVAL-3 Chinese lexical-sample task. We obtain accuracy that compares favorably to the best participating system in the task (Carpuat et al., 2004).
Hiero

Hiero (Chiang, 2005) translates using a synchronous context-free grammar (CFG) whose rules have the form X → ⟨γ, α⟩, where X is a non-terminal symbol, γ (α) is a string of terminal and non-terminal symbols in the source (target) language, and there is a one-to-one correspondence between the non-terminals in γ and α indicated by co-indexation. Hence, γ and α always have the same number of non-terminal symbols. For instance, we could have the following grammar rule:

X → ⟨X 1 , go to X 1 every month to⟩ (2)

where boxed indices represent the correspondences between non-terminal symbols. Hiero extracts the synchronous CFG rules automatically from a word-aligned parallel corpus. To translate a source sentence, the goal is to find its most probable derivation using the extracted grammar rules. Hiero uses a general log-linear model (Och and Ney, 2002) in which the weight of a derivation D for a particular source sentence and its translation is

w(D) = ∏i φi(D)^λi

where φi is a feature function and λi is the weight for feature φi. To ensure efficient decoding, the φi are subject to certain locality restrictions. Essentially, they should be defined as products of functions defined on isolated synchronous CFG rules; however, it is possible to extend the domain of locality of the features somewhat. An n-gram language model adds a dependence on (n−1) neighboring target-side words (Wu, 1996; Chiang, 2007), making decoding much more difficult but still polynomial; in this paper, we add features that depend on the neighboring source-side words, which does not affect decoding complexity at all because the source string is fixed.
In principle we could add features that depend on arbitrary source-side context.
New Features in Hiero for WSD
To incorporate WSD into Hiero, we use the translations proposed by the WSD system to help Hiero obtain a better or more probable derivation during the translation of each source sentence. To achieve this, when a grammar rule R is considered during decoding, and we recognize that some of the terminal symbols (words) in α are also chosen by the WSD system as translations for some terminal symbols (words) in γ, we compute the following features:

• Pwsd(t | s) gives the contextual probability of the WSD classifier choosing t as a translation for s, where t (s) is some substring of terminal symbols in α (γ). Because this probability only applies to some rules, and we don't want to penalize those rules, we must add another feature:

• Ptywsd = exp(−|t|), where t is the translation chosen by the WSD system. This feature, with a negative weight, rewards rules that use translations suggested by the WSD module.
Note that we can take the negative logarithm of the rule/derivation weights and think of them as costs rather than probabilities.
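In cost form, the two features can be sketched as follows; wsd_prob is a hypothetical nested lookup of the classifier's contextual probabilities, and matched holds the (source substring, proposed translation) pairs found for the rule.

import math

def wsd_feature_costs(matched, wsd_prob):
    """Costs of the two WSD features for one rule.
    matched: list of (source_substring, target_translation) pairs."""
    # -log Pwsd(t|s), summed over all matched translations
    cost_pwsd = sum(-math.log(wsd_prob[s][t]) for s, t in matched)
    # -log Ptywsd = |t|; with a negative feature weight this rewards
    # rules that use translations suggested by the WSD module
    cost_pty = sum(len(t.split()) for _, t in matched)
    return cost_pwsd, cost_pty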
Gathering Training Examples for WSD
Our experiments were for Chinese to English translation. Hence, in the context of our work, a synchronous CFG grammar rule X → ⟨γ, α⟩ gathered by Hiero consists of a Chinese portion γ and a corresponding English portion α, where each portion is a sequence of words and non-terminal symbols.
Our WSD classifier suggests a list of English phrases (where each phrase consists of one or more English words) with associated contextual probabilities as possible translations for each particular Chinese phrase. In general, the Chinese phrase may consist of k Chinese words, where k = 1, 2, 3, . . .. However, we limit k to 1 or 2 for experiments reported in this paper. Future work can explore enlarging k.
Whenever Hiero is about to extract a grammar rule whose Chinese portion is a phrase of one or two Chinese words with no non-terminal symbols, we note the location (sentence and token offset) in the Chinese half of the parallel corpus from which the Chinese portion of the rule is extracted. The actual sentence in the corpus containing the Chinese phrase, together with the one sentence before and the one sentence after it, serves as the context for one training example for the Chinese phrase, with the corresponding English phrase of the grammar rule as its translation. Hence, unlike traditional WSD, where the sense classes are tied to a specific sense inventory, our "senses" here consist of the English phrases extracted as translations for each Chinese phrase. Since the extracted training data may be noisy, for each Chinese phrase we remove English translations that occur only once. Furthermore, we only attempt WSD classification for those Chinese phrases with at least 10 training examples.
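A minimal sketch of this filtering step follows; the data layout is illustrative, and the actual gathering happens during Hiero's rule extraction.

from collections import defaultdict

def build_wsd_training(examples, min_translation_count=2, min_examples=10):
    """examples: Chinese phrase -> list of (english_translation, context) pairs.
    Drop translations seen only once; keep phrases with >= 10 examples."""
    training = {}
    for zh_phrase, pairs in examples.items():
        counts = defaultdict(int)
        for en, _ in pairs:
            counts[en] += 1
        kept = [(en, ctx) for en, ctx in pairs if counts[en] >= min_translation_count]
        if len(kept) >= min_examples:
            training[zh_phrase] = kept
    return training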
Using the WSD classifier described in Section 2, we classified the words in each Chinese source sentence to be translated. We first performed WSD on all single Chinese words that are nouns, verbs, or adjectives. Next, we classified the Chinese phrases consisting of two consecutive Chinese words by simply treating the phrase as a single unit. When performing classification, we output the set of English translations with associated context-dependent probabilities, i.e., the probabilities of a Chinese word (phrase) translating into each English phrase, depending on the context of the Chinese word (phrase). After WSD, the ith word ci in every Chinese sentence may have up to three sets of associated translations provided by the WSD system: a set of translations for ci as a single word, a second set for ci−1ci considered as a single unit, and a third set for cici+1 considered as a single unit.
Incorporating WSD during Decoding
The following tasks are done for each rule that is considered during decoding:

• identify Chinese words to suggest translations for
• match suggested translations against the English side of the rule
• compute features for the rule

The WSD system is able to predict translations only for a subset of Chinese words or phrases. Hence, we must first identify which parts of the Chinese side of the rule have suggested translations available. Here, we consider substrings of length up to two, and we give priority to longer substrings.
Next, we want to know, for each Chinese substring considered, whether the WSD system supports the Chinese-English translation represented by the rule. If the rule is finally chosen as part of the best derivation for translating the Chinese sentence, then all the words in the English side of the rule will appear in the translated English sentence. Hence, we need to match the translations suggested by the WSD system against the English side of the rule. It is for these matching rules that the WSD features will apply.
The translations proposed by the WSD system may be more than one word long. In order for a proposed translation to match the rule, we require two conditions. First, the proposed translation must be a substring of the English side of the rule. For example, the proposed translation "every to" would not match the chunk "every month to". Second, the match must contain at least one aligned Chinese-English word pair, but we do not make any other requirements about the alignment of the other Chinese or English words. 1 If there are multiple possible matches, we choose the longest proposed translation; in the case of a tie, we choose the proposed translation with the highest score according to the WSD model.
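The two conditions and the tie-breaking rule can be sketched as follows; the data representations (word lists, (Chinese index, English index) alignment pairs) are assumptions for illustration.

def matches_rule(translation, english_side, zh_indices, alignments):
    """translation: proposed English string; english_side: list of words on the
    rule's English side; zh_indices: positions of the Chinese substring;
    alignments: set of (chinese_index, english_index) pairs for the rule."""
    t = translation.split()
    for start in range(len(english_side) - len(t) + 1):
        if english_side[start:start + len(t)] != t:
            continue  # condition 1: must be a substring of the English side
        covered = range(start, start + len(t))
        # condition 2: at least one aligned Chinese-English pair inside the match
        if any(ci in zh_indices and ei in covered for ci, ei in alignments):
            return True
    return False

def pick_best(candidates):
    """candidates: [(translation, wsd_probability), ...]; prefer the longest
    translation, breaking ties by the WSD model's score."""
    return max(candidates, key=lambda c: (len(c[0].split()), c[1]))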
Define a chunk of a rule to be a maximal substring of terminal symbols on the English side of the rule. For example, in Rule (2), the chunks would be "go to" and "every month to". Whenever we find a matching WSD translation, we mark the whole chunk on the English side as consumed.
Finally, we compute the feature values for the rule. The feature Pwsd(t | s) is the sum of the costs (according to the WSD model) of all the matched translations, and the feature Ptywsd is the sum of the lengths of all the matched translations. Figure 1 shows the pseudocode for the rule scoring algorithm in more detail, particularly with regard to resolving conflicts between overlapping matches.

Input: rule R considered during decoding, with its own associated costR
Lc = list of symbols in the Chinese portion of R
WSDcost = 0
i = 1
while i ≤ len(Lc):
    ci = ith symbol in Lc
    if ci is a Chinese word (i.e., not a non-terminal symbol):
        seenChunk = ∅    // passed by reference to matchWSD
        if (ci is not the last symbol in Lc) and (the (i+1)th symbol is a terminal symbol):
            ci+1 = (i+1)th symbol in Lc
        else:
            ci+1 = NULL
        if (ci+1 != NULL) and ((ci, ci+1) as a single unit has WSD translations):
            WSDc = set of WSD translations for (ci, ci+1) as a single unit,
                   with context-dependent probabilities
            WSDcost = WSDcost + matchWSD(ci, WSDc, seenChunk)
            WSDcost = WSDcost + matchWSD(ci+1, WSDc, seenChunk)
            i = i + 1    // skip the second word of the pair
        else:
            WSDc = set of WSD translations for ci, with context-dependent probabilities
            WSDcost = WSDcost + matchWSD(ci, WSDc, seenChunk)
    i = i + 1
costR = costR + WSDcost

matchWSD(c, WSDc, seenChunk):
    // seenChunk: the set of chunks of R already examined for possible matching WSD translations
    cost = 0
    ChunkSet = set of chunks in R aligned to c
    for chunkj in ChunkSet:
        if chunkj not in seenChunk:
            Candidatewsd = ∅
            for wsdk in WSDc:
                if (wsdk is a sub-sequence of chunkj) and
                   (wsdk contains at least one word of chunkj aligned to c):
                    Candidatewsd = Candidatewsd ∪ { wsdk }
            wsdbest = best matching translation in Candidatewsd against chunkj
            cost = cost + costByWSDfeatures(wsdbest)    // sums the costs of the two WSD features
            add chunkj to seenChunk
    return cost

Figure 1: WSD translations affecting the cost of a rule R considered during decoding.

To illustrate the algorithm given in Figure 1, consider Rule (2). Hereafter, we will use symbols to represent the Chinese and English words in the rule: c1, c2, and c3 will represent the rule's three Chinese words, and e1, e2, e3, e4, and e5 will represent the English words go, to, every, month, and to, respectively. Hence, Rule (2) has two chunks: e1e2 and e3e4e5. When the rule is extracted from the parallel corpus, it has these alignments between the words of its Chinese and English portions: {c1-e3, c2-e4, c3-e1, c3-e2, c3-e5}, which means that c1 is aligned to e3, c2 is aligned to e4, and c3 is aligned to e1, e2, and e5. Although all words are aligned here, in general some of a rule's Chinese or English words may not be associated with any alignments.
In our experiment, c1c2 as a phrase has a list of translations proposed by the WSD system, including the English phrase "every month". matchWSD will first be invoked for c1, which is aligned to only one chunk, e3e4e5, via its alignment with e3. Since "every month" is a sub-sequence of the chunk and also contains the word e3 ("every"), it is noted as a candidate translation. Later, it is determined that the longest candidate translations are two words long. Since, among all the two-word candidate translations, the translation "every month" has the highest translation probability as assigned by the WSD classifier, it is chosen as the best matching translation for the chunk. matchWSD is then invoked for c2, which is aligned to only one chunk, e3e4e5. However, since this chunk has already been examined by c1, with which it is considered as a phrase, no further matching is done for c2. Next, matchWSD is invoked for c3, which is aligned to both chunks of R. The English phrases "go to" and "to" are among the list of translations proposed by the WSD system for c3, and they are eventually chosen as the best matching translations for the chunks e1e2 and e3e4e5, respectively.
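The walkthrough above can be re-enacted in a few lines of runnable Python. The token layout, the alignment encoding, and the probabilities are toy assumptions of ours; only the matching logic (longest proposal first, probability as tie-breaker, per-unit seenChunk) follows the description.

    # Toy re-enactment of the Rule (2) walkthrough; data layout is ours, not the paper's.
    english = ["go", "to", "[X]", "every", "month", "to"]   # e1 e2 <non-terminal> e3 e4 e5
    align = {1: {3}, 2: {4}, 3: {0, 1, 5}}                  # Chinese word -> English positions
    # WSD proposals per Chinese unit: (translation, made-up probability)
    wsd = {(1, 2): [(("every", "month"), 0.6)],
           (3,):   [(("go", "to"), 0.5), (("to",), 0.4)]}

    def rule_chunks(tokens):
        out, run, start = [], [], None
        for i, t in enumerate(tokens):
            if t.startswith("["):
                if run: out.append((start, tuple(run)))
                run, start = [], None
            else:
                if not run: start = i
                run.append(t)
        if run: out.append((start, tuple(run)))
        return out

    def best_match(chunk, proposals, aligned):
        """Longest matching proposal; ties broken by probability."""
        start, toks = chunk
        cands = []
        for tr, p in proposals:
            n = len(tr)
            for off in range(len(toks) - n + 1):
                if toks[off:off + n] == tr and any(start + off + k in aligned for k in range(n)):
                    cands.append((n, p, tr))
        return max(cands)[2] if cands else None

    for unit in [(1, 2), (3,)]:
        seen = set()                                   # reset per Chinese unit, as in Figure 1
        for c in unit:
            for ch in rule_chunks(english):
                span = set(range(ch[0], ch[0] + len(ch[1])))
                if ch in seen or not (align[c] & span):
                    continue
                m = best_match(ch, wsd[unit], align[c])
                if m:
                    print(f"c{c} -> chunk {' '.join(ch[1])!r}: matched {' '.join(m)!r}")
                    seen.add(ch)
    # c1 -> chunk 'every month to': matched 'every month'
    # c3 -> chunk 'go to': matched 'go to'
    # c3 -> chunk 'every month to': matched 'to'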
Experiments
As mentioned, our experiments were on Chinese-to-English translation, in a setup similar to that of (Chiang, 2005). Using the English portion of the FBIS corpus and the Xinhua portion of the Gigaword corpus, we trained a trigram language model with the SRI Language Modelling Toolkit (Stolcke, 2002). Following (Chiang, 2005), we used the version 11a NIST BLEU script with its default settings to calculate BLEU scores (Papineni et al., 2002) based on case-insensitive n-gram matching, where n is up to 4. First, we performed word alignment on the FBIS parallel corpus using GIZA++ (Och and Ney, 2000) in both directions. The word alignments of both directions were then combined into a single set of alignments using the "diag-and" method of Koehn et al. (2003). Based on these alignments, synchronous CFG rules were then extracted from the corpus. While Hiero was extracting grammar rules, we gathered WSD training data by following the procedure described in Section 4.
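Since BLEU figures prominently below, here is a compact sketch of case-insensitive BLEU up to 4-grams for a single hypothesis/reference pair. It is our simplified illustration only: the real NIST v11a script handles multiple references and corpus-level aggregation.

    import math
    from collections import Counter

    def ngrams(tokens, n):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    def bleu(hypothesis, reference, max_n=4):
        """Simplified sentence-level BLEU: clipped modified n-gram precision
        (n = 1..4) against one reference, plus the brevity penalty."""
        hyp = hypothesis.lower().split()
        ref = reference.lower().split()
        log_prec = 0.0
        for n in range(1, max_n + 1):
            h, r = ngrams(hyp, n), ngrams(ref, n)
            clipped = sum(min(c, r[g]) for g, c in h.items())
            total = max(1, sum(h.values()))
            log_prec += math.log(max(clipped, 1e-9) / total) / max_n
        bp = 1.0 if len(hyp) > len(ref) else math.exp(1 - len(ref) / max(1, len(hyp)))
        return bp * math.exp(log_prec)

    # Precision-based scoring: extra correct-looking words can still lower BLEU
    # when the references phrase things differently (see the analysis below).
    print(bleu("will be unable to obtain more aid", "will not gain more aid"))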
Hiero Results
Using the MT 2002 test set, we ran minimum-error rate training (MERT) (Och, 2003) with the decoder to tune the weights for each feature. The weights obtained are shown in the row Hiero of Table 2. Using these weights, we ran Hiero's decoder to perform the actual translation of the MT 2003 test sentences and obtained a BLEU score of 29.73, as shown in the row Hiero of Table 1. This is higher than the score of 28.77 reported in (Chiang, 2005), perhaps due to differences in word segmentation, etc. Note that, compared with the MT systems used in (Carpuat and Wu, 2005) and (Cabezas and Resnik, 2005), the Hiero system we are using represents a much stronger baseline MT system upon which the WSD system must improve.
Hiero+WSD Results
We then added the WSD features of Section 3.1 into Hiero and reran the experiment. The weights obtained by MERT are shown in the row Hiero+WSD of Table 2. We note that a negative weight is learnt for Ptywsd. This means that, in general, the model prefers grammar rules having chunks that match WSD translations, which matches our intuition. Using the weights obtained, we translated the test sentences and obtained a BLEU score of 30.30, as shown in the row Hiero+WSD of Table 1. The improvement of 0.57 is statistically significant at p < 0.05 using the sign-test described by Collins et al. (2005), with 374 sentences improved (+1), 318 degraded (−1) and 227 unchanged (0). Using the bootstrap-sampling test described in (Koehn, 2004b), the improvement is also statistically significant at p < 0.05. Though the improvement is modest, it is statistically significant, and this positive result is important in view of the negative findings in (Carpuat and Wu, 2005) that WSD does not help MT. Furthermore, note that Hiero+WSD has higher n-gram precisions than Hiero.
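The sign test over these counts can be reproduced with a two-sided exact binomial test. The pairing of sentences and the judgment criterion follow Collins et al. (2005); the snippet below is our own illustration of the arithmetic, with ties discarded as usual.

    from math import comb

    def sign_test_p(better, worse):
        """Two-sided exact binomial sign test; tied sentences are discarded."""
        n = better + worse
        k = max(better, worse)
        tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
        return min(1.0, 2 * tail)

    print(sign_test_p(374, 318))  # ~0.04, i.e. significant at p < 0.05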
Analysis
Ideally, the WSD system should be suggesting high-quality translations which are frequently part of the reference sentences. To determine this, we note the set of grammar rules used in the best derivation for translating each test sentence. From the rules of each test sentence, we tabulated the set of translations proposed by the WSD system and checked whether they are found in the associated reference sentences.
Table 3: Number of WSD translations used and the proportion that matches against the respective reference sentences. (WSD translations longer than 4 words are very sparse, with fewer than 10 occurrences, and thus are not shown.)

A number of translations proposed by the WSD system were used for each sentence; when limited to the set of 374 sentences which were judged by the Collins sign-test to have better translations from Hiero+WSD than from Hiero, a higher number (11.14) of proposed translations were used on average. Further, for the entire set of test sentences, 73.01% of the proposed translations are found in the reference sentences. This increased to a proportion of 73.22% when limited to the set of 374 sentences. These figures show that having more, and higher-quality, proposed translations contributed to the set of 374 sentences being better translations than their respective original translations from Hiero. Table 3 gives a detailed breakdown of these figures according to the number of words in each proposed translation. For instance, over all the test sentences, the WSD module gave 7087 translations of single-word length, and 77.31% of these translations match their respective reference sentences. We note that although the proportion of matching 2-word translations is slightly lower for the set of 374 sentences, the proportion increases for translations having more words.
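A tally of this kind takes only a few lines to compute. The record format below (one proposed translation paired with its sentence's reference) and the simple substring check are illustrative assumptions, not the paper's actual bookkeeping.

    from collections import defaultdict

    def match_stats(records):
        """records: iterable of (proposed_translation, reference_sentence) pairs.
        Returns {length_in_words: (count, percent_found_in_reference)}."""
        used, hit = defaultdict(int), defaultdict(int)
        for translation, reference in records:
            n = len(translation.split())
            used[n] += 1
            if translation.lower() in reference.lower():   # simple substring check
                hit[n] += 1
        return {n: (used[n], 100.0 * hit[n] / used[n]) for n in sorted(used)}

    demo = [("every month", "he goes there every month to visit"),
            ("go to", "they will go to the city"),
            ("all ethnic groups", "people of all nationalities across the country")]
    print(match_stats(demo))  # {2: (2, 100.0), 3: (1, 0.0)}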
After the experiments in Section 6 were completed, we visually inspected the translation output of Hiero and Hiero+WSD to categorize the ways in which integrating WSD contributes to better translations. The first way in which WSD helps is when it enables the integrated Hiero+WSD system to output extra appropriate English words. For example, the translations for the Chinese sentence ". . . " are as follows.
• Hiero: . . . or other bad behavior ", will be more aid and other concessions.
• Hiero+WSD: . . . or other bad behavior ", will be unable to obtain more aid and other concessions.
Here, the Chinese words " " are not translated by Hiero at all. By providing the correct translation of "unable to obtain" for " ", the translation output of Hiero+WSD is more complete.
A second way in which WSD helps is by correcting a previously incorrect translation. For example, for the Chinese sentence ". . . . . . ", the WSD system helps to correct Hiero's original translation by providing the correct translation of "all ethnic groups" for the Chinese phrase " ":
• Hiero: . . . , and people of all nationalities across the country, . . .
• Hiero+WSD: . . . , and people of all ethnic groups across the country, . . .
We also looked at the set of 318 sentences that were judged by the Collins sign-test to be worse translations. We found that in some situations, Hiero+WSD has provided extra appropriate English words, but those particular words are not used in the reference sentences. An interesting example is the translation of the Chinese sentence " ".
• Hiero: Australian foreign minister said that North Korea bad behavior will be more aid
• Hiero+WSD: Australian foreign minister said that North Korea bad behavior will be unable to obtain more aid
This is similar to the example mentioned earlier. In this case, however, the extra English words provided by Hiero+WSD, though appropriate, do not result in more n-gram matches, as the reference sentences used phrases such as "will not gain", "will not get", etc. Since the BLEU metric is precision-based, the longer sentence translation by Hiero+WSD gets a lower BLEU score instead.
Conclusion
We have shown that WSD improves the translation performance of a state-of-the-art hierarchical phrase-based statistical MT system and this improvement is statistically significant. We have also demonstrated one way to integrate a WSD system into an MT system without introducing any rules that compete against existing rules, and where the feature-weight tuning and decoding place the WSD system on an equal footing with the other model components. For future work, an immediate step would be for the WSD classifier to provide translations for longer Chinese phrases. Also, different alternatives could be tried to match the translations provided by the WSD classifier against the chunks of rules. Finally, besides our proposed approach of integrating WSD into statistical MT via the introduction of two new features, we could explore other alternative ways of integration. | 2014-07-01T00:00:00.000Z | 2007-06-01T00:00:00.000 | {
"year": 2007,
"sha1": "bbe013543e9c1d8f00499036121c363ad3ab3d7d",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ACL",
"pdf_hash": "e7170c30db5995a757c68da17c3c2c4cf4108353",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
147703980 | pes2o/s2orc | v3-fos-license | A Global Compact Result for a Fractional Elliptic Problem with Hardy term and critical non-linearity on the whole space
In this paper, we deal with a fractional elliptic equation with critical Sobolev nonlinearity and Hardy term $$ (-\Delta)^{\alpha} u-\mu\frac{u}{|x|^{2\alpha}}+a(x) u=|u|^{2^*-2}u+k(x)|u|^{q-2}u \quad (*)$$ $$ u\,\in\,H^\alpha({\mathbb R}^N),$$ where $2<q<2^*$, $N>4\alpha$, $2^*=2N/(N-2\alpha)$ is the critical Sobolev exponent, and $a(x),k(x)\in C({\mathbb R}^N)$. Through a compactness analysis of the functional associated to $(*)$, we obtain the existence of positive solutions for $(*)$ under certain assumptions on $a(x),k(x)$.
Recently the fractional Laplacian and more general nonlocal operators of elliptic type have been widely studied, both for their interesting theoretical structure and for concrete applications in many fields such as optimization, finance, phase transitions, stratified materials, anomalous diffusion and so on (see [2,5,7,8,14,21,23,24]). In particular, a lot of results have been accumulated for elliptic equations with critical nonlinearity related to (1.1). In [5], Dipierro et al. considered the critical problem with Hardy-Leray potential (1.2), where Ḣ^α(R^N) is defined in (1.6). They proved the existence, certain qualitative properties and the asymptotic behavior of positive solutions to (1.2). Ghoussoub and Shakerian in [9] investigated the double critical problem (1.3), with µ > 0, 0 < s < 2. Through a non-compactness analysis of the Palais-Smale sequences of (1.3), the existence of solutions was obtained. The authors in [11] established a concentration-compactness result for a fractional Schrödinger equation with subcritical nonlinearity f(x, u). Motivated by [5,9,11,12,27], we consider the existence of positive solutions for problem (1.1) in R^N. The main interest for this type of problem, in addition to the nonlocal fractional Laplacian, is the presence of the singular potential 1/|x|^{2α} related to the fractional Hardy inequality. We recall the Hardy inequality ([5]): $$\Lambda_{N,\alpha}\int_{{\mathbb R}^N}\frac{|u(x)|^2}{|x|^{2\alpha}}\,dx\;\le\;\int_{{\mathbb R}^N}\int_{{\mathbb R}^N}\frac{|u(x)-u(y)|^2}{|x-y|^{N+2\alpha}}\,dx\,dy,\qquad \forall u\in C^{\infty}_0({\mathbb R}^N), \quad (1.4)$$ where Λ_{N,α} > 0 is a constant. The Sobolev embedding Ḣ^α(R^N) ↪ L^2(|x|^{−2α}, R^N) is not compact, even locally, in any neighborhood of zero. As is well known, the loss of compactness of the embeddings is one of the main difficulties for elliptic problems with critical nonlinearities. Problem (1.1) has three features, the critical Sobolev term, the Hardy term and the unbounded domain, which lead to the non-compactness of the embeddings. In [5] and [9], the authors could consider the solutions of critical problems in the homogeneous fractional Sobolev space Ḣ^α(R^N), while we must deal with (1.1) in the nonhomogeneous fractional Sobolev space H^α(R^N), given the presence of subcritical lower-order terms in (1.1). This is why the methods in [5] and [9] cannot be used directly for (1.1). As far as we know, existence results for fractional elliptic problems with a mixture of critical Sobolev terms, a Hardy term and subcritical terms are relatively new. To overcome the difficulties caused by the lack of compactness, we carry out a non-compactness analysis which can distinctly express all the parts which cause non-compactness. As a result, we are able to obtain the existence of nontrivial solutions of the elliptic problem with the critical nonlinear term on an unbounded domain by getting rid of these noncompact factors. To be more specific, for the Palais-Smale sequences of the variational functional corresponding to (1.1), we first establish a complete non-compactness expression which includes all the blowing-up bubbles caused by the critical Sobolev nonlinearity, the Hardy term and the unbounded domain. Then we derive the existence of positive solutions for (1.1). Our methods are based on some techniques of [4,11,13,16,19,20,25,26].
Before introducing our main results, we give some notations and assumptions.
Notations and assumptions:
Denote c and C as arbitrary constants which may change from line to line. Let B(x, r) denote a ball centered at x with radius r and B(x, r) C = R N \ B(x, r).
We define the operator (−∆)^α u via the Fourier transform. Let Ḣ^α(R^N) be the homogeneous fractional Sobolev space, obtained as the completion of C^∞_0(R^N) with respect to the Gagliardo seminorm, and denote by H^α(R^N) the usual nonhomogeneous fractional Sobolev space with its standard norm. Let u^+ = max{u, 0} and u^− = u^+ − u. From the proof of (2.15) in [15], a corresponding inequality follows. Recall the definition of the Morrey space: a measurable function u : R^N → R belongs to the Morrey space, with p ∈ [1, ∞) and ν ∈ (0, N], if and only if the associated weighted integral bound holds. By the Hölder inequality, one can verify the related embeddings (refer to [14]). Next we give the definition of the Palais-Smale sequence; let X be a Banach space (see (1.11)). In this paper we assume hypotheses (a) and (b); in the following, we assume that a(x), k(x) always satisfy (a) and (b). The energy functional associated with (1.1) is defined for all u ∈ H^α(R^N). Finally, we present some problems associated to (1.1) as follows.
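For orientation, here are the standard forms of the two definitions referred to above, which the extraction lost. These are our reconstructions following the usual conventions, not the paper's exact text; the normalization of the fractional Laplacian in particular may differ by a constant.

    % Assumed standard definitions (our reconstruction):
    \[
      (-\Delta)^{\alpha} u \;=\; \mathcal{F}^{-1}\!\big( |\xi|^{2\alpha}\, \mathcal{F}u \big),
      \qquad u \in \mathcal{S}(\mathbb{R}^N),
    \]
    \[
      \{u_n\} \subset X \text{ is a } (PS)_d \text{ sequence for } I
      \iff I(u_n) \to d \ \text{ and } \ I'(u_n) \to 0 \ \text{in } X^{*}.
    \]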
The limit equation of (1.1) involving the subcritical and critical terms is (1.12), with its corresponding variational functional. The limit equation of (1.1) involving the Hardy term and the critical Sobolev nonlinearity is (1.13), with corresponding variational functional I_µ. The limit equation of (1.1) involving only the critical Sobolev nonlinearity is (1.14), with corresponding variational functional I_0. Here α_µ ∈ (0, (N−2α)/2) is a suitable parameter whose explicit value is determined as the unique solution to the corresponding equation, and ϕ_{α,N} is strictly increasing. All the positive solutions of (1.13) are of the explicit form given there. In particular, for µ = 0, the solutions take a classical form (refer to [6]), where C > 0 is a constant. These solutions U_{ε,y} are also minimizers for the Sobolev quotient. It is known that N ≠ ∅, since problem (1.12) has at least one positive solution if N > 2α (see Theorem 1.3 in [28]) for 2 < q < 2^* and k̄ > λ^* (where λ^* > 0 is a positive constant defined in [28]).
The main result of our paper is as follows: there exist sequences and limit profiles such that, up to a subsequence, the decomposition (1.25) holds, where u and u^k (1 ≤ k ≤ l_1) satisfy the corresponding limit equations. In particular, if u ≢ 0, then u is a nontrivial weak solution of (1.1). Note that the corresponding sum in (1.25) will be treated as zero if l_i = 0 (i = 1, 2, 3).
Remarks:
1) Similarly to Corollary 3.3 in [19], one can show that any Palais-Smale sequence for I at a level which is not of the critical form above gives rise to a non-trivial weak solution of equation (1.1).
2) In our non-compactness analysis, we prove that blowing-up positive Palais-Smale sequences can carry exactly three kinds of bubbles. Up to harmless constants, they are of the explicit forms indicated above, where u is the solution of (1.12). For any Palais-Smale sequence u_n for I, ruling out these bubbles yields the existence of a non-trivial weak solution of equation (1.1).
Using the compactness results and the Mountain Pass Theorem [1] we prove the following existence result.
Then (1.1) has a nontrivial solution u ∈ H^α(R^N) which satisfies the stated energy estimate. This paper is organized as follows. In Section 2, we prove Theorem 1.1 by carefully analyzing the features of a positive Palais-Smale sequence for I. Theorem 1.2 is proved in Section 3 by applying Theorem 1.1 and the Mountain Pass Theorem. Finally, we collect some preliminaries in the last section as an appendix.
Non-compactness analysis
In this section, we prove Theorem 1.1 by using the Concentration-Compactness Principle and a delicate analysis of the Palais-Smale sequences of I. First, we give the following lemmas.
Then, up to a subsequence, there exist two sequences {r_n} ⊂ R^+ and {x_n} ⊂ R^N such that the concentration alternative below holds. Proof. By Theorem 1 in [16], there exists a constant c > 0 such that the corresponding lower bound holds. From (2.5), we may find r_n > 0 and x_n ∈ R^N such that, for n large enough, the localized mass estimate holds. Since {u_n} is bounded in Ḣ^α(R^N), from the scaling and translation invariance of Ḣ^α(R^N), {ū_n} is also bounded in Ḣ^α(R^N); therefore, up to a subsequence (still denoted by ū_n), it converges weakly. If x_n/r_n is bounded, there exists an R̄ > 1 such that B(x_n/r_n, 1) ⊂ B(0, R̄), where R̄ > 1. Obviously we have w ≢ 0. From (2.8) and (2.9), the proof of Lemma 2.1 is complete.
Proof. First, we prove that v_0 solves (1.14) and that the energy splits as in (2.10). The last equality in (2.10) holds since the cross terms vanish. Thus v_0 is a nontrivial critical point of I_0. By Lemma 4.5, (1.20) and the fact that N > 4α, the required integrability follows. By the Brézis-Lieb Lemma and the weak convergence, similarly to Lemma 4.6 in the Appendix, we can prove the corresponding energy identity as n → ∞. This completes the proof.
Proof. First, we prove that v_0 solves (1.13) and that I(z_n) = I(v_n) − I_µ(v_0). Fix a ball B(0, r) and a test function φ ∈ C^∞_0(B(0, r)). The last equality in (2.14) holds since the remainder terms vanish. Thus v_0 is a nontrivial critical point of I_µ. Noting that N > 4α, µ < ϕ_{α,N}((N−4α)/2) and ϕ_{α,N} is strictly increasing, the required estimates follow by Lemma 4.5 and (1.16). From (2.15) and v_0 ∈ L^p(R^N) for all p ∈ [2, 2^*), it follows that z_n ⇀ 0 in H^α(R^N) as n → ∞. Now we prove that {z_n} is a Palais-Smale sequence of I at level d_1 − I_µ(v_0). By the Brézis-Lieb Lemma and the weak convergence, similarly to Lemma 4.6 in the Appendix, we can prove the corresponding convergence as n → ∞. This completes the proof.
Proof of Theorem 1.1. By Lemma 4.3 in the appendix, we can assume that {u_n} is bounded. Up to a subsequence, as n → ∞, we assume the weak convergence below; without loss of generality, we may also assume the limit l exists. In fact, if l = 0, Theorem 1.1 is proved with l_1 = 0, l_2 = 0, l_3 = 0.
Step 1: Getting rid of the blowing-up bubbles caused by unbounded domains.
Suppose there exists a constant 0 < δ < ∞ such that the corresponding bound holds, where 0 < λ < 1. Thus there exists a δ̄ > 0 such that ||v_n||²_{L²} ≥ δ̄ > 0. By Lemma 4.1, there exists a subsequence, still denoted by {v_n}, such that one of the following two cases occurs.
To proceed, we first construct the Palais-Smale sequences of I ∞ .
We claim that v_0 ≢ 0. From (2.26), we may assume that there exists a sequence {y_n} satisfying (2.27), where the last equality but one is a result of (2.31); therefore, as n → ∞, z_n ⇀ 0 in H^α(R^N), and z_n is a Palais-Smale sequence of I. From (4.7) in Lemma 4.4, it follows that ||v_0^−||_{H^α} = 0, that is, v_0 ≥ 0 a.e. in R^N. Then, by the Brézis-Lieb Lemma and (4.7), there exists a constant c > 0 such that the stated lower bound holds, where the last inequality follows from the fact that v_0 ≢ 0. If ||z_n||_{L^q(R^N)} → δ_2 > 0 as n → ∞, then from (2.38) and the boundedness of ||v_n||_{L^q}, one can repeat Step 1 finitely many times (l_1 times). Thus we obtain a new Palais-Smale sequence of I, without loss of generality still denoted by v_n, such that the corresponding convergence holds as n → ∞.
Step 2: Getting rid of the blowing-up bubbles caused by the critical terms.
Suppose there exists 0 < δ < ∞ such that the corresponding bound holds. Now we claim that r_n → 0 as n → ∞. In fact, there exists an R_1 > 0 such that (2.46) holds. If |x_n/r_n| → ∞, then there exists a constant c̄ such that (2.47) holds. Then, from (2.41), (2.47) and the fact that q < 2^*, it follows that r_n → 0. Similarly, if x_n/r_n is bounded, we also have that r_n → 0.
For the case that x_n/r_n is bounded and v̄_n is rescaled accordingly, it follows from Lemma 2.2 that {z_n} is a Palais-Smale sequence of I satisfying the stated estimates with R^1_n → 0. Then from (2.23) the analogous estimates follow, with R^1_n → 0. From (4.7), we have that z_n ≥ 0 a.e. in R^N. From Lemma 4.7, let a = v_n and let b be the corresponding bubble. For the case that |x_n/r_n| → ∞, v̄_n is rescaled in the same way. If there still exists a δ̄ > 0 for which the mass bound holds, then we repeat the previous argument. From (2.52) and the energy quantization, we deduce that the iteration must stop after finitely many steps. That is to say, there exist nonnegative constants l_2, l_3 and a new Palais-Smale sequence of I, (without loss of generality) denoted by {v_n}, such that the stated convergences hold as n → ∞. From (2.61), we deduce that, for a fixed u ≢ 0 in H^α(R^N), the corresponding estimates hold. Hence, there exists r_0 > 0 small such that I(u) is positive on the sphere of radius r_0. As a consequence, I(u) satisfies the geometric structure of the Mountain Pass Theorem. Now define c^* := inf over the mountain-pass paths. To complete the proof of Theorem 1.2, we need to verify that I(u) satisfies the local Palais-Smale conditions. According to Remark 1), we only need to verify the level bound (3.2). In fact, from (1.20) it is easy to calculate the corresponding estimates. Denote by t_ε the point attaining max_{t>0} I(t v_ε); similarly to the proof of Lemma 3.5 in [3], we can prove that t_ε is uniformly bounded. In fact, we consider the function in (3.7). Since lim_{t→+∞} h(t) = −∞ and h(t) > 0 when t is close to 0, max_{t>0} h(t) is attained. Since k(x) > 0, from (3.3) and (3.4), for ε sufficiently small, we obtain (3.10). Choosing ε > 0 small enough, by (3.3)-(3.5), there exists a constant α_1 > 0 such that t_ε > α_1 > 0. Combining this with (3.9) implies that t_ε is bounded for ε > 0 small enough.
This completes the proof of (3.2). By the definition of c^*, we have c^* < (α/N) S_{α,µ}^{N/(2α)}.
Appendix
In this appendix, we give some lemmas and detailed proofs for the convenience of the reader. Here λ > 0 is fixed. Then there exists a subsequence {ρ_{n_k}} satisfying one of the following two possibilities: (1) (Vanishing): the local integrals ∫ |u_n(x)|² dx → 0 as n → ∞.
Then u_n → 0 in L^q(R^N) for 2 < q < 2N/(N−2α). Proof. Since a(x) ≥ 0, ā > 0 and the infimum condition holds, the interpolation estimate applies for 2 < q < 2^*, and it follows that {u_n} is bounded in H^α(R^N) for 2 < q < 2^*. Since the energy is bounded below, we have d ≥ 0. Suppose now that d = 0; we obtain from the above inequality that lim_{n→∞} ||u_n||_{H^α(R^N)} = 0.
Lemma 4.4. Let {u_n} be a Palais-Smale sequence of I at level d ∈ R and u_n^+ = max{u_n, 0}. Then {u_n^+} is also a Palais-Smale sequence of I at level d.
Next we claim that u > 0 in R^N. Otherwise, there exists x_1 ∈ R^N such that u(x_1) = 0. Since u is lower semicontinuous in B(x_1, 1/2), from Proposition 2.2.8 in [22] it follows that u ≡ 0 in R^N. This contradicts the assumption that u is nontrivial.
Let {u n } be a Palais-Smale sequence at level d. Up to a subsequence, we assume that u n ⇀ u in H α (R N ) as n → ∞.
Then, by the Brézis-Lieb Lemma in [1], as n → ∞ we have the corresponding splitting. Lemma 4.7. Assume t ≥ b > 0 and q > 1; then the stated elementary inequality holds. | 2019-05-07T09:15:25.000Z | 2019-05-07T00:00:00.000 | {
"year": 2019,
"sha1": "b99aecef44f591bb8f33d25ddcf07b365ee8f2e4",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "b99aecef44f591bb8f33d25ddcf07b365ee8f2e4",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
40937038 | pes2o/s2orc | v3-fos-license | Extensive lipoma in chin region. Case report
INTRODUCTION
Lipoma may be classified as a benign neoplasm 1,2 that affects soft tissue 2 . This tumor is related to mature adipose tissue 1,2 where it is commonly found in the mesenchymal region, according to the WHO classification 1 . Lipomas represent the most common mesenchymal tumors and are found in regions in which adipose tissue is normally present 3 .
The purpose of the present study is to report a clinical case of a very extensive intra-oral lipoma, located in the mentonian region; also, to conduct a review of the literature about this tumor.
CASE REPORT
The patient, a 62-year-old woman, presented with the primary complaint of swelling in the posterior region of the left mandible, with approximately 2 years of asymptomatic clinical development, reporting paresthesia on the left side of the lower lip as well as the ipsilateral jugal mucosa. The patient claimed no history of smoking, alcohol consumption or other clinically relevant conditions. During the extra-oral evaluation, there were no clinical signs of swelling. The intra-oral clinical exam revealed the absence of all upper and lower dental elements and the presence of significant swelling in the posterior region of the left mandible extending to the region of the jugal mucosa, with no signs of inflammation; the tumor was flaccid to palpation and had well-defined boundaries (Figure 1).
The patient reported previous surgery at the site to perform a biopsy, and the result of the histopathological examination was lipoma. The proposed treatment was the surgical removal of the tumor under local anesthesia in an ambulatory setting.
After blocking the buccal and inferior alveolar nerves, local infiltration was also performed to improve hemostasis. An incision was made in the region immediately below the tumor, in the region of the oral vestibule. This was held open such that it was possible to locate the mentonian nerve, thereby enabling the dissection and excision of the lesion. Afterwards, muscle adhesions related to the tumor were separated and excised (Figure 2).
Macroscopic examination revealed a nodular tumor with yellowish coloring, similar to adipose tissue ( Figure 2). Microscopic examination revealed the presence of adipose cells, which diagnosis is compatible with lipoma ( Figure 3).
During post-operative clinical follow-up after 14 days, local scarring and absence of signs of recurrence of the tumor were found (Figure 4). During the clinical examination, the patient showed improvement in respect to the area of paresthesia.
DISCUSSION
The first report of OL was made by Roux in 1841 7,8, in which an alveolar mass was described and referred to as a "yellowish epulis" 8. Lipomas are mesenchymal tumors found most frequently in soft tissue, but they also occur, rarely, in the mouth 2. According to the 2002 WHO classification, lipomas usually present as asymptomatic soft-tissue tumors, except for cases in which their location is related to compression of nerve structures 1. This same symptom was reported in this clinical case, where improvement in pain was obtained following removal of the tumor from the region of the mentonian nerve. Microscopically, it is not possible to distinguish normal adipose tissue from lipomas; however, metabolic differences are found, since the fat in lipomas is not mobilized as a source of energy, as happens with normal adipose tissue. This fact is related to the activity of lipoprotein lipase, which is notably greater in lipomas 5,6,9. Lipomas may be classified as classic or variant, according to the amount and type of tissue found. These variants may be angiolipoma, chondrolipoma, myolipoma and pleomorphic lipoma, each with specific clinical and histological characteristics 2,4. In a survey of 125 cases of OL, most cases were found in male patients (91 cases), with a mean age of around 52 years, and 4 cases were found in pediatric patients. In regard to location, 30 cases were found in the parotid gland, 29 in the oral mucosa, 21 in the lips, 15 in the submandibular region, 15 in the tongue, 6 in the palate, 5 in the floor of the mouth, and 2 in the buccal vestibule. Most of the patients presented asymptomatic growth. The tumors were classified histologically as lipomas (62 cases), spindle cell lipomas (59 cases), fibrolipomas (2 cases) and chondrolipomas (2 cases).
Zhong et al. 3 evaluated lipomas in the maxillofacial region using ultrasonography in a study conducted with 22 patients. The mean age of the patients was 47 years, most of the patients were men, and the submandibular region was the most frequent location of these tumors. The ultrasonography of these patients revealed the presence of elliptical tumors, covered with an intact or partially intact capsule, having interiors with hypoechoic images. All patients were treated with surgical excision, and no recurrence was found in any cases.
In a study involving 58 cases of OL, Manor et al. 5 found no gender preference, with the mean age of the patients at 59 years. Regarding the location of these tumors, most of the cases were found in the region of the oral mucosa (31 cases), tongue (10 cases), lips (6 cases), floor of the mouth (6 cases) and the buccal vestibule (5 cases). Most of the patients complained of asymptomatic swelling. Histological analysis revealed the predominance of lipomas (28 cases), followed by fibrolipomas (19 cases), intramuscular lipoma (4 cases), spindle cell lipoma (3 cases), minor salivary gland lipoma (2 cases), and angiolipoma (2 cases). All cases were treated by surgical excision and no complications or recurrences were found during post-operative follow-up.
According to the latest WHO 1 classification, lipomas most frequently affect patients between the ages of 40 and 60 years, and are most common in obese patients. According to the same classification, location in the intra-oral region is found in a small number of cases in the literature.
The present study reports the presence of a lipoma in the oral region, specifically in the region of the oral mucosa. This emphasizes that this is the most common location for this tumor, and the age of the patient is also within the range reported in the literature. Only the gender distribution is debated in the literature, since some studies emphasize a predominance of cases in male patients, while others report no gender preference in cases of OL. The tumor was treated surgically in the present clinical case, as this is the treatment proposed in the literature, with no reports of recurrence of the tumor. In the present study, a 6-month post-operative follow-up found appropriate local healing and no signs of recurrence of the tumor.
"year": 2014,
"sha1": "650cfd4dea2d5590d7fa9f7729ad2a46166616c6",
"oa_license": "CCBY",
"oa_url": "https://www.scielo.br/j/rounesp/a/RqBnqnWYFzGLmKmRL4Tt5xf/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "3f70d78a4ba3f047f348807d57afe2bb730ad3dd",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
151739133 | pes2o/s2orc | v3-fos-license | Grasping the Ineffable : Interdisciplinary Perspectives on Mood
The question of mood deeply affects a variety of disciplines such as psychology, sociology, philosophy and the arts. As an emerging new field in both disciplinary and interdisciplinary research, mood became the focal point of the conference ‘Mood: Aesthetic, Psychological and Philosophical Perspectives’, held at Warwick in 2016, which set out to explore the nature of mood and develop ways of conceptualising and researching it through an interdisciplinary lens. A series of keynote lectures, creative performances and parallel sessions led to a varied and productive exchange of disciplinary perspectives that helped to outline the main questions and concerns the emerging research topic of mood constitutes, uncovering the pivotal role it plays in aesthetic, social and political contexts.
To many, 2016 was a particularly tumultuous year in world history: from the divisive Brexit vote to the controversial outcome of the American presidential election, from the ongoing strains of the European refugee crisis and numerous atrocious terrorist attacks all around the world to the perceived mass dying of major artists and public personalities like David Bowie and Muhammad Ali, the public mood of the past year may have been at an all-time low. The impactful social and political moods of the recent past are complemented by a present-day emphasis on private moods and ways of managing them: on the popular music streaming platform Spotify, songs are now not only ordered by genres, but also by mood, according to their website in order to match listeners' moods and to 'soundtrack [their] life with a playlist to fit any moment'. Likewise, the new 'Elf Emmit' device is advertised as a 'digital metronome' that means to physiologically induce different moods: 'sleep', 'antistress', 'concentrate', 'meditate' and 'deep learning', while the video chat application Skype allows users to add a 'Mood Message' to their profile to communicate their states of mind to their social contacts, as do other social media platforms in similar ways. 'Mood lights', 'mood rings' and other products are meant to adjust and express our daily moods, while pharmaceutical 'mood stabilisers' like anti-depressants are supposed to manage the wide-spread psychiatric mood disorders of our day. Our age's obsession with moods, private and public, may betray a sense of crisis and fracture in many areas of modern life, but it may also showcase a growing awareness of the conditions of everyday life and the ways in which moods enable and affect our every experience. In his defining work Being and Time (1927), Martin Heidegger postulates that we are never not in a mood or 'attunement' (in German, Stimmung, cf.
Heidegger, 1962: 172f.). The inconspicuousness of our daily moods, many of which can be barely noticeable, often only becomes apparent in a state of Verstimmung, of being 'out of tune' with the world or being bad-tempered. 'One mood can be replaced by another', contemporary philosopher Lars Svendsen finds, 'but it is impossible to leave attunement altogether' (Svendsen, 2005: 114). The omnipresence of mood in our daily lives, indeed the way it enables them in the first place, and the manner in which it thereby informs all academic thought and activity, call for a study of its nature and impact in a number of areas, including, though not limited to, philosophy, psychology, sociology and the arts. As such, mood is inherently interdisciplinary: The works of Heidegger, Sartre, Kierkegaard and Kant have established it as a central concern of modern philosophy. Psychology and psychiatry illuminate the role of mood in cognition, behaviour and psychosomatic illnesses. Sociology investigates mood in social interactions, groups and communities, while political science studies its impact on politics on a national and international stage. Finally, the arts examine the emergence and transmission of aesthetic moods in different media, including literature, film, theatre and music. Recent work on this phenomenon has considerably advanced the study of this theoretically challenging concept and has put it on the map of current research in a variety of disciplines: in literary studies Hans Ulrich Gumbrecht's influential book Atmosphere, Stimmung, Mood: On a Hidden Potential of Literature (2012), in political science John L. Casti's Mood Matters: From Rising Skirt Lengths to the Collapse of World Powers (2010), in philosophy Philosophy's Moods: The Affective Grounds of Thinking (2011), edited by Hagi Kenaan and Ilit Ferber, and in psychology and sociology Jaap van Ginneken's Mood Contagion: Mass Psychology and Collective Behaviour Sociology in the Internet Age (2013). All of the aforementioned disciplines work with a different concept of mood, and all of them have equal claim to it. In a newly emerging field of such vast potential, interdisciplinary work using that synergy only suggests itself, and might even be necessary to grasp a concept as multifaceted and complex as mood is. The conference Mood: Aesthetic, Psychological and Philosophical Perspectives, which took place at the University of Warwick in May 2016, set out to establish a cutting-edge platform for exploring mood in an interdisciplinary way, being the first international event that brought together a wide array of disciplines to discuss this subject matter from a diverse and differentiated perspective. Sponsored by the Humanities Research Centre, the Department of English and Comparative Literary Studies and the Centre for Research in Philosophy, Literature and the Arts at Warwick, the Mood conference featured academic keynote lectures by Hans Ulrich Gumbrecht (Albert Guérard Professor in Comparative Literature, Stanford University), Giovanna Colombetti (Associate Professor in Sociology, Philosophy and Anthropology, University of Exeter) and Hagi Kenaan (Professor of Philosophy, Tel Aviv University) as well as a keynote reading by accomplished nonfiction author Mary Cappello (Professor of English and Creative Writing, University of Rhode Island). In addition to this, another 39 speakers presented their research in twelve panels spread across the two-day event. Including speakers from six continents, based in over a dozen academic disciplines and forms of creative practice, the programme represented the diverse, multi-faceted nature of worldwide research on mood. The initial guiding questions for the event could be summarised as the following: how do concepts of mood and ways of researching it differ among disciplines, and (how) can they be brought together? Where do these disciplines
collide, and what does this tell us about the nature of mood? Is mood primarily a phenomenon of the subject, or is it inherently social? What are the politics of mood, and what is its place in current scholarship across the sciences and the humanities? The discussions that took place over the course of the event demonstrated that the visceral and intersubjective nature of mood itself helped bridge disciplinary differences and boundaries in discussing this phenomenon, as the moods the conference itself produced enveloped delegates in a shared experience encountered with a, naturally, heightened sensitivity.
One of the central recurrent questions addressed throughout the conference was that of the place of mood. In her keynote lecture on the 'extended-mind-thesis' and the concept of incorporation, Giovanna Colombetti argued that, when a mood emerges from the moment-by-moment reciprocal interactivity of a person with an object (such as a musical instrument) and the person experiences the object as part of herself, we can regard the mood as 'extended', in the sense that it is physically constituted not just by the biological organism, but by the hybrid system of 'person-plus-object'. In a joint panel discussing the influence of mood on politics and economics, Dennis Elam (Texas A&M University-San Antonio), Alan Hall and Matt Lampert (both from the Socionomics Institute in Gainesville, GA) presented the Socionomic perspective on mood, according to which moods appear in social waves that shape socio-political reality. From this point of view, mood is an inherently intersubjective social phenomenon. Douglas Bachorik's (University of Durham) paper on the role of mood in religious communities, more specifically in choir-singing, brought together both perspectives in examining forms of embodied affect in the social group of a congregation, thus constituting a 'corporate body'. The workings of mood in art took on a significant role during the event, with Mary Cappello's highly visceral multisensory reading from her new book Life Breaks In: A Mood Almanack (2016) foregrounding its role both in the artistic process and in aesthetic experience, further corroborated by artist Katja K. Hock's (Nottingham Trent University) installation 'Buchenwald', which was showcased at the conference, and her paper on the same theme. Indeed, spaces of mood were explored by a number of contributors, including Jon Arcaraz Puntonet's (University of Navarra) presentation on architecture and rhythm, relating to a design by Spanish architect Fernando Higueras, Joshua Burraway's (UCL) discussion of boredom and urban homelessness in London and Christopher Donaldson's (University of Birmingham) examination of mood and literary representations of the Morecambe Bay Sands.
At the same time, the question of mood's temporality arose in Hagi Kenaan's keynote lecture on 'changing moods' (in the double sense of the expression), in which he proposed that 'there is no such thing as a mood', as moods only exist in the plural sense. Kenaan further argued that every mood already contains the possibility of a future mood and that they are therefore by definition never singular and closed off. In close connection to this idea, Hans Ulrich Gumbrecht's final keynote lecture historicised the concept of mood, tracing it throughout the history of literary criticism and raising the question of why it has, until recently, been a blind spot of criticism. Relating literary moods to the concepts of aesthetic presence and immersion, Gumbrecht's talk built on his recent book on Stimmung, in which he defines moods as a 'presence-related part of existence' (Gumbrecht, 2012: 7). In keeping with such perspectives on the temporality of mood, a number of contributions explored historical and aesthetic instances of mood, including Madeleine Scherer's (Warwick) and Ryan Pepin's (University of Cambridge) papers on moods in the classical literature of Homer and Virgil, respectively, Maria Rita Drumond Viana's (Universidade Federal de Santa Catarina) presentation on Yeats' changing concept of mood in art and Daniel Tiemeyer's (University of Vienna) paper on moods in the music of Viennese modernism.
Another absolutely central concern regarding the nature of mood crystallised out of a number of contributions that discussed the question of agency and intentionality with regard to mood and which, in consequence, posed the question of its politics and ethics. Jonathan Mitchell (Warwick) proposed that, despite common definitions, moods can have intentionality and intentional objects, an idea also discussed in the talks by Colombetti and Emmanuel Ordóñez Angulo (UCL). The notion of intentionality raises the question of how moods affect agency, and by extension, responsibility. In line with this, Alireza Fakhrkonandeh's (Warwick) talk on affect and ethics in the theatre of Howard Barker explored the notions of self and other negotiated through inter-affective relationships in the plays. Likewise, the question of ethics and the 'other' reappeared in a panel on depression, which featured Jake Jackson (Temple University) discussing depressive responsibility and Constantin Mehmel (Warwick) suggesting that depression is a mood that radically 'others' those affected by it. The political dimension of mood, opened up by the Socionomic discussion of social moods and continued in the debates on mood and ethics, also encompassed thoughts on mood and gender, with talks by Mohammad Shahidul Islam Chowdhury (East Delta University) and Mary Harrod (Warwick), as well as postcolonial perspectives explored in Harjinder Singh Majhail's (University of Derby) talk on religious moods in relation to Sikh identity and Oliver Paynel's analysis of mood in Zadie Smith's postcolonial novel White Teeth (2000). The postcolonial point of view hinted towards another crucial aspect in studying mood: its (inter-)cultural dimension. Both Vladimír Gärtner's (Masaryk University) study of the reception of Turkish art music by Western listeners and Hyun Höchsmann's (East China Normal University) discussion of Heidegger's notion of Stimmung in the context of Daoism offered cross-cultural perspectives on mood, which raised the question of how culturally specific moods are and whether they can be 'translated' into other cultures. Manifold other issues revolving around the concept of mood were raised in the course of the conference, underlining the complexity and vast scope of this phenomenon.
In conclusion, by producing its own productive, collegial and sometimes discordant moods, the two-day event provided a rich and impressive display of the state of the art in research on mood in different disciplines and cultural contexts. While uncovering the breadth of mood-related issues in a number of disciplines, the conference also provided a roadmap for future research and allowed us to identify the main questions and concerns for this emerging new interdisciplinary field: the spatiality and temporality of mood, the questions of its intentionality, responsibility and ethics, as well as its cultural specificity and cross-cultural capacity. We are currently planning on furthering the productive potential of this plurality of disciplinary viewpoints, as well as the valuable and indicative tensions that arose from the discussions over the two days, in a collection of essays based on the papers given at this conference. The aim of this publication is to give a comprehensive overview of the state of the art of research on mood in various disciplines, to outline theoretical and practical ways of approaching it through an interdisciplinary lens, and finally, to provide case studies of the role mood plays in various aesthetic, social and political contexts. Through further theoretical and applied research on this pivotal aspect of human life and social interaction, we will gain more insight into the affective dimension that informs both our private and public lives in so many ways.
For more information and to view the programme, visit the conference website: http://www2.warwick.ac.uk/fac/arts/english/research/conferences/moo d2016
"year": 2017,
"sha1": "8707fb33fa043d33b2bb82125cc22f3d93866681",
"oa_license": "CCBY",
"oa_url": "https://exchanges.warwick.ac.uk/article/download/167/190",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "8707fb33fa043d33b2bb82125cc22f3d93866681",
"s2fieldsofstudy": [
"Art",
"Psychology",
"Philosophy",
"Sociology"
],
"extfieldsofstudy": [
"Psychology"
]
} |
235521415 | pes2o/s2orc | v3-fos-license | Treatment options of Adolescent Gestational Diabetes: Effect on Outcome
Objectives: Teenage pregnancy with gestational diabetes mellitus (GDM) offers a real challenge to the health system and needs special care. We aimed to evaluate possible obstetrical and neonatal adverse events of different treatment protocols in adolescent GDM, including lifestyle, metformin (MTF), and insulin. Methods: All teen pregnant women ≤ 19 years old visiting Baghdad Teaching Hospital throughout four years (from June 1, 2016 till May 31, 2020) diagnosed with GDM were included in this cohort study and followed up closely throughout pregnancy and after delivery. Included adolescents were put on lifestyle measures alone during the first week of presentation. Adolescents who reached target glucose measurements were categorized into the lifestyle group, while the other adolescents were randomly allocated into MTF and insulin groups. Also, adolescent pregnant women without GDM were recruited as a control group using computer randomization. Results: The GDM (110 cases) and control (121 individuals) groups had matched general features at recruitment except for family history of diabetes. Also, the GDM treatment groups had matched features. Glycemic readings (fasting and random) were significantly (p < 0.05) higher in the insulin group, with odds ratios (OR) of 1.41 and 1.57, respectively. In the MTF group, a significant protective OR was found for preeclampsia (OR = 0.76, p < 0.05). MTF showed a non-significant protective OR regarding prematurity and five-minute Apgar score > 7 [(OR = 0.83, p = 0.24) and (OR = 0.94, p = 0.73), respectively], and a significant protective association with large for gestational age and admission to the neonatal intensive care unit. Insulin had significantly higher rates of prematurity, small for gestational age, and hypoglycemia [OR = 1.89, 2.53, and 2.84, respectively]. Conclusion: Metformin (MTF) showed fewer pregnancy and neonatal complications in adolescent GDM than insulin and lifestyle.
INTRODUCTION
The teenage (teen) or adolescent period lies within the age range of 10-19 years. 1 Adolescents show a natural rebellion against medical treatment and doctors' instructions, which in turn may lead to more complications, with higher frequencies, if gestational diabetes mellitus (GDM) is added to the equation of pregnancy. 2 Pregnancy is a potential risk factor for glucose intolerance, and insulin sensitivity decreases further with the progression of time until secreted insulin no longer matches insulin resistance, at which point gestational diabetes occurs. On the other hand, teenage pregnancy carries specific hazards affecting pregnancy events and newborn parameters. Accordingly, many health systems are making increasing efforts to find the best approach to deal with such pregnancies. 1,2 Based on the idea of poor compliance in adolescents, and supported by the expected adverse obstetric and neonatal events provoked by both gestational diabetes and young (teen) age, we conducted this prospective cohort study to evaluate possible complications occurring during pregnancy, delivery, and the early neonatal period in gestational diabetic adolescent women with regard to the main treatment options involving lifestyle, insulin, and metformin (MTF). In general, the average age of pregnancy in Iraq is 25.7 years. 3 GDM was diagnosed after 20 weeks' gestation according to the International Association of Diabetes in Pregnancy Study Groups by fasting venous plasma sugar > 91.8 mg/dl, or postprandial glucose at one or two hours > 180 mg/dl and > 153 mg/dl, respectively, when using the 75 g oral glucose tolerance test. 4 Enrolled adolescent mothers were requested to measure their glucose level daily during the first week of enrolment and at least twice a week on a regular basis afterwards, after an eight-hour fast and one hour postprandially, using home glucose measuring devices (Accu-Chek ® Performa, Roche Diabetes Care, Inc.).
METHODS
The hospital guidelines recommended that fasting glucose measurements be 70-90 mg/dl, while random blood sugar one hour postprandially should be < 140 mg/dl as a target. Glycated hemoglobin (HbA1c) was not routinely done due to limited resources.
During the first week of presentation, lifestyle management (including proper education, physical activity, and dietary management) was adopted as the only treatment option for all. The treatment trajectory was decided by the attending obstetrician at the end of the first week of presentation. If the teenage women reached the above-mentioned target glucose measurements, they would continue on the lifestyle option alone, while the rest of the included women were divided into two equal groups using a random selection method: one group was treated with insulin (and lifestyle), and the other with MTF (and lifestyle). Depending on pre-pregnancy body mass index (P-BMI), which was calculated retrospectively, the diabetic diet was estimated at 30 kcal/kg/day for normal-weight individuals (P-BMI = 18.5-24.9) and 25 kcal/kg/day for overweight or obese adolescents (P-BMI ≥ 25). 4
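As a toy illustration of the caloric targets just described, here is a short Python sketch. The cut-offs and rates come from the text above; the function name, interface, and example inputs are our own assumptions.

    def daily_kcal_target(weight_kg, p_bmi):
        """Caloric target per the protocol above: 30 kcal/kg/day for normal
        P-BMI (18.5-24.9), 25 kcal/kg/day for P-BMI >= 25. The protocol only
        specifies these two ranges; underweight is not addressed in the text."""
        if p_bmi >= 25:
            return 25 * weight_kg
        return 30 * weight_kg

    print(daily_kcal_target(70, 26.1))  # 1750 kcal/day
    print(daily_kcal_target(58, 21.0))  # 1740 kcal/day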
MTF dose was one tablet containing 500 mg given after meals three times daily. The maximum dose was 2000 mg/ 24 hours as needed. Also, using computer randomization, adolescent pregnant mothers without gestational diabetes were included as control group. All included women were followed-up until delivery through regular ANC visits (every two weeks till 36 week`s gestation and once weekly later on until delivery) or mobile phone calls made by the researchers in between the visits. MTF was well tolerated by most of the involved women in the MTF group, and only 2 women had poor MTF tolerance because of severe nausea and vomiting. These two women were switched to insulin therapy and dropped out from the study calculations.
A full medical history and examination were done during these visits, and routine basic investigations were performed, such as blood sugar, hemoglobin (Hb) levels, and general urine analysis. Neonatal events were observed and managed by the attending neonatologist, who fully examined the newborns. Compliance with lifestyle, insulin, and MTF was ensured by the attending obstetrician throughout the above-mentioned ANC visits, in addition to the mobile phone calls made by the researchers in between the visits.
Failure of follow-up or switching of treatment protocols during the study period was considered an exclusion criterion. The flowchart in Fig. 1 shows that clearly. [4][5][6]
Definitions: BMI: body weight (kg)/height in square meters (m²). Preeclampsia: blood pressure more than 140/90 mmHg and proteinuria more than 0.3 gram per day. Large for gestational age (LGA): birth weight > 90th percentile of the mean. Small for gestational age (SGA): birth weight < 10th percentile of the mean. Neonatal hypoglycemia: venous plasma glucose < 45 mg/dl after delivery.
Statistical Analysis: Statistical Package for the Social Sciences version 22 was utilized for statistical analysis. Categorical parameters were expressed as percentages, while continuous variables were expressed as mean ± standard deviation (SD). The repeated-measures ANOVA test was used to complete the analysis and comparison. Multiple logistic regression models with multivariable analysis were applied to obstetrical and neonatal events. The obstetrical model was adjusted for possible confounding factors including maternal age, P-BMI, gestational age at involvement, gestational weight gain, family history of diabetes mellitus and hypertension, education level, consanguinity, residence, and presence of polycystic ovary syndrome. The neonatal model was adjusted for maternal age, P-BMI, gestational age at involvement, gestational weight gain, family history of diabetes mellitus and hypertension, education level, consanguinity, residence, polycystic ovary syndrome, preeclampsia, cesarean section (CS), and neonatal weight at birth. Significance was considered when the p value was < 0.05.
Ethical Statement: The scientific and ethical committees at the College of Medicine and Al-Kindy College of Medicine, University of Baghdad, granted the ethical and scientific approvals (No. 1218 and 429, respectively). Informed consent was obtained from all participants in this work, which was performed in line with the Declaration of Helsinki.
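The odds ratios and 95% confidence intervals reported in the Results can be computed from a 2×2 table in a few lines. The sketch below uses the standard Woolf (log) method and hypothetical counts; it is our illustration, not the authors' SPSS procedure.

    import math

    def odds_ratio_ci(a, b, c, d, z=1.96):
        """OR and 95% CI for a 2x2 table:
        a = exposed with outcome, b = exposed without,
        c = unexposed with outcome, d = unexposed without."""
        or_ = (a * d) / (b * c)
        se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR), Woolf method
        lo = math.exp(math.log(or_) - z * se)
        hi = math.exp(math.log(or_) + z * se)
        return or_, (lo, hi)

    # Hypothetical counts for illustration only:
    print(odds_ratio_ci(12, 29, 5, 35))  # OR ~ 2.9, CI roughly (0.9, 9.2)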
RESULTS
The total number of recruited adolescent pregnant women with the diagnosis of GDM was 110, divided into three main groups according to the treatment plan (lifestyle = 29, MTF = 40, and insulin = 41). Another 121 adolescent pregnant women without GDM were involved as a control group. For the three major GDM groups (lifestyle, MTF, and insulin), all the general characteristics were matched and comparable (p ≥ 0.05), as seen in Table-I. This also applied to both GDM and control women, who had matched (p ≥ 0.05) general characteristics at baseline of recruitment and afterwards, except for family history of diabetes, which was significantly (p < 0.05) more common in GDM women than in non-GDM women.
In Table-II, odds ratios (OR) and 95% confidence intervals (CI) were calculated. MTF had a protective association (OR < 1) without being significant (p ≥ 0.05) for fasting blood sugar (FBS), random blood sugar (RBS), and gestational age (GA) at delivery in comparison to lifestyle, while MTF had a significant protective association (OR = 0.76, p < 0.05) with preeclampsia only. More details are seen in Table-II. Moreover, gestational age (GA) estimations at delivery (weeks), presented as mean ± SD, were comparable for lifestyle, MTF, and insulin women (38.7 ± 1.6, 38.6 ± 1.4, and 37.2 ± 1.8, respectively), showing slightly lower values in the insulin group. CS rates for the above-mentioned treatment groups were [n = 16 (55.17%), n = 21 (52.50%), and n = 25 (60.98%), respectively]. These results do not appear in the tables. Neonatal complications in the treatment groups, such as preterm birth, SGA, hypoglycemia, five-minute Apgar score > 7, and admission to the Neonatal Intensive Care Unit (NICU), are presented in Table-III.
DISCUSSION
High-quality evidence is insufficient to determine the differences among various GDM treatment plans and to support a clear decision in clinical practice. 7 However, our search did not find such a comparison in teenage GDM.
Although not significant, P-BMI was higher and gestational weight gain was lower in our adolescent GDM pregnant women than in the non-GDM adolescent pregnant controls; the same pattern was found in the MTF group compared with the lifestyle and insulin groups, which may be considered an advantage. Pregnancy itself is a state of insulin resistance that increases with increasing weight. MTF has a positive influence on insulin sensitivity, which in turn may improve insulin resistance, leading to better control of blood glucose during pregnancy. This idea has been endorsed by many researchers. 8 In GDM, it is preferable to flatten the curve of gestational weight gain, as additional kilograms during pregnancy deepen insulin resistance and worsen glycemic control. 9 In this study, the data revealed superior serum glucose control in the MTF and lifestyle adolescents compared with the insulin group, with the best overall fasting and postprandial serum glucose readings found in the MTF adolescents. This may mirror the greatest improvement in insulin resistance, one of the major factors causing GDM, as indicated by other scientists who noted that MTF achieved faster and better glycemic control, while pregnant women might need time to become accustomed to insulin dosage and timing. 10 Preeclampsia occurred significantly least often in the MTF group compared with the other treatment groups. This has been considered by some workers, who proposed MTF as a protective agent. 11 However, a previously published paper did not agree with this finding. 12 Many factors could affect adolescent preeclampsia and may explain these differences, such as maternal age, weight, and endothelial abnormalities resulting from glucose fluctuations during pregnancy. 13 In our sample, age and P-BMI were matched across all groups, while the insulin adolescents carried an additional risk factor for preeclampsia, having more fluctuations in glycemic readings than the other groups, as noted during the study period.
High rates of operative delivery by CS were found in all treatment groups without significant differences. These rates were higher than in other studies 14,15 but closer to local CS rates regardless of age and GDM. 16 The MTF group had a significantly lower rate of LGA newborns. A Finnish report supported our results, 17 but another study 14 contradicted them; however, the MTF group involved in that study was not pure because of supplemental insulin doses.
Limitations of the study:
The study included a small number of adolescent pregnant women, with no full randomization. However, most of the basic general features of the enrolled cases were matched at the time of recruitment, and it is well known that the prevalence of GDM among teenage pregnant women is very low (1.33%), as estimated by a previous study. 17 Unfortunately, although we conducted this study over four years in a major tertiary center, the sample size was not large enough to allow full randomization. During the teenage period, it is common to have a difficult-to-satisfy personality, to disobey medical advice about diet and lifestyle changes, and to fail to strictly follow the invasive approach of multi-injection insulin therapy. 18 Accordingly, it seems logical that oral MTF would be more acceptable than the other options. 14
CONCLUSION
The MTF treatment option for adolescent GDM had lower rates of maternal and neonatal complications when compared with the other treatment plans, including lifestyle and insulin. | 2021-06-22T17:55:51.920Z | 2021-04-22T00:00:00.000 | {
"year": 2021,
"sha1": "5f8156adb3245b6c76cf55a924ed7ba300e50610",
"oa_license": "CCBY",
"oa_url": "http://www.pjms.org.pk/index.php/pjms/article/download/3966/930",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "66a3cc56ab21249860524d812e774cbf96cb419f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
54052751 | pes2o/s2orc | v3-fos-license | Baryon resonances from a novel fat-link fermion action
We present first results for masses of positive and negative parity excited baryons in lattice QCD using an O(a^2) improved gluon action and a Fat Link Irrelevant Clover (FLIC) fermion action in which only the irrelevant operators are constructed with fat links. The results are in agreement with earlier calculations of N^* resonances using improved actions and exhibit a clear mass splitting between the nucleon and its chiral partner, even for the Wilson fermion action. The results also indicate a splitting between the lowest J^P = 1/2^- states for the two standard nucleon interpolating fields.
INTRODUCTION
Understanding the dynamics responsible for baryon excitations provides valuable insight into the forces which confine quarks inside baryons and into the nature of QCD in the nonperturbative regime. One of the long-standing puzzles in spectroscopy has been the low mass of the first positive parity excitation of the nucleon (the J^P = 1/2^+ N*(1440) Roper resonance) compared with the lowest lying odd parity excitation. Another challenge for spectroscopy is presented by the Λ(1405) with J^P = 1/2^-, whose anomalously small mass has been interpreted as indicating strong coupled channel effects involving Σπ, KN, etc. [1], and a weak overlap with a three valence constituent quark state.
In this paper we present the first results of excited octet baryon mass simulations using an O(a^2) improved gluon action and an improved Fat Link Irrelevant Clover (FLIC) [2] quark action in which only the irrelevant operators are constructed using fat links. Configurations are generated on the new Orion computer cluster dedicated to lattice gauge theories at the CSSM at Adelaide University. After reviewing in Section 2 the main elements of lattice calculations of excited hadron masses, we describe in Section 3 various features of the interpolating fields used in this analysis. In Section 4 we present results for J^P = 1/2^± nucleons and hyperons. Finally, in Section 5 we make concluding remarks and discuss some future extensions of this work.
BARYONS ON THE LATTICE
The history of excited baryons on the lattice is quite brief, although recently there has been growing interest in finding new techniques to isolate excited baryons, motivated partly by the experimental N* program at Jefferson Lab. Previous work on excited baryons on the lattice can be found in Refs. [3-7].
Following standard notation, we define a two-point correlation function for a baryon B as: where χ_B is a (positive parity) baryon interpolating field, and we have suppressed Dirac indices. The choice of interpolating field χ_B is discussed in Section 3 below. For large Euclidean time, the correlation function can be written as a sum of the lowest energy positive and negative parity contributions: where a fixed boundary condition in the time direction is used to remove backward propagating states, and where the overlap of the field χ_B with positive or negative parity states |B±⟩ is parameterized by a coupling strength λ_{B±}, with corresponding energy E_{B±}. The energies of the positive and negative parity states are obtained by taking the trace of G_B with the operator Γ±, where, for p = 0, E_{B±} = M_{B±} and the operator Γ± projects out the mass, M_{B±}, of the baryon B±. In this case, positive parity states propagate in the (1,1) and (2,2) elements of the Dirac matrix of Eq. (2), while negative parity states propagate in the (3,3) and (4,4) elements.
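The display equations referred to here did not survive extraction. For orientation, the standard forms these definitions take in lattice baryon spectroscopy (our reconstruction from the surrounding text, not copied from the original, so conventions may differ) are

G_B(t,\vec p)=\sum_{\vec x} e^{-i\vec p\cdot\vec x}\,\langle\Omega|\,T\{\chi_B(x)\,\bar\chi_B(0)\}\,|\Omega\rangle \;\simeq\; \lambda_{B^+}^{2}\,\frac{\gamma\cdot p+M_{B^+}}{2E_{B^+}}\,e^{-E_{B^+}t} \;+\; \lambda_{B^-}^{2}\,\frac{\gamma\cdot p-M_{B^-}}{2E_{B^-}}\,e^{-E_{B^-}t}

at large Euclidean t, with zero-momentum parity projectors \Gamma_\pm=\tfrac{1}{2}(1\pm\gamma_4), so that \mathrm{tr}\,[\Gamma_\pm\,G_B(t,\vec 0)]\propto e^{-M_{B^\pm}t}.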
INTERPOLATING FIELDS
In this analysis we consider two types of interpolating fields which have been used in the literature. The notation adopted follows that of Leinweber et al. [8]. For the positive parity proton we use as interpolating fields: and where the fields u, d are evaluated at the Euclidean space-time point x, C is the charge conjugation matrix, a, b and c are color labels, and the superscript T denotes the transpose. As pointed out by Leinweber [3], because of the Dirac structure of the "diquark" in the parentheses in Eq. (4), the field χ^{p+}_1 involves both products of upper × upper × upper and lower × lower × upper components of spinors for positive parity baryons, so that in the nonrelativistic limit χ^{p+}_1 = O(1). Furthermore, since the "diquark" couples to total spin 0, one expects an attractive force between the two quarks, and hence a lower energy state than for a state in which the two quarks do not couple to spin 0.
The χ^{p+}_2 interpolating field, on the other hand, is known to have little overlap with the ground state [3,9]. Inspection of the structure of the Dirac matrices in Eq. (5) reveals that it involves products of upper × lower × lower components only for positive parity baryons, so that χ^{p+}_2 = O(p^2/E^2) vanishes in the nonrelativistic limit. As a result of the mixing, the "diquark" term contains a factor σ·p, meaning that the quarks no longer couple to spin 0, but are in a relative L = 1 state. One expects therefore that two-point correlation functions constructed from the interpolating field χ^{p+}_2 are dominated by larger mass states than those arising from χ^{p+}_1. Interpolating fields for a negative parity proton can be constructed by multiplying the positive parity fields by γ_5, χ^{B-} ≡ γ_5 χ^{B+}, which reverses the role of the terms in Eq. (2). While the masses of negative parity baryons are obtained directly from the (positive parity) interpolating fields in (4) and (5) by using the parity projectors Γ±, it is instructive nevertheless to examine the general properties of the negative parity interpolating fields. In contrast to the positive parity case, both the interpolating fields χ^{p-}_1 and χ^{p-}_2 mix upper and lower components, and consequently both χ^{p-}_1 and χ^{p-}_2 are O(p/E). Physically, two nearby J^P = 1/2^- states are observed in the nucleon spectrum. In simple quark models, the splitting of these two orthogonal states is largely attributed to the extent to which scalar diquark configurations compose the wave function [4]. It is reasonable to expect χ^{p-}_1 to have better overlap with scalar diquark dominated states, and thus provide a lower effective mass in the large Euclidean time regime explored in lattice simulations. If the effective mass associated with the χ^p_2 correlator is larger, then this would be evidence of significant overlap of χ^{p-}_2 with the higher lying N(1/2^-) states. In this event, further analysis directed at resolving these two states is warranted.
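Equations (4) and (5) did not survive extraction. The standard proton interpolating fields that this notation refers to, in Leinweber's conventions (quoted here from the general lattice literature, so they should be checked against the original paper), are

\chi^{p+}_1(x)=\epsilon^{abc}\,\big(u^{Ta}(x)\,C\gamma_5\,d^{b}(x)\big)\,u^{c}(x),\qquad \chi^{p+}_2(x)=\epsilon^{abc}\,\big(u^{Ta}(x)\,C\,d^{b}(x)\big)\,\gamma_5\,u^{c}(x).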
RESULTS
In this paper we report results of calculations of octet excited baryon masses performed on a 16^3 × 32 lattice at β = 4.60 with a lattice spacing of a = 0.125(2) fm. The analysis is based on a preliminary sample of 50 configurations generated on the new Orion computer cluster at the CSSM, Adelaide. For the gauge fields, a mean-field improved plaquette plus rectangle action is used, while for the quark fields, the FLIC [2] action is implemented. In the present analysis we have imposed fixed boundary conditions in the time direction (U_t(x, n_t) = 0 for all x), and periodic boundary conditions in the spatial directions.
Although the simulations were performed with both n = 4 and n = 12 fattening sweeps, the improved gauge fields were found to be smooth after only 4 sweeps. Since the results with n = 4 sweeps exhibit slightly better scaling than those with n = 12 [2], we shall focus on the results with 4 smearing sweeps. The 12 sweep results lead to the same conclusions as presented in the following. Further details of the simulations are given in Ref. [2].
In Fig. 1 we show the N and N*(1/2^-) masses as a function of the squared pseudoscalar meson mass, m_π^2. The results of the new simulations are indicated by the filled symbols (filled circles are FLIC; filled diamonds are Wilson). For comparison, we also show results from earlier simulations with domain wall fermions (DWF) [6] (open triangles), and a nonperturbatively (NP) improved clover action at β = 6.2 [7]. The scatter of the different NP improved results is due to different source smearing and volume effects: the open squares are obtained by using fuzzed sources and local sinks, the open circles use Jacobi smearing at both the source and sink, while the open diamonds, which extend to smaller quark masses, are obtained from a larger lattice (32^3 × 64) using Jacobi smearing. The empirical masses of the nucleon and the lowest 1/2^- excitation are indicated by the asterisks along the ordinate. There is excellent agreement between the different improved actions for the nucleon mass, in particular between the FLIC, NP improved clover [7] and DWF [6] results. On the other hand, the Wilson results lie systematically low in comparison to these due to the large O(a) errors in this action [2]. A similar pattern is repeated for the N*(1/2^-) masses. Namely, the FLIC, NP improved clover and DWF masses are in agreement with each other, while the Wilson results again lie systematically low. A mass splitting of around 400 MeV is clearly visible between the N and N* for all actions, including the Wilson action, contrary to previous claims [6].
Figure 2 shows the ratio of the masses of the N*(1/2^-) and the nucleon. Once again, there is good agreement between the FLIC and DWF actions. However, the results for the Wilson action lie above the others, as do those for the anisotropic D234 action [5]. The D234 action has been mean-field improved, and uses an anisotropic lattice which is relatively coarse in the spatial direction (a ≈ 0.24 fm). This is an indication of the need for nonperturbative or fat link improvement.
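The masses entering such plots are extracted from the plateau of the effective mass of the parity-projected correlators. A minimal sketch of that standard extraction (illustrative only; the toy correlator below is synthetic, not data from this analysis):

```python
import numpy as np

def effective_mass(corr: np.ndarray) -> np.ndarray:
    """Effective mass (lattice units) from a parity-projected correlator.

    For G(t) ~ lambda^2 exp(-M t) at large Euclidean t, the ratio of
    neighbouring time slices gives M_eff(t) = ln[G(t)/G(t+1)], which
    plateaus at the ground-state mass of that parity channel.
    """
    return np.log(corr[:-1] / corr[1:])

# Toy correlator: ground state M*a = 0.5 plus a heavier excited state.
t = np.arange(16)
corr = 1.0 * np.exp(-0.5 * t) + 0.3 * np.exp(-1.2 * t)
print(effective_mass(corr))  # values approach 0.5 as t grows
```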
The mass splitting between the two lightest nearby N*(1/2^-) states (N*(1535) and N*(1650)) can be studied by considering the χ_1 and χ_2 interpolating fields in Eqs. (4) and (5). Recall that the "diquarks" in χ_1 and χ_2 couple differently to spin, so that even though the correlation functions built up from the χ_1 and χ_2 fields will be made up of a mixture of many excited states, they will have dominant overlap with different states, yielding different masses [3].
Figure 2. Ratio of the N*(1/2^-) and N masses. The FLIC and Wilson results are from the present analysis, with results from the DWF [6] and D234 [5] actions shown for comparison. The empirical N*(1535)/N mass ratio is denoted by the asterisk.
The results, shown in Fig. 3 for the FLIC action, indicate that indeed the N*(1/2^-) corresponding primarily to the χ_2 field (labeled "N*_2") lies systematically above the N*(1/2^-) associated primarily with the χ_1 field ("N*_1"). As has long been known, the positive parity χ_2 interpolating field ("N_2", which is also sometimes denoted by "N'(1/2^+)") does not have good overlap with the nucleon ground state [3], and corresponds to a state which lies around 400 MeV above the negative parity excitation of χ_1. There is little evidence that this state is the N*(1440) Roper resonance (the first 1/2^+ excitation of the nucleon). While it is possible that the Roper resonance may have a strong nonlinear dependence on the quark mass at m_π^2 < 0.2 GeV^2, arising from pion loop corrections, it is unlikely that this behavior would be so dramatically different from that of the N*(1535) as to reverse the level ordering obtained from the lattice. A more likely explanation is that the χ_2 interpolating field does not have good overlap with either the nucleon or the N*(1440), but rather with a (combination of) excited 1/2^+ state(s).
Recall that in a constituent quark model in a harmonic oscillator basis, the mass of the Roper is higher than the mass of the lowest P-wave excitation. The lattice data thus appear to be consistent with the naive quark model expectation at large values of m_q. Better overlap with the Roper resonance is likely to require more exotic interpolating fields.
CONCLUSION
We have presented the first results for the excited baryon spectrum from lattice QCD using an O(a^2) improved gauge action and an improved Fat Link Irrelevant Clover (FLIC) quark action in which only the links of the irrelevant dimension five operators are smeared. The simulations have been performed on a 16^3 × 32 lattice at β = 4.60, providing a lattice spacing of a = 0.125(2) fm. The analysis is based on a set of 50 configurations generated on the new Orion computer cluster at the CSSM, Adelaide.
Good agreement is obtained between the FLIC and other improved actions, such as the nonperturbatively improved clover [7] and domain wall fermion [6] actions, for the nucleon and its chiral partner, with a mass splitting of ~400 MeV. Our results for the N*(1/2^-) improve on those using the D234 [5] and Wilson actions. Despite strong chiral symmetry breaking, the results with the Wilson action are still quite reasonable, rendering earlier conjectures invalid. Using the two standard nucleon interpolating fields, we also confirm earlier observations [4] of a mass splitting between the two nearby 1/2^- states. We find no evidence of overlap with the 1/2^+ Roper resonance.
We have not attempted to extrapolate the lattice results to the physical region of light quarks, since the nonanalytic behavior of N*'s near the chiral limit is not yet as well understood as that of the nucleon [10,11]. It is vital that future lattice N* simulations push closer towards the chiral limit. On a promising note, our simulations with the 4 sweep FLIC action are already able to reach relatively low quark masses (m_q ~ 60-70 MeV). We have also not addressed the question of to what extent quenching may affect any of our results. We naturally expect the effects of quark loops to be relatively unimportant at the currently large quark masses, although quenching may well produce some artifacts as one nears the chiral limit.
For future work, we intend to use variational techniques to better resolve individual excited states, for instance, those corresponding to the N*_1 and N*_2 fields (using a 2 × 2 correlator matrix). In order to further explore the origin of the Roper resonance, more exotic interpolating fields involving higher Fock states, or other nonlocal operators, should be investigated. Finally, the present N* mass analysis will be extended in future to include N → N* transition form factors through the calculation of three-point correlation functions.
Figure 1. Masses of the nucleon and the lowest J^P = 1/2^- excitation. The FLIC and Wilson results are from the present analysis, with the NP improved clover [7] and DWF [6] results shown for comparison. The empirical N and N*(1535) masses are indicated by the asterisks.
Figure 3. Masses of the 1/2^+ and 1/2^- nucleons, for the FLIC action. The positive parity (N_1 and N_2) and negative parity (N*_1 and N*_2) states are constructed from the χ^p_1 and χ^p_2 interpolating fields, respectively. The empirical masses of the lowest two 1/2^± excitations of the nucleon are indicated by the asterisks. | 2018-12-01T16:35:44.848Z | 2001-07-01T00:00:00.000 | {
"year": 2002,
"sha1": "a2d59b27d426f5921ab6606261b9e60efbcdf017",
"oa_license": null,
"oa_url": "https://digital.library.unt.edu/ark:/67531/metadc722683/m2/1/high_res_d/789642.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "a2d59b27d426f5921ab6606261b9e60efbcdf017",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
252780071 | pes2o/s2orc | v3-fos-license | Shadow Celestial Amplitude
We study scattering amplitudes in the shadow conformal primary basis, which satisfies the same defining properties as the original conformal primary basis and has many advantages over it. The shadow celestial amplitudes exhibit locality manifestly on the celestial sphere, and behave like correlation functions in conformal field theory under the operator product expansion (OPE) limit. We study the OPE limits for the three-point shadow celestial amplitude, and for general $2\to n-2$ shadow celestial amplitudes from a large class of Feynman diagrams. In particular, we compute the conformal block expansion of the $s$-channel four-point shadow celestial amplitude of massless scalars at tree level, and show that the expansion coefficients factorize as products of OPE coefficients.
Introduction
Celestial holography is believed to be a concrete realization of holographic principles for quantum gravity in asymptotically flat spacetime (AFS) [1-5]. It relates scattering amplitudes of a quantum field theory or quantum gravity in a four-dimensional AFS to correlation functions of a celestial conformal field theory (CCFT) on the two-dimensional celestial sphere.
The Lorentz symmetry of the four-dimensional AFS is realized as the SL(2, C) conformal symmetry of the celestial sphere. The goal of celestial holography is to study the scattering amplitudes in AFS by using the techniques developed in conformal field theory (CFT). One of the most important achievements in celestial holography is recasting the soft theorems in flat space into Ward identities in two-dimensional CFTs. The currents associated to these Ward identities generate asymptotic symmetries in the four-dimensional spacetime [6-16] and can be re-organized into the w_{1+∞} algebra [17-19].
To manifest the SL(2, C) Lorentz symmetry in the scattering amplitudes, one needs to change the basis of asymptotic states from the standard plane-wave basis to the conformal primary basis [20-23]. By definition, the conformal primary basis must satisfy the equations of motion and transform covariantly under SL(2, C). The S-matrix elements in the conformal primary basis are referred to as celestial amplitudes. The conformal primary basis that is widely used in the literature for massless particles is built from the usual plane-wave basis followed by a Mellin transformation. However, in this basis, the coordinates on the celestial sphere relate directly to the solid angles of the flat space momentum. Thus the corresponding celestial amplitudes are highly constrained by four-dimensional kinematics and do not take the standard form of CFT correlation functions. For example, the four-point celestial amplitudes of massless scalars contain an unfamiliar delta-function δ(χ − χ̄) originating from momentum conservation. This distributional factor forces the celestial amplitude to live on the equator of the celestial sphere. In addition, depending on the assignments of the incoming and outgoing particles, the celestial amplitudes are only supported in disjoint intervals on the equator.
Finally, in terms of the massless conformal primary basis, the celestial amplitudes do not have a proper conformal block expansion. This can be seen by looking at the s-channel tree-level celestial amplitude of two incoming and two outgoing massless particles with one massive exchanged particle. The imaginary part of the corresponding scattering amplitude in the plane-wave basis factorizes into two three-point scattering amplitudes due to the optical theorem. This leads to a factorization in the conformal partial wave expansion of the celestial amplitudes [24,25].¹ However, since the integration kernel in the conformal partial wave expansion does not have poles located in the right half-plane, one does not get a conformal block expansion by closing the contour. Moreover, in the literature, the studies of conformal block expansions of celestial amplitudes are limited to Klein space [28-31] or three-dimensional space [32,33].
¹ Partial wave expansion of the celestial amplitudes is also studied in [26,27].
To fix these issues, we consider a different set of conformal primary wavefunctions for massless particles, which lie in a different branch of solutions to the two defining properties of conformal primary wavefunctions [1], i.e. they satisfy the equations of motion and transform covariantly under SL(2, C). It turns out that, up to a constant factor, these conformal primary wavefunctions are equivalent to the shadow transformations [34,35] of the original conformal primary wave functions.² We will refer to this basis as the shadow conformal primary basis. Expanding the scattering amplitude in the shadow conformal primary basis, we define the shadow celestial amplitudes, which can be obtained by performing the shadow transformation on all external operators of the celestial amplitudes.³ The shadow celestial amplitudes resolve all the abovementioned issues and lead to standard correlation functions of CFTs. Specifically, the shadow celestial amplitudes of four massless particles no longer contain δ(χ − χ̄) and are defined on the entire celestial sphere. In addition, the shadow celestial amplitudes have well-behaved OPE limits. For scattering amplitudes with n external massless real scalars, we consider the OPE limit by making the celestial coordinates of the first two incoming particles approach each other. For amplitudes from a large class of Feynman diagrams, we find that the shadow celestial amplitudes factorize as expected of n-point correlation functions in a CFT. Using the generalized optical theorem, we also obtain the correct factorization of the imaginary part of shadow celestial amplitudes for any Feynman diagram. What is more, we compute the 4-point shadow celestial amplitude involving four external massless scalars and one exchanged massive scalar and derive its conformal block expansion in the 12 ↔ 34 channel. We find that the coefficients appearing in the conformal block expansion give the correct OPE coefficients obtained from the corresponding 3-point shadow celestial amplitudes, matching the results from the OPE analysis. This paper is organized as follows. In Section 2, we review the celestial amplitudes, discuss their disadvantages, and introduce the shadow celestial amplitudes. In Section 3, we analyze the OPE limits of n-point shadow celestial amplitudes involving n external real massless scalars and show that the OPE limits are well-defined for the shadow celestial amplitudes.
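For readers unfamiliar with the shadow transformation invoked above, its standard two-dimensional form for a scalar primary (quoted from the general CFT literature; the normalization used in this paper's Appendix A may differ) is

\widetilde{\mathcal O}_{2-\Delta}(z,\bar z)\;\propto\;\int d^2 z'\,\frac{1}{|z-z'|^{2(2-\Delta)}}\;\mathcal O_{\Delta}(z',\bar z'),

which maps a primary of dimension \Delta to one of dimension 2-\Delta and commutes with SL(2, C).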
In Section 4, we work out the conformal block expansion of the 4-point shadow celestial amplitude involving four external massless scalars and one exchanged massive scalar, and we find complete agreement between the block expansion coefficients and the OPE coefficients. In Section 5, we conclude our work and point out a few future directions. In Appendix A, we review the generalized optical theorem, the shadow transformation, and conformal partial waves, which will be used in later parts of this paper.
2 Shadow conformal primary basis
2.1 Review on the celestial amplitudes
Celestial amplitudes are obtained by expanding the position-space amplitudes with respect to the conformal primary wavefunctions [20] instead of the plane waves, i.e. as in (2.1),⁴ where M(x_j) is the scattering amplitude in position space. The conformal primary wave functions φ^±_Δ(z; x) for massless and massive scalars with mass m are given by (2.2) and (2.3), respectively. Here z and z̄ are coordinates on the celestial sphere and G_Δ(z, z̄; p̂) is the bulk-to-boundary propagator. The coordinates (z, z̄) on the celestial sphere are related to the massless on-shell momenta q^μ through (2.4) and to the massive on-shell momenta p^μ through (2.5). In terms of p̂ in (2.5), the bulk-to-boundary propagator takes the form (2.6).⁵
⁴ Throughout this paper, O_i(z_i) should be understood as O_i(z_i, z̄_i). We use this abbreviation to simplify the notation.
⁵ In this paper, we use the mostly plus metric in four-dimensional flat space, i.e., g_AB = diag(−1, +1, +1, +1).
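The display equations (2.2)-(2.6) were lost in extraction. For orientation, the standard celestial-holography expressions they refer to (reconstructed from the literature; phases and normalizations may differ from the original) are

\varphi^{\pm}_{\Delta}(z;x)=\int_0^{\infty}d\omega\,\omega^{\Delta-1}\,e^{\pm i\omega\,\hat q(z)\cdot x-\epsilon\omega}=\frac{\Gamma(\Delta)}{(\epsilon\mp i\,\hat q(z)\cdot x)^{\Delta}},\qquad \hat q^{\mu}(z)=\big(1+|z|^2,\;z+\bar z,\;-i(z-\bar z),\;1-|z|^2\big)

for massless scalars, and \varphi^{\pm}_{\Delta}(z;x)\propto\int_{H_3}[d\hat p']\,G_{\Delta}(z,\bar z;\hat p')\,e^{\pm im\,\hat p'\cdot x} for massive scalars, with bulk-to-boundary propagator G_{\Delta}(z,\bar z;\hat p)=(-\hat p\cdot\hat q(z))^{-\Delta}.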
In terms of the conformal primary basis (2.2), the celestial amplitudes in (2.1) with n_1 massless scalars and n_2 massive scalars can be re-expressed in the form (2.7), where M(q_i, p'_i) is the scattering amplitude in the plane-wave basis.
Here m is the mass of the exchanged massive scalar. This leads to the expression for the function f(χ) with χ ≥ 1 that enters (2.12). From (2.12), we note that A^{Δ_i}_{12→34}(z_i) contains a delta-function δ(χ − χ̄), which forces the four-point celestial amplitudes to live on the equator of the celestial sphere and makes the structure of correlation functions in CCFTs very different from that in standard CFTs.
Moreover, (2.12) holds for the 12 → 34 kinematics and is only valid when χ ≥ 1. The other two kinematics, 14 → 23 and 13 → 24, are defined in the distinct intervals χ ≤ 0 and 0 ≤ χ ≤ 1 on the equator. This also makes the four-point celestial amplitudes exotic. Finally, A^{Δ_i}_{12→34}(z_i) does not have a proper conformal block expansion. We take the s-channel scattering amplitude (the first term in (2.12)) as an example. As we will see in Section 4, the partial wave expansion of the s-channel celestial amplitude A^{Δ_i}_s takes the form (2.14),⁷ where h = h̄ = (1 + iλ)/2 and the spectral density is given in (2.15). When χ ≤ 1, closing the contour to the right-half λ-plane leads to a vanishing conformal block expansion because the integrand in (2.14) does not have poles located in the right-half λ-plane. On the other hand, when χ > 1 the integrand in (2.14) does not decay to zero when Re(λ) approaches infinity and we cannot close the contour. This reflects the fact that
A different conformal primary basis
It is easy to see that the above disadvantages of the celestial amplitudes are due to the fact that the conformal primary wave functions (2.2) for massless scalars are constructed from the plane waves by performing an integral only over the energy ω. Thus, we consider a different set of conformal primary wave functions for massless scalars, given in (2.16). Here, q^μ is defined in (2.4) and G_Δ(q; q') takes the same form as the bulk-to-boundary propagator (2.6) with p̂' replaced by q'.
Using the identity q'(z')·q̂(z) = −2ω|z' − z|^2, the conformal primary wave functions (2.16) can be written as (2.19), which are proportional to the shadows of the original conformal primary wave functions φ^±_Δ(z; x) that were previously studied in [20]. We dub it the shadow conformal primary basis. The shadow conformal primary wave functions (2.16) obey the massless Klein-Gordon equation and transform covariantly under the conformal transformation, since the shadow transformation is conformally covariant. In other words, they satisfy the two defining properties for conformal primary wave functions given in [1,20]. On the other hand, one does not obtain a new wave function by performing the shadow transformation on the massive conformal primary basis (2.3), because the result of the shadow integral takes the same form as (2.3) up to a change of conformal dimension from Δ to 2 − Δ [20].
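The identity used above follows directly from the null parametrization of q̂^μ(z); with the mostly plus metric of footnote 5, a short check (this derivation is ours, added for clarity) gives

\hat q(z)\cdot\hat q(z')=-(1+z\bar z)(1+z'\bar z')+(z+\bar z)(z'+\bar z')-(z-\bar z)(z'-\bar z')+(1-z\bar z)(1-z'\bar z')=-2|z-z'|^2,

so that for q'^{\mu}=\omega\,\hat q^{\mu}(z') one has q'(z')\cdot\hat q(z)=-2\omega|z'-z|^2.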
Using the shadow conformal primary wave functions (2.16), we define the shadow celestial amplitudes as in (2.20), where k'_j can be either q'_j or p̂'_j, depending on whether the particle is massless or massive.
Translation symmetry
Unlike the scattering amplitudes in the plane-wave basis, translation symmetry in the shadow celestial amplitudes A^{Δ_i}(z_i) becomes obscure, while Lorentz symmetry is manifest. In this subsection, we discuss how the translation generators P̂^μ act on the shadow celestial amplitude. The action of the translation generators on massive scalars was studied in [39].
Specifically, P̂^μ acts on a massive scalar in CCFT as a differential operator P^μ, where the differential operator P^μ_a acting on massive scalars takes the form (2.22), with ε_k = ±1 labeling incoming and outgoing states, respectively.
To find the action of P̂^μ on the massless shadow celestial amplitudes (2.20), we start with the action in the plane-wave basis. Finding the action of P̂^μ on the massless shadow celestial amplitudes is then translated into finding a differential operator P^μ_a that reproduces this plane-wave action. This can be realized by the differential operator given in (2.26). Thus, the translation symmetries amount to the differential equations (2.28), where P^μ_a is given in (2.22) when it acts on massive scalar operators and in (2.26) when it acts on massless scalar operators.
OPE behaviour of the shadow celestial amplitudes
In this section, we study the OPE behaviour of the shadow celestial amplitudes of 2 incoming massless real scalars, labeled by 1 and 2, and n − 2 outgoing massless real scalars. A similar analysis works for the same shadow celestial amplitudes with the incoming and outgoing particles swapped. The shadow celestial amplitudes of interest can be written in an explicit integral form,⁸ where we have used (2.18). After a change of variables, and using a further identity, we arrive at (3.2). In the following subsections, we will start with (3.2) and study the OPE behaviours for three-point and higher-point amplitudes.
OPE analysis for three-point shadow celestial amplitudes
We first focus on the three-point shadow celestial amplitudes. Since scattering amplitudes involving three massless scalars cannot satisfy momentum conservation, we consider the shadow celestial amplitudes for two incoming massless scalars and one outgoing massive scalar. In this case, the scattering amplitudes take a simple contact form, where we define p = q'_1 + q'_2. Plugging this expression into (3.2) and changing integral variables from p^μ to M p̂^μ, we can carry out the integral. In Appendix B, we compute the integral over q̂'_2 and expand the result around small q̂_12 ≡ −(1/2) q̂_1·q̂_2. Substituting the expansion (B.6), rewriting the delta-function appropriately, and performing the integral over M and p̂'_3, one can immediately recognize that the second line of the result is exactly the three-point AdS Witten diagram W_{Δ_1+n,Δ_2+n,Δ_3}(q̂_i). In the short distance limit q̂_1 → q̂_2, W_{Δ_1+n,Δ_2+n,Δ_3}(q̂_i) behaves as in (3.9), and we arrive at (3.10). Finally, computing the summation over n gives a closed-form result, with a coefficient that we record in (3.12). When Δ_1 + Δ_2 − Δ_3 = −2n + 2iν with ν ∈ R, we use the formula (3.13), which leads to (3.15). For later convenience, we define the OPE coefficient C_{Δ_1,Δ_2,Δ_1+Δ_2+2n}(m) as in (3.16), with n ∈ Z_{≥0}.
From (3.15) and (3.16) we see that, apart from the delta-function δ(ν), the OPE limit of the three-point shadow celestial amplitude is finite. In the following subsections, we will focus on the finite parts in the OPE limit and study the OPE behaviour of n ≥ 3-point shadow celestial amplitudes.
OPE analysis from the generalized optical theorem
The leading order OPE behaviour can be obtained by replacing all q̂_1 in (3.2) by q̂_2, leading to a limiting expression as q̂_1 → q̂_2, where Y'^μ ≡ 2(−q̂_2·p̂)p̂^μ − q̂_2^μ. For n ≥ 3-point shadow celestial amplitudes, the OPE limit can be studied using the generalized optical theorem. We assume M_{2→n−2}(q'_1, q'_2, ..., q'_n) is at tree level. Using the generalized optical theorem (A.9) and noting that only single-particle intermediate states can appear in tree diagrams, we obtain the corresponding limit.⁹ Here X labels all possible physical single-particle states that satisfy the on-shell condition. With the help of the delta function, we evaluate the integral over p, and after evaluating the integral over q̂'_2 by using (B.7), we get (3.20).
⁹ Here and throughout this paper, we assume that the conformal dimensions Δ_i are real when taking the imaginary part of A^{Δ_i}_{2→n−2}(z_i).
Here the prefactor is the inverse of the coefficient of the two-point massive celestial amplitude [28], and C_{Δ_1,Δ_2,Δ_1+Δ_2}(m_X) is given in (3.16). Some comments are in order. The leading OPE behaviour (3.20) works only when the (n − 1)-point shadow celestial amplitudes A^{Δ_1+Δ_2,Δ_i}_{X→n−2}(m_X; z_i) are well-defined. This demands that the integrals appearing in the computation of the (n − 1)-point shadow celestial amplitudes A^{Δ_1+Δ_2,Δ_i}_{X→n−2}(m_X; z_i) must converge. For example, in the case of n = 4, the leading OPE behaviour (3.20) works only when the real parts of the conformal dimensions satisfy the corresponding convergence condition.
OPE analysis for a special class of Feynman diagrams
In the previous subsection, we used the generalized optical theorem to decompose an n-point scattering amplitude into a 3-point and an (n − 1)-point scattering amplitude. The shortcoming is that we only obtained the OPE limit of the imaginary part of the shadow celestial amplitude A^{Δ_i}_{2→n−2}. In this subsection, we focus on the special class of Feynman diagrams shown in Figure 1, which allows us to derive a formula for the OPE limit of the shadow celestial amplitude without taking the imaginary part. We note that the scattering amplitudes of the class of Feynman diagrams in Figure 1 take a factorized form, where we define p = q'_1 + q'_2. Plugging this expression into (3.2) and changing integral variables from p^μ to M p̂^μ with p̂·p̂ = −1 then leads to a limiting expression as q̂_1 → q̂_2. Performing the integral over q̂'_2, we obtain (3.23). Thus, for the particular class of diagrams in Figure 1, the leading OPE limit of an n-point massless shadow celestial amplitude is given by a superposition of (n − 1)-point shadow celestial amplitudes over a range of masses M ∈ [0, ∞). Equation (3.23) leads to the leading order operator product expansion. We stress again that (3.23) works only when the (n − 1)-point shadow celestial amplitudes are well-defined.
OPE analysis for four-point shadow celestial amplitudes
As an example, we consider the OPE behaviour of the four-point shadow celestial amplitude A^{Δ_i}_{2→2} involving two incoming and two outgoing massless scalars with a massive scalar exchange. Since only the s-channel amplitude has an imaginary part and belongs to the particular class of diagrams shown in Figure 1, we focus on the s-channel amplitude and take the OPE limit q̂_1 → q̂_2. The OPE limit q̂_3 → q̂_4 can be derived in a similar way. In this specific case, (3.20) takes a form in which m denotes the mass of the exchanged operator, while (3.23) becomes the corresponding limit as q̂_1 → q̂_2. We note that in the present case A^{Δ_1+Δ_2,Δ_i}_M(z_i) can be written explicitly; rescaling ω_3 → M ω_3 and ω_4 → M ω_4 brings it to the form (3.29). By further expanding (3.29) in a power series of q̂_34 and using (3.13), we find (3.30). In the next section, we will compute the s-channel four-point shadow celestial amplitude and derive its conformal block expansion, and we will find agreement between the conformal block expansion and the OPE expansion (3.30) when the real parts of the Δ_i satisfy the appropriate convergence conditions.
4 Examples of shadow celestial amplitudes
4.1 Three-point shadow celestial amplitudes
We first consider the shadow celestial amplitudes for two incoming massless scalars and one outgoing massive scalar. The celestial amplitude A^{Δ_i}_{2→1} at tree level is given in (4.1) [24], where B is the beta-function. With the help of (A.25), we compute the three-point shadow celestial amplitude, obtaining (4.2), where the coefficient C_{Δ_1,Δ_2,Δ_3} is given in (3.12) with T_{2→1}(m^2) = −g, and z_ij ≡ z_i − z_j, z̄_ij ≡ z̄_i − z̄_j. We mention here that the integral in the shadow transformation from (4.1) to (4.2) converges only when Re(Δ_1 + Δ_2 − Δ_3) > 0. However, with (4.2) in hand, we can analytically continue the conformal dimensions to the whole complex plane excluding the poles. The constraints imposed by the translation symmetries (2.28) on the shadow celestial amplitude take the form of a differential equation, and it is then straightforward to confirm that the coefficient (3.12) satisfies the resulting equality.
Four-point shadow celestial amplitudes
In this subsection, we compute the four-point shadow celestial amplitude A^{Δ_i}_{12→34} of four external massless scalars and one exchanged massive scalar, and derive its conformal block expansion in the 12 − 34 channel. To achieve this, we start with the corresponding celestial amplitude A^{Δ_i}_{12→34}, with the explicit formulae given in (2.8), (A.18) and (2.12), and perform the shadow transformation on each external operator.
Shadow transformation for O_1
To get the shadow celestial amplitude, we first perform the shadow transformation on the operator O_1 and denote the result by A^{Δ_i}_s (with only O_1 shadowed).¹⁰ Using the conformal symmetry, we can fix three of the four coordinates, leading to the configuration (4.4). In this coordinate configuration, I_{12−34} in (A.18) takes a simplified form, where Δ_i = 2h_i. Together with (2.8) and (2.12), we can simplify the shadow integral, where Δ̃_1 = 2 − Δ_1 and h̃_1 = 1 − h_1, and we define new conformal cross-ratios χ' and χ̄'. The resulting integral can be evaluated by using the integral representation of the Appell function F_1, yielding (4.9). Using the conformal symmetry, one can unfix the coordinates (4.4) and reach the final expression, where we rename all of the primed (tilde) variables by the corresponding unprimed (untilde) variables to simplify the notation, and where G_s(χ) is the resulting function of the cross-ratio; after renaming Δ̃_1, h̃_1 as Δ_1, h_1, the parameter in this convention is β = −Δ_1 + Δ_2 + Δ_3 + Δ_4 − 2.
To find the conformal block expansion in the 12 − 34 channel, we use the Burchnall-Chaundy expansion for the Appell function, with the coefficients defined in (4.14). Finally, after using a further hypergeometric identity, we get the conformal block expansion of G_s(χ) in the 12 − 34 channel, Eq. (4.16). Here, we introduce the notation h_{i+j} ≡ h_i + h_j, and the conformal block in the s-channel takes its standard form. To get the full shadow celestial amplitude, we still need to perform the shadow transformations on the remaining three operators in A^{Δ_i}_s. However, computing the shadow integral as we did for O_1 now becomes extremely hard. Instead of computing the shadow integral directly, in this subsection we use the conformal partial waves reviewed in Appendix A.2 to get the conformal block expansion. Specifically, we first derive the conformal partial wave expansion of A^{Δ_i}_s from the block expansion (4.16). Then we compute the conformal partial wave expansion of the full shadow amplitude by virtue of (A.28). Finally, we derive the desired conformal block expansion by closing the contour to the right-half plane.
From the block expansion (4.16) and the symmetry property (A.19), we can derive the spectral density ρ^{Δ_i}_{h,h̄} for the shadow amplitude, where C_{Δ_i,Δ_j,Δ}(m) is given in (3.12). The poles of (4.19) located in the right-half Δ-plane are at Δ = Δ_1 + Δ_2 + 2n and Δ = Δ_3 + Δ_4 + 2n with n = 0, 1, 2, .... This leads to the conformal block expansion (4.20) of the shadow celestial amplitude A^{Δ_i}_s(z_i), with expansion coefficients C^{Δ_i}_Δ. As we can see from (4.20), the conformal block expansion of A^{Δ_i}_s contains two series of scalar exchange operators with conformal dimensions Δ = Δ_1 + Δ_2 + 2n and Δ = Δ_3 + Δ_4 + 2n, respectively. It is then natural to expect that these exchange operators are double-trace operators of a schematic normal-ordered form.¹¹
¹¹ Using (A.28), one could also obtain the partial wave expansion (2.14) of the celestial amplitudes A^{Δ_i}_s by simply removing the factor 2^{Δ_1} a^{Δ_1,Δ_2}_Δ in (4.18). It is also easy to check that the partial wave expansion of Im A^{Δ_i}_s obtained in this way agrees with the result predicted by the optical theorem [24,25].
Although we cannot see the exchange operators with conformal dimension Δ = Δ_3 + Δ_4 + 2n by looking at the OPE limit q̂_1 → q̂_2 at leading order, these exchange operators can be recovered by studying the OPE limit q̂_3 → q̂_4. This makes the four-point shadow celestial amplitude A^{Δ_i}_{2→2} a bit special, since we can implement our OPE analysis for both incoming and outgoing operators and recover all the exchange operators appearing in the conformal block expansion (4.20).
Outlook
In this paper, we proposed the shadow conformal primary basis (2.16) for massless scalar particles and defined the shadow celestial amplitude in (2.20). The shadow conformal primary basis can be obtained from the conformal primary basis by performing shadow transformations as in (2.19). In terms of the shadow conformal primary basis, the shadow celestial amplitude defined in (2.20) behaves much more like a standard CFT correlator. We studied the constraints from translation symmetry on the shadow celestial amplitudes. Moreover, based on the factorization of the scattering amplitudes in the plane-wave basis, we further showed that the shadow celestial amplitudes enjoy a nice factorization in the leading OPE limit. Since the celestial coordinates in the shadow celestial amplitudes are not directly related to the momenta of the particles, kinematic constraints like momentum conservation do not lead to any subtlety when we study the OPE behaviour of the shadow celestial amplitude. We checked the OPE factorization by focusing on the tree-level four-point shadow celestial amplitude of four massless external scalars and one massive exchanged scalar. Several interesting open questions ensue from our work.
• In this paper, we only studied the OPE limit at leading order, in the sense that we set q̂_1 = q̂_2 directly. It would be of great interest to study the OPE limit at sub-leading orders and explore possible exchange operators. As we have seen, all of the exchange operators in the four-point conformal block expansion (4.20) can be uncovered by looking at the leading order OPE limits q̂_1 → q̂_2 and q̂_3 → q̂_4. However, we expect that the leading order OPE is not enough to generate all exchange operators in higher-point amplitudes. To get the exchange operators in the higher-point conformal block expansion, one must go beyond the leading OPE.¹²
• The OPE analysis in Section 3.2 works only for scattering amplitudes at tree level.
When the scattering amplitude M_{2→n} at loop level is considered, the intermediate states appearing in the generalized optical theorem also include multi-particle states.
It would be interesting to see how to translate the factorization of the scattering amplitude in the plane-wave basis into the factorization of the corresponding shadow celestial amplitude when there exist multi-particle states. Moreover, our OPE analysis in Section 3.3 is restricted to a particular class of Feynman diagrams depicted in Figure 1.
Extending that analysis to more generic diagrams is necessary to fully understand the OPE behaviour in CCFTs.
• Another avenue would be to explore the flat space correspondence of the double-trace operators appearing in the conformal block expansion (4.20). It is well-known that double-trace operators in AdS correspond to two-particle states. Although some recent progress has been made to relate celestial amplitudes with AdS Witten diagrams [62,63], it is unclear if this statement still holds in CCFTs. Thus it would be of great interest to study these double-trace operators and explore their holographic dual in flat space.
• Finally, it would be interesting to study the Mellin amplitudes associated with the shadow celestial amplitudes. Celestial Mellin amplitudes in three dimensions have been studied in [33]. For the usual four-dimensional celestial amplitudes, defining the corresponding celestial Mellin amplitudes is subtle due to the existence of the delta-function δ(χ − χ̄). In contrast with the celestial amplitudes, the shadow celestial amplitude defined in this paper no longer has this distributional factor and takes the standard form of a CFT correlation function. Thus, following the definition of AdS Mellin amplitudes, one can define Mellin amplitudes associated with the shadow celestial amplitudes, and the techniques developed for AdS Mellin amplitudes can be used to study the shadow celestial Mellin amplitudes.
¹² Recent progress on the computation of higher-point conformal blocks can be found in the literature.
Using the integral representation (A.14) of the conformal partial waves, we find (A.24). The integral over z'_1 can be computed by a standard conformal integral.
B The conformal integral
In this appendix, we will compute the following conformal integral I(q 1 ,q 2 ,p) ≡ D 2q′ where D 2q′ = d 2 z ′ , Y µ = −2(q 1 ·p)p µ −q µ 1 . Using the Feynman/Schwinger parameterization, I(q 1 ,q 2 ,p) can be written as We evaluate the integral overq ′ 2 by noting that [64] D 2q′ which holds for any timelike vector Q µ . This can be seen by going to the rest frame of Q µ , i.e. we choose Q µ = (1, 0, 0, 0). In this frame, the above integral becomes | 2022-10-11T01:16:25.236Z | 2022-10-10T00:00:00.000 | {
"year": 2022,
"sha1": "1640af39ad5aa281c00dfad20cae9edfb8a2a129",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "41063e7a8b87b46ea3ee57596177fde4d3d92b81",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
134247711 | pes2o/s2orc | v3-fos-license | It’s Time for a New Agriculture System
This summer, when we look at the mouth of the mighty Mississippi River, we might see a vast dead zone surpassing the area of the state of New Jersey. Around the world, dead zones have been shown to be consistently associated with crop fertilizer losses and soil erosion. Hypoxic and anoxic zones are caused by excessive nutrient pollution, primarily from crop agriculture. In the corn belt state of Illinois, for example, mean application rates of nitrogen (N), phosphorus (P), and potassium (K) are about 150, 50, and 50 kg/ha, respectively. When we consider the cost of the conventional maize production system, we must look not only at the dead zone and the over 50% reduction in brown shrimp catch, but also at effects on the soil, with their economic, environmental, and health consequences.
In North America, the biggest nutrient input in the maize-dominated cropping system is ammoniated nitrogen. Ammoniated fertilizer comprises the bulk of the fertilizer input and has the largest carbon footprint. In areas where seasonal precipitation exceeds crop water use, leaching of nutrients is associated with soil acidification, soil aging, soil organic matter loss, and soil erosion [1] (Figure 1).
Figure 1
The grandfather of long-term farming trials, the University of Illinois Morrow Plots, has shown that maize production methods with the full fertilizer recommendation applied result in substantial losses of soil organic matter and contribute to accelerated soil acidification and aging [2].
Conventional maize monoculture has a large carbon footprint related not only to the use of fertilizer inputs, principally N, then P and K, but also to herbicide, insecticide, and lime. Agrichemical inputs have side effects not restricted to their direct cost and environmental effects; they can also lead to long-term deterioration of the soil system itself. The energy cost of agricultural inputs can be dwarfed by their potential to compromise soil quality by reducing levels of soil organic matter and acidifying the soil, which represents a type of double-edged sword [3].
The Rodale Institute Farming Systems Trial has taken a different approach from conventional dependence on N fertilization. By design, it demonstrated that N fertilization from ammoniated sources is not essential and can be eliminated. In addition, other agrichemical inputs are not necessary when the soil can be improved. In biologically based systems, using legume-based crop rotation, cover cropping, and organic amendment weans the plants from a chemical dependence to a dependence on the soil itself [4-6].
While the conventional maize and soybean rotation did not lose organic matter, unlike the maize monoculture in Illinois and elsewhere in the Midwest, the biologically based farming systems were found not only to reduce the carbon footprint of the production practices but also, more importantly, to substantially increase soil organic matter [7].
At The Ohio State University, a long-term maize no-till experiment started in the early 1960s has shown that converting from full tillage to no-till can lead to 330 kg/ha/yr of carbon sequestration [8]. In relation to these encouraging results, we must consider that fertilizer and pesticide inputs are equal or greater under these conventionally run no-till systems. The Rodale approach achieved a lower carbon footprint and substantially higher carbon sequestration from cover cropping, crop rotation, and organic amendment in the Rodale Farming Systems Trial and Compost Utilization Trial. Rattan Lal, a noted expert in carbon sequestration, suggested that if no-till systems were extended across the global tillable acreage, up to 10% of current greenhouse gas emissions could be neutralized. While the OSU results put the no-till sequestration potential at 330 kg/ha/yr to neutralize 10%, the Rodale work suggests that a cover crop can contribute from 600 to 1,200 kg C per ha per year, while compost amendment could provide sequestration rates of 1,100 to 2,200 kg C per ha per year. Work should now concentrate on the potential for additive and synergistic interactions of practice combinations and systems.
If we start from the no-till effect of 10% at 330 kg/ha/yr, the use of cover cropping could result in about an 18 to 36% mitigation potential, and compost about 34 to 68%. These numbers do not include the acreage of pastures, which are well known to have extremely high carbon sequestration potential when managed properly and whose area can exceed the tillable acreage by a factor of two, nor do they consider the potential of forests.
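The percentages above amount to scaling Lal's 10% no-till estimate linearly with the sequestration rate; a quick sketch of that arithmetic (the baseline and ranges are the figures quoted in the text, while the linear scaling itself is our simplifying assumption):

```python
BASELINE_RATE = 330.0       # kg C/ha/yr sequestered by no-till (OSU trial)
BASELINE_MITIGATION = 10.0  # % of emissions neutralized at that rate (Lal)

def mitigation_pct(rate_kg_c_per_ha_yr: float) -> float:
    """Scale the 10% no-till estimate linearly with the sequestration rate."""
    return BASELINE_MITIGATION * rate_kg_c_per_ha_yr / BASELINE_RATE

for label, lo, hi in [("cover crop", 600, 1200), ("compost", 1100, 2200)]:
    print(f"{label}: {mitigation_pct(lo):.0f}% to {mitigation_pct(hi):.0f}%")
# Prints roughly 18-36% and 33-67%, matching the quoted ranges up to rounding.
```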
Until recently, most of the energetics and greenhouse gas work has focused on the carbon footprint of individual practices or of technological packages and protocols for individual crop production, such as monoculture maize.
The potential of using our soil as a carbon sink may not be fully appreciated and represents a much more profound opportunity than the traditional focus on the carbon footprint of a practice or practices alone.
Besides the traditional practice of using a production footprint approach alone, we suggest combining the bigger goal of improving the soil base condition through soil carbon sequestration with reducing and eliminating agrichemical inputs when calculating a net result. When input footprint costs are subtracted from positive carbon sequestration, a net bottom-line carbon sequestration can be calculated, which would be an improvement on the current calculation of the input footprint alone and would give a quantitative estimate of the net effect.
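A minimal sketch of the proposed net accounting follows (the numeric values are illustrative placeholders, not measured results from the trials cited here):

```python
def net_sequestration(gross_seq_kg: float, input_footprint_kg: float) -> float:
    """Net C balance (kg C/ha/yr): soil sequestration minus C cost of inputs."""
    return gross_seq_kg - input_footprint_kg

# Illustrative comparison of two hypothetical systems (placeholder numbers).
conventional = net_sequestration(gross_seq_kg=0.0, input_footprint_kg=250.0)
biological = net_sequestration(gross_seq_kg=900.0, input_footprint_kg=50.0)
print(f"conventional: {conventional:+.0f} kg C/ha/yr")  # negative: net source
print(f"biological:   {biological:+.0f} kg C/ha/yr")    # positive: net sink
```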
The use of biological inputs in cropping systems can eliminate the majority of the carbon footprint of maize. The biological inputs provide the necessary nitrogen from the soil by including soybean, legume forages, and winter covers in the cropping scheme. While conventional systems need the same or increasing inputs over time, the biologically based system instead depends on improved soil conditions and obtains its nitrogen through biological nitrogen fixation, drawing on the vast atmospheric reservoir.
With increasing consensus on the challenge of a changing climate, soil organic matter works to stabilize crop yields against the principal constraint of periodic drought. Since improved soil condition increases soil percolation, water retention, and water use, the Rodale Farming Systems Trial clearly shows superior yields of biologically based systems compared with conventional maize and soybean cropping with agrichemicals in drought years.
In terms of the environment, energy use, economics, sustainability, and addressing the global issues of greenhouse gases and mitigating the negative effects of climate change, we need to focus our production systems on the soil and substitute biological for agrichemical inputs. In developing these new-generation practices, net carbon sequestration takes the soil sequestration value and subtracts the carbon footprint of the practices. Concentrating on net sequestration is highly recommended to better estimate the impact of our systems on the soil, energy, and the environment. | 2019-04-27T13:07:55.389Z | 2017-08-29T00:00:00.000 | {
"year": 2017,
"sha1": "f71bf285b9a653459611edeb0c9b8758e6b06672",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.19080/artoaj.2017.11.555804",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "1b11ec9b6bafb5708b71f8f87dc6fc0d3d35e9ba",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
252271933 | pes2o/s2orc | v3-fos-license | Role of Astaxanthin as a Stimulator of Ovarian Development in Nile Tilapia (Oreochromis niloticus) and Its Potential Regulatory Mechanism: Ameliorating Oxidative Stress and Apoptosis
A 60-day feeding experiment was performed to evaluate the effect of dietary astaxanthin on gonad development, the antioxidant system, and its inherent mechanism in female Nile tilapia (Oreochromis niloticus). Fish were fed with diets containing astaxanthin at five levels [0 mg/kg (control), 50 mg/kg, 100 mg/kg, 150 mg/kg, and 200 mg/kg]. At the end of the experiment, the group fed with 150 mg/kg astaxanthin showed significantly increased specific growth rate, feed utilization, viscerosomatic index, and hepatosomatic index compared with the control group (P < 0.05). Gonad development was stimulated in the groups fed with 100 mg/kg and 150 mg/kg astaxanthin, and their gonadosomatic index and egg diameter were significantly higher than those of the control group and the group fed with 200 mg/kg astaxanthin. The ovaries of females in the groups fed with 100 mg/kg and 150 mg/kg astaxanthin were fully developed, the eggs were gray-yellow and uniform in size, and a large number of oocytes developed to stages IV and V. The serum levels of 17β-estradiol, follicle-stimulating hormone, and luteinizing hormone were significantly higher in the groups fed with 100 mg/kg and 150 mg/kg astaxanthin than in the group fed with 200 mg/kg astaxanthin. Compared with the control and the other groups, the group fed with 150 mg/kg astaxanthin showed significantly higher transcript levels of genes encoding hormone receptors and higher catalase activity in ovarian tissues, lower malondialdehyde content, decreased apoptosis (reduced granulosa cell apoptosis and lower transcript levels of bax and caspase-3), and reduced follicular atresia. Gene Ontology analyses revealed that cell division and the cell cycle were enriched with differentially expressed genes in the group fed with 150 mg/kg astaxanthin, compared with the control group. We concluded that dietary astaxanthin at a concentration of 150 mg/kg activates follicle development by inhibiting expression of mapk1 (involved in MAPK signaling) and increasing the expression of genes involved in oocyte meiosis (chp2, ppp3ca, map2k1, and smc1a1) and progesterone-mediated oocyte maturation (igf1, plk1, and cdk1). In conclusion, female Nile tilapia fed with 150 mg/kg astaxanthin showed increased growth, reduced oxidative stress in ovarian tissue, lower levels of cell apoptosis, and improved oocyte development.
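For readers unfamiliar with the indices reported above, the standard aquaculture formulas behind them can be sketched as follows (generic definitions from the field; the example numbers are placeholders, not data from this trial):

```python
import math

def sgr(w_initial_g: float, w_final_g: float, days: int) -> float:
    """Specific growth rate, % body weight per day."""
    return 100 * (math.log(w_final_g) - math.log(w_initial_g)) / days

def somatic_index(tissue_weight_g: float, body_weight_g: float) -> float:
    """Gonado-/hepato-/viscerosomatic index: tissue weight as % of body weight."""
    return 100 * tissue_weight_g / body_weight_g

print(sgr(w_initial_g=25.0, w_final_g=80.0, days=60))          # example SGR
print(somatic_index(tissue_weight_g=2.1, body_weight_g=78.0))  # example GSI
```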
Introduction
Astaxanthin is a small molecule with unsaturated hydroxyl and ketone groups [1]. It was the only pigment additive allowed in the feed additive catalog [2]. Astaxanthin can combine with myoglobin in the body to make fish skin brilliantly colored, and it regulates the deposition of body pigment [3]. Many studies have shown that dietary supplementation with astaxanthin affects the pigmentation and nonspecific immune function of fish [2,4-6]. Atlantic salmon (Salmo salar) fed with a diet containing 88.6 mg/kg astaxanthin accumulated astaxanthin in the muscle to a concentration of 2.0-2.5 μg/kg, compared with 0.5-0.8 μg/kg in the control group [6]. Astaxanthin-supplemented feeds were shown to promote the growth, immunity, and body color of oscar (Astronotus ocellatus) [2] and increase the growth of common carp (Cyprinus carpio) and its resistance to Aeromonas hydrophila [5]. Lobster meal rich in astaxanthin and protein-supplemented feeds were shown to increase the brightness, redness, and yellowness of goldfish (Carassius auratus) [4]. Because the molecular structure of astaxanthin contains conjugated double bonds as well as hydroxyl, ketone, and other reducing groups, it can react with and scavenge oxygen free radicals; thus, it has powerful antioxidant effects [1]. The addition of astaxanthin to the diet of large yellow croaker (Larimichthys crocea) was shown to increase the activity of hepatic superoxide dismutase and glutathione peroxidase, thereby increasing the antioxidant capacity [7].
Gonad development and the timing of sexual maturity in broodstock, the fertilization rate, the hatching rate of fertilized eggs, and the survival and quality of fry are affected by the nutritional status of the feed (i.e., the types and proportions of protein and amino acids, fat and fatty acids, vitamins, minerals, and other nutrients) [8,9]. Reproductive performance can be improved by appropriate nutrition but reduced by overnutrition or undernutrition [10]. Poor nutrition can even result in death. In recent years, astaxanthin has attracted attention because it can improve the reproduction level of female broodstock. Astaxanthin-supplemented feeds have been shown to increase the spawning rate of striped jack (Pseudocaranx dentex) [11] and the fertilization rate, egg survival rate, and growth of freshwater crayfish (Astacus leptodactylus), black tiger shrimp (Penaeus monodon), and Atlantic cod (Gadus morhua L.) [12-14]. Adding krill meal as the main protein source to the feed of gilthead bream (Sparus aurata) was shown to significantly improve the reproductive performance of the broodstock [9], because krill meal contains phosphatidylcholine and astaxanthin, which are required for gonad development [9].
Aquatic germplasm resources are essential for the development of modern aquaculture, and the demand for high-quality fry has increased significantly as the aquaculture industry has developed. Tilapia, as an important introduced species in China, has experienced serious germplasm degradation in recent years and has shown slowed growth, decreased fecundity, and increased susceptibility to disease [15,16]. Therefore, it is very important to investigate how various nutrients can improve the reproductive ability of tilapia broodstock and the quality of offspring. In this study, we explored the effect of astaxanthin-supplemented feeds on the reproductive ability of Nile tilapia (Oreochromis niloticus) and the potential mechanism of this effect. The results provide a scientific basis for promoting the reproduction of tilapia through nutritional means.
Experimental Fish.
One-year-old experimental female fish were obtained from the Yixing Base of the Freshwater Fisheries Research Center of the Chinese Academy of Fishery Sciences (FFRC). The experimental fish were initially kept in an indoor temperature-controlled (water temperature 28 ± 0.5°C) circulating water system for 15 days. During the acclimation period, the dissolved oxygen (DO) in the water was measured daily and maintained at >7.09 mg/L. The ammonia nitrogen concentration was lower than 0.5 mg/L, and the pH was 7.3 ± 0.2. Fish were fed with commercial feed (33% protein and 6.5% lipid) at 08:00 and 16:00 at 5% of body weight.
Preparation of Experimental Feed.
We formulated five experimental diets without or with astaxanthin at different concentrations [0 mg/kg (control), 50 mg/kg, 100 mg/kg, 150 mg/kg, and 200 mg/kg] (Table 1). We used natural astaxanthin derived from Haematococcus pluvialis (purity 10%), which was purchased from Shangcheng Biotechnology Co., Ltd. (Xi'an, China). Fish meal, soybean meal, and rapeseed meal were used as protein sources. Soybean oil was the lipid source. Wheat flour and corn flour were used as carbohydrate sources. All the diets were energetically equal. The dry materials were mixed to homogeneity in a Hobart mixer at the FFRC, and then the wet materials were added, and the mixture was shaped into cold-extruded pellets (2.5 mm diameter). After drying, the feeds were sealed in vacuum-packed bags and kept at −20°C until the start of the experiment. The astaxanthin content was determined by liquid chromatography [17]. A Luna 3µ silica column (150 mm × 4.60 mm; Phenomenex, Torrance, CA, USA) was used for these analyses. The mobile phase was n-hexane and acetone (83:17, v/v; flow rate 1.0 mL/min). The detection wavelength was 478 nm, and the injection volume was 20 μL.
Experimental Design and Feeding Management.
All the fish were starved for 24 h at the start of the experiment, and the average initial weight was 207.5 ± 4.3 g. Then, 200 selected Nile tilapia females with well-developed gonads were randomly divided into five groups (four replicates per group and 10 fish per replicate). There was no significant variation in initial body weight among the fish groups at the start of the experiment. The rearing experiment was carried out indoors in polyethylene tanks (diameter × height = 2080 mm × 1200 mm) with circulating water systems. The females were fed by hand at 08:00-09:00 and 16:00-17:00 at the rate of 3%-5% of fish body weight. The amount of the diet consumed was recorded daily, and the feeding rate was adjusted every 15 d by determining the total weight of the fish in each tank. The DO, temperature, and pH of the water were measured every day and maintained at 6.5 ± 0.5 mg/L, 28 ± 0.5°C, and 7.53 ± 0.2, respectively. The entire experiment lasted for 60 days.
Sample Collection.
At the end of the experiment, the total weight of all experimental fish in each tank was measured. All fish were starved for 24 h to allow the alimentary tract to clear before sampling. Four fish from each tank were randomly selected for the collection of blood and gonad samples. The fish were anesthetized with an overdose of tricaine sulfonate at 200 mg/L (MS-222, Argent Chemical Laboratories, Redmond, WA, USA). Blood samples were collected from four fish per tank using a 2.5-mL syringe and were placed into 1.5-mL Eppendorf tubes. The blood samples were centrifuged at 5000 × g for 15 min at 4°C. The supernatant (serum) was stored at −80°C until analysis. After the fish were weighed, the ovarian tissue was quickly dissected and weighed. A portion of gonadal tissue was taken from the anterior, middle, and posterior parts of the ovary, fixed in Bouin's solution for 24 h, and then stored in 70% v/v ethanol. Gonad tissues from another four experimental fish from each tank were instantly frozen in liquid nitrogen and kept at −80°C until transcriptome and gene expression analysis. Because tilapia eggs are ellipsoidal, we used 10 eggs as a group and measured both axes (long and short) to calculate the mean egg size (mm), as follows: [∑ over the 10 eggs of (length + width)]/20 [18].
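As a minimal illustration of the egg-size formula above, the calculation can be sketched in Python; the measurement protocol is from the text, while the function name and example values are ours:

# Mean egg size for one group of ten ellipsoidal eggs, following the
# formula quoted above: [sum over 10 eggs of (length + width)] / 20.
def mean_egg_size(eggs):
    """eggs: list of (length_mm, width_mm) pairs for exactly 10 eggs."""
    assert len(eggs) == 10, "the protocol measures eggs in groups of 10"
    return sum(length + width for length, width in eggs) / 20.0

# Ten identical 2.4 mm x 2.0 mm eggs give a mean size of 2.2 mm.
print(mean_egg_size([(2.4, 2.0)] * 10))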
Parameters.
The relative fecundity (F_R, eggs kg^-1 body weight) was calculated as follows: F_R = 100 × total number of eggs/FBW.
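The extracted section retains only the relative-fecundity formula, but the Results below also report WG, SGR, FCR, HSI, VSI, and GSI. The Python sketch below uses the conventional aquaculture definitions of these indices; the exact formulas are our assumption, as they are not recoverable from the extracted text.

import math

def growth_indices(ibw, fbw, feed_intake, days=60):
    """Conventional growth indices (assumed definitions, not from the text)."""
    wg = 100 * (fbw - ibw) / ibw                        # weight gain, %
    sgr = 100 * (math.log(fbw) - math.log(ibw)) / days  # specific growth rate, %/d
    fcr = feed_intake / (fbw - ibw)                     # feed conversion ratio
    return wg, sgr, fcr

def somatic_index(tissue_weight, body_weight):
    """HSI, VSI, or GSI, depending on whether the tissue is liver, viscera, or gonad."""
    return 100 * tissue_weight / body_weight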
Oocyte Development and Apoptosis
(1) Hematoxylin/Eosin (HE) Staining. The fixed ovary sample was dehydrated, cleared, embedded in paraffin, and serially cut into 5-6-μm sections. After HE staining, each section was observed and photographed using the NIKON digital sight DS-FI2 imaging system of a NIKON Eclipse Ci microscope (NIKON, Tokyo, Japan). There were six digital fields in each slice, and atretic follicles were counted in each digital field separately. The morphological analysis of follicular atresia (FA) has been described in detail elsewhere [15].
(2) Terminal dUTP Nick-End Labeling (TUNEL) Analyses. Sections of ovary tissues were prepared as described above. Then, DNase-free proteinase K (20 μg/mL) was added dropwise to the deparaffinized and rehydrated sections, and the sections were incubated in a humid chamber at 37°C for 30 min before 50 μL of prepared TUNEL reaction solution was added dropwise to the tissue in the dark. The sections were again incubated in a humid chamber at 37°C for 60 min. After the slides were mounted with antifluorescence quenching mounting solution, the sections were observed and photographed.
2.5.4. Antioxidant Enzyme Activity. Each thawed gonad sample was rinsed with precooled physiological saline and blotted dry on filter paper. Then, 0.1 g of the sample was homogenized with nine volumes of precooled phosphate-buffered saline (PBS). The homogenate was used to determine the malondialdehyde (MDA) content and the activities of superoxide dismutase (SOD) and catalase (CAT). All indicators were measured within 24 h, and the absorbance of solutions was determined using a microplate reader (BioTek Epoch, Winooski, VT, USA). The protein concentration in the supernatant was determined by the Coomassie Brilliant Blue assay. All experimental kits were purchased from Enzyme-linked Biotechnology Co., Ltd. (Shanghai, China).
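Commercial kits of this kind report enzyme activities per unit of protein in the supernatant. The exact kit arithmetic is not given in the text, so the normalization below is only a plausible sketch with invented numbers.

def specific_activity(activity_per_ml, protein_mg_per_ml):
    """Crude activity (U/mL supernatant) normalized to protein (mg/mL)."""
    return activity_per_ml / protein_mg_per_ml

# e.g., CAT measured at 12.4 U/mL with 3.1 mg/mL protein -> 4.0 U/mg protein
print(specific_activity(12.4, 3.1))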
2.5.5. Serum Hormone Levels. The follicle-stimulating hormone (FSH) and luteinizing hormone (LH) contents in serum were determined by specific and homologous competitive enzyme-linked immunosorbent assay (ELISA) methods [19]. The kits contained a microtiter plate coated with the corresponding hormone antibody (labeled with horseradish peroxidase (HRP)). After the hormone in the serum had bound to the antibody in the microtiter plate, color was developed using a development solution and HRP, and then the absorbance at 450 nm was measured using a microplate reader. The hormone levels in the sample were calculated from a standard curve following the manufacturer's instructions. The same principle was used to determine the serum contents of estradiol (E2).
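The text states only that hormone levels were read off a standard curve. A four-parameter logistic (4PL) model is the usual choice for such competitive ELISA kits, so the sketch below, with invented standards and absorbances, should be read as an assumption about the curve rather than the manufacturer's protocol.

import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    # a: response at zero dose; d: response at infinite dose;
    # c: inflection point; b: slope factor
    return d + (a - d) / (1.0 + (x / c) ** b)

std_conc = np.array([0.5, 1, 2, 5, 10, 20])         # standards (e.g., mIU/mL)
std_od = np.array([1.9, 1.7, 1.4, 0.9, 0.6, 0.4])   # A450 readings (invented)
popt, _ = curve_fit(four_pl, std_conc, std_od, p0=[2.0, 1.0, 5.0, 0.2],
                    maxfev=10000)

def conc_from_od(od, a, b, c, d):
    # invert the 4PL to map a sample's absorbance back to a concentration
    return c * ((a - d) / (od - d) - 1.0) ** (1.0 / b)

print(conc_from_od(1.0, *popt))  # serum sample with A450 = 1.0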
Library Construction and Transcriptome Sequencing.
The control group and the optimal astaxanthin-supplemented group were used to construct libraries for transcriptome sequencing. The ovarian tissues from 12 fish from each group stored at −80°C were thawed in an ice box before extracting total RNA with TRIzol reagent. The quantity and purity of the total RNA were determined using a Bioanalyzer 2100 and an RNA 6000 Nano LabChip Kit (Agilent, Palo Alto, CA, USA) (RIN > 7.0). For each group, the RNA from four samples was mixed to construct a sequencing library, with three replicates. Thus, a total of six sequencing libraries were constructed: three from the control group (Con_1, Con_2, Con_3) and three from the astaxanthin-supplemented group (As_1, As_2, and As_3). Paired-end sequencing was conducted on the Illumina NovaSeq™ 6000 platform at LC-Bio (Hangzhou, China) following the manufacturer's recommended protocol.
We used Bowtie2 and HISAT2 to map reads to the genome of Nile tilapia (https://www.ncbi.nlm.nih.gov/genome/?term=nile+tilapia). Gene expression levels were quantified as FPKM (fragments per kilobase of exon model per million mapped reads). Differentially expressed (DE) genes were identified on the basis of fold-change (the mean FPKM of the As group/the mean FPKM of the Con group) and P value criteria, and a false discovery rate (FDR) correction was then applied to adjust the P values. The thresholds for a significant difference in gene transcript levels were |log2 fold-change| ≥ 1 and P < 0.05. The Gene Ontology (GO; http://www.geneontology.org) and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway (http://www.genome.jp/kegg/pathway.html) databases were used to assign terms and pathways to the genes to investigate their potential biological functions. On the basis of the sequencing results, several DE genes in important pathways were selected for qRT-PCR verification. For specific procedures, refer to section 2.7 below.
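The DE-gene call described above (fold-change from group-mean FPKM, BH-style FDR correction of the P values, and the |log2 fold-change| ≥ 1, P < 0.05 thresholds) can be sketched as follows; the gene names and values are placeholders, not the study's data.

import numpy as np
import pandas as pd
from statsmodels.stats.multitest import multipletests

df = pd.DataFrame({
    "gene": ["mapk1", "cdk1", "igf1"],
    "fpkm_con": [35.0, 4.0, 6.0],    # mean FPKM across control libraries
    "fpkm_as": [12.0, 18.0, 25.0],   # mean FPKM across 150 mg/kg As libraries
    "pvalue": [0.004, 0.010, 0.020],
})
df["log2fc"] = np.log2(df["fpkm_as"] / df["fpkm_con"])
df["fdr"] = multipletests(df["pvalue"], method="fdr_bh")[1]  # FDR-adjusted P
de = df[(df["log2fc"].abs() >= 1) & (df["pvalue"] < 0.05)]
print(de[["gene", "log2fc", "fdr"]])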
Quantitative Gene Expression Analysis.
On the basis of the mRNA sequences of related genes at the NCBI, we designed primers to amplify apoptosis-related genes (B-cell lymphoma 2: bcl-2; BCL2-associated X: bax; caspase-3) and hormone receptor genes (estrogen receptor: er; follicle-stimulating hormone receptor: fshr; luteinizing hormone receptor: lhr) (Supplementary Table 1). Total RNA was extracted from 50 mg gonad tissue (16 samples in each group) using 1 mL TRIzol (Invitrogen, Carlsbad, CA, USA).
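The extracted text truncates before the qPCR protocol. Relative transcript levels in designs like this are conventionally computed with the 2^-ddCt method against a housekeeping reference gene; both the method and the Ct values below are therefore assumptions for illustration only.

def rel_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """2^-ddCt: (target, reference) Ct pair of a sample vs. a calibrator."""
    d_ct_sample = ct_target - ct_ref
    d_ct_calibrator = ct_target_cal - ct_ref_cal
    return 2.0 ** -(d_ct_sample - d_ct_calibrator)

# bax in a 150 mg/kg fish vs. the control group (illustrative Ct values):
print(rel_expression(26.1, 18.0, 24.6, 18.2))  # ~0.3-fold, i.e., downregulated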
Statistical Analysis.
Data were analyzed using SPSS version 25 (SPSS, Chicago, IL, USA). All data were first subjected to Shapiro-Wilk's and Levene's tests to check data normality and variance homogeneity, followed by one-way analysis of variance (ANOVA). Significant differences (P < 0.05) among groups were further compared using Duncan's multiple range tests. All results are expressed as mean ± SD.
Figure legend: (a)-(e) correspond to groups fed with diets containing astaxanthin at 0 mg/kg, 50 mg/kg, 100 mg/kg, 150 mg/kg, and 200 mg/kg, respectively; II, III, IV, and V represent oocytes at stages II, III, IV, and V, respectively; FA: follicular atresia.
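The same testing pipeline run in SPSS (normality, homogeneity of variance, then one-way ANOVA) looks as follows in SciPy with toy per-tank values. Note that Duncan's multiple range test has no SciPy equivalent; a different post hoc test would be needed there.

from scipy import stats

g0 = [101.2, 98.7, 103.4, 99.9]      # e.g., WG per tank, control group
g150 = [121.5, 118.9, 124.2, 120.7]  # e.g., WG per tank, 150 mg/kg group

for g in (g0, g150):
    print("Shapiro-Wilk P =", stats.shapiro(g).pvalue)   # normality
print("Levene P =", stats.levene(g0, g150).pvalue)       # variance homogeneity
print("ANOVA P =", stats.f_oneway(g0, g150).pvalue)      # group difference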
Effects of Dietary Astaxanthin Levels on Growth Performance of Nile Tilapia.
Compared with the control group, the groups fed with 100 mg/kg and 150 mg/kg astaxanthin showed significantly increased weight gain (WG) and specific growth rate (SGR) (P < 0.05) (Table 2). The group fed with 150 mg/kg astaxanthin had the highest final body weight (FBW), WG, and SGR. Compared with the groups fed with 50 mg/kg, 100 mg/kg, and 150 mg/kg astaxanthin, the group fed with 200 mg/kg astaxanthin had lower FBW, SGR, and WG. However, there was no significant difference in FBW, SGR, and WG between the group fed with 200 mg/kg astaxanthin and the control group (P > 0.05). The hepatosomatic index (HSI) and viscerosomatic index (VSI) first increased and then decreased with increasing dietary astaxanthin level. The HSI and VSI were significantly higher in the group fed with 150 mg/kg astaxanthin than in the control group and the group fed with 200 mg/kg astaxanthin (P < 0.05). There were no significant differences in HSI and VSI among the control and the 50 mg/kg and 200 mg/kg astaxanthin groups. The feed conversion ratio (FCR) was significantly lower in the group fed with 100 mg/kg astaxanthin than in the control group and the group fed with 200 mg/kg astaxanthin (P < 0.05).
Effects of Dietary Astaxanthin Levels on Ovarian Development of Nile Tilapia.
As shown in Figures 1(c) and 1(d), the ovarian tissues of Nile tilapia in the groups fed with 100 mg/kg and 150 mg/kg astaxanthin were full, and the eggs were yellow-gray, uniform, clear, and countable (Figure 1). The ovarian tissue of the control group was poorly developed, and the eggs were significantly smaller (Figure 1(a)). However, the ovarian tissue of tilapia in the group fed with 200 mg/kg astaxanthin was significantly atrophied and contained more white eggs (Figure 1(e)). The GSI increased with increasing astaxanthin supplementation levels from 0 to 150 mg/kg. However, the GSI in the 200 mg/kg astaxanthin group was significantly lower than those in the 100 mg/kg and 150 mg/kg astaxanthin groups (Table 3, P < 0.05), but not significantly different from those in the control group and the 50 mg/kg astaxanthin group (P > 0.05). The F_R of each experimental group first increased and then decreased with increasing astaxanthin supplementation. The F_R was significantly higher in the group fed with 150 mg/kg astaxanthin than in the control group and the group fed with 200 mg/kg astaxanthin (P < 0.05). The egg diameter was significantly larger in the groups fed with 100 mg/kg and 150 mg/kg astaxanthin than in the other groups (P < 0.05). There was no significant difference in egg diameter among the control and the 50 mg/kg and 200 mg/kg astaxanthin groups. We observed oocyte development in the fish fed with different levels of astaxanthin (Figure 2). In the control group, most of the oocytes were at stages II and III, and atretic follicles were present (Figure 2(a)). In the group fed with 50 mg/kg astaxanthin, stage IV oocytes were present in the ovary, as well as stage II and III oocytes and atretic follicles (Figure 2(b)). The oocytes of fish in the 100 mg/kg and 150 mg/kg astaxanthin groups had developed to stage V, and there were a few stage II and III oocytes (Figures 2(c) and 2(d)). However, in the 200 mg/kg astaxanthin group, most oocytes were at stages II and III, and many atretic follicles were present (Figure 2(e)). There were significantly more atretic follicles in the 200 mg/kg astaxanthin group than in the other experimental groups (Table 3).
Effects of Dietary Astaxanthin Levels on Antioxidant Capacity of Nile Tilapia.
Different levels of astaxanthin in the feed did not affect SOD activity in Nile tilapia females (Figure 3, P > 0.05). The MDA content tended to decrease with increasing levels of astaxanthin supplementation from 0 to 150 mg/kg but was significantly higher in the group fed with 200 mg/kg astaxanthin than in the groups fed with 100 mg/kg and 150 mg/kg astaxanthin (P < 0.05). The CAT activity was not significantly different among the control and the 50 mg/kg, 100 mg/kg, and 150 mg/kg astaxanthin groups. However, it was significantly lower in the 200 mg/kg astaxanthin group than in the other groups (P < 0.05).
Effects of Dietary Astaxanthin Levels on Serum Hormone Levels of Nile Tilapia.
Serum FSH, E2, and LH levels were significantly affected by dietary astaxanthin levels (Figure 4, P < 0.05). The serum levels of FSH, E2, and LH were higher in the groups fed with 100 mg/kg and 150 mg/kg astaxanthin than in the other groups. The serum FSH, E2, and LH levels were significantly lower in the 200 mg/kg astaxanthin group than in the 100 mg/kg and 150 mg/kg astaxanthin groups (P < 0.05).
Effects of Dietary Astaxanthin Levels on Gene Expression and Apoptosis in Nile Tilapia.
The transcript levels of fshr and er were significantly higher in the groups fed with 100 mg/kg and 150 mg/kg astaxanthin than in the control group and the group fed with 200 mg/kg astaxanthin (Figures 5(a) and 5(b), P < 0.05). The transcript levels of lhr were significantly higher in the groups fed with 100 mg/kg, 150 mg/kg, and 200 mg/kg astaxanthin than in the control group and the group fed with 50 mg/kg astaxanthin (Figure 5(c)). However, there was no significant difference in lhr transcript levels among the groups fed with 100 mg/kg, 150 mg/kg, and 200 mg/kg astaxanthin (P > 0.05). The transcript levels of bax and caspase-3 were significantly higher in the control group and the group fed with 200 mg/kg astaxanthin than in the group fed with 150 mg/kg astaxanthin (Figures 5(d) and 5(f)). The transcript level of bcl-2 was significantly lower in the group fed with 200 mg/kg astaxanthin than in the groups fed with 50 mg/kg, 100 mg/kg, and 150 mg/kg astaxanthin (Figure 5(e)). In the TUNEL analyses, there were more green positive apoptotic cells around the follicles in the control group and the group fed with 200 mg/kg astaxanthin than in the other groups (Table 4; Figures 6(a) and 6(e), P < 0.05). However, there were significantly more granulosa cell layers and fewer apoptotic cells in the groups fed with 100 mg/kg and 150 mg/kg astaxanthin than in the control group and the group fed with 50 mg/kg astaxanthin (Figures 6(c) and 6(d), P < 0.05).
Transcriptome Analysis to Reveal the Molecular Mechanism by Which Dietary Astaxanthin Regulates Ovarian Development in Nile Tilapia.
The control group and the optimal astaxanthin-supplemented group (150 mg/kg As) were selected for RNA-seq analysis. We detected 462 DE mRNAs (P ≤ 0.05, |log2 fold-change| ≥ 1) between the two groups, of which 202 were upregulated and 260 were downregulated (Figure 7). The GO term enrichment analysis showed that the DE genes (Con vs. 150 mg/kg As) detected from the RNA-seq data were mainly related to proteolysis, hormone activity, growth factor activity, the extracellular space, cell division, and the cell cycle (Figure 8(a)). We conducted KEGG analyses of the DE mRNAs in the control group vs. 150 mg/kg As. These analyses indicated that astaxanthin regulates ovarian development via its effects on cell fate control (MAPK signaling, cell cycle), oocyte development and maturation (gap junction, oocyte meiosis, and progesterone-mediated oocyte maturation), metabolism (oxidative phosphorylation, glycolysis/gluconeogenesis, and arginine and proline metabolism), and hormone regulation (GnRH signaling pathway) (Figure 8(b)). Among these pathways, the MAPK signaling pathway, which affects apoptosis, as well as oocyte meiosis and progesterone-mediated oocyte maturation, were identified as being primarily involved in the astaxanthin-induced regulation of follicular development in tilapia. These signaling pathways were the focus of further analyses. Notably, we identified 51 significant DE genes involved in follicular development, metabolic regulation, apoptosis, and adaptive immunity in the fish fed with a diet containing 150 mg/kg astaxanthin compared with the control group (Figure 9(a), Supplementary Table 2). The hierarchical clustering heatmap divided these DE genes into two major clusters (Con vs. 150 mg/kg As) (Figure 9(b)). Quantitative analyses of eight DE genes involved in these pathways revealed that mapk1 (mitogen-activated protein kinase 1) was significantly downregulated in ovarian tissue in the 100 mg/kg and 150 mg/kg astaxanthin groups compared with the control group, and that chp2 (calcineurin B homologous protein 2), ppp3ca (protein phosphatase 3 catalytic A), map2k1 (mitogen-activated protein kinase kinase 1), cdk1 (cyclin-dependent kinase 1), plk1 (serine/threonine-protein kinase), igf1 (insulin-like growth factor I), and smc1a1 (structural maintenance of chromosomes protein 1A) were significantly upregulated in the 100 mg/kg and 150 mg/kg astaxanthin groups (Figure 9(c)). Among them, chp2, mapk1, ppp3ca, and map2k1 encode components of the MAPK signaling pathway involved in oocyte meiosis; cdk1, plk1, and igf1 encode members of multiple follicular development regulation pathways (oocyte meiosis and progesterone-mediated oocyte maturation); and smc1a1 is involved in oocyte meiosis. Compared with the 100 mg/kg and 150 mg/kg astaxanthin groups, the 200 mg/kg astaxanthin group showed significant upregulation of mapk1. In contrast, the transcript levels of ppp3ca, chp2, map2k1, cdk1, plk1, igf1, and smc1a1 were all significantly lower in the 200 mg/kg astaxanthin group than in the 150 mg/kg astaxanthin group (P < 0.05).
Discussion
Studies have shown that dietary supplementation with astaxanthin at 200-300 mg/kg can significantly increase the weight gain of discus fish (Symphysodon haraldi) [20] and that a diet containing 200 mg/kg astaxanthin can increase the growth performance of golden pompano (Trachinotus ovatus) and oscar [2,21]. In this study, the optimal dose of astaxanthin (150 mg/kg) significantly increased the growth and reduced the FCR of Nile tilapia. However, the highest dose (200 mg/kg) inhibited the growth and feed utilization of the broodstock. Excessive astaxanthin may increase metabolism in the fish body, resulting in additional energy demands to excrete excess nutrients from the body [22]. Diets supplemented with 500 mg/kg natural astaxanthin (Haematococcus pluvialis extract) [20] or 300-400 mg/kg synthetic astaxanthin were also found to significantly reduce the weight gain of discus fish [23]. The effect of astaxanthin on the growth performance of fish may be related to factors such as the fish species, growth stage, feed composition, and rearing conditions [26,27]. In the present study, dietary supplementation with astaxanthin at 150 mg/kg not only promoted the growth and feed utilization of Nile tilapia but also stimulated ovarian development. Also, the HSI and GSI were positively correlated during gonad development. The oocytes in the 100 mg/kg and 150 mg/kg astaxanthin groups were mainly in the middle and late stages of vitellogenesis (stages IV and V). The main process at this stage is the synthesis of exogenous yolk, and the exogenous substances come mainly from the liver [24]. A higher HSI may contribute to the continuous transport of nutrients to the ovary, increase the GSI, and promote follicular development [28]. Studies have shown that an appropriate amount of astaxanthin in the feed can increase the spawning rate of broodstock and the survival of larvae after fertilization [14]. The egg diameter reflects the quality of fish eggs to a certain extent, and egg quality affects the early development and survival of fertilized eggs. Therefore, egg diameter is commonly used to evaluate the quality of fish eggs [29]. In this study, the diets with 100 mg/kg and 150 mg/kg astaxanthin effectively improved the fecundity of Nile tilapia broodstock and increased the egg diameter. In another study, freshwater crayfish (Astacus leptodactylus) were fed with diets supplemented with vitamins E, C, and A, astaxanthin, and β-carotene, and those fed with diets containing vitamin E and astaxanthin produced the most and the largest eggs [12]. In a study on female grass shrimp (Penaeus monodon), a diet containing 50 mg/kg astaxanthin increased absolute fecundity and the total spawning numbers [25]. Astaxanthin is a carotenoid and a precursor of vitamin A. Therefore, an appropriate amount of astaxanthin in the diet (100-150 mg/kg) may stimulate the growth and development of tilapia by increasing vitamin A synthesis. In this study, however, 200 mg/kg astaxanthin in the feed resulted in lower HSI, GSI, F_R, and egg size of the female fish compared with those in the 150 mg/kg astaxanthin group, and this significantly hindered ovarian development. Furuita et al. [24] also found that dietary supplementation with excess vitamin A significantly inhibited the growth, HSI, and GSI of Japanese flounder (Paralichthys olivaceus) but did not affect egg quality. How excessive astaxanthin suppresses ovarian development is discussed further below.
Dietary Supplementation with 150 mg/kg Astaxanthin Reduced Oxidative Stress in Nile Tilapia and Alleviated Apoptosis.
Astaxanthin is able to eliminate superoxide anion free radicals because of its unique molecular structure. It is a stronger and more effective antioxidant than β-carotene and vitamin E [12]. Cui et al. [30] found that rainbow trout (Oncorhynchus mykiss) fed with appropriate amounts of astaxanthin and canthaxanthin showed increased hepatic total antioxidant capacity. Wang et al. [22] also found that the activity of antioxidant enzymes in serum was significantly increased after koi carp were fed with a diet supplemented with astaxanthin. In this study, dietary supplementation with 150 mg/kg astaxanthin significantly increased CAT activity in the ovarian tissue of female Nile tilapia and reduced the MDA content. The accumulation of MDA, which is the reaction product of lipid peroxidation, is an indicator of oxidative damage. The number of conjugated bonds in the structure of astaxanthin is related to its strong antioxidant function, which allows it to effectively remove oxygen free radicals from ovarian tissue and reduce damage to the body [31]. Under normal conditions, a certain amount of reactive oxygen species is produced in the follicles, and these molecules play an important role in follicle development and ovulation [32]. Dietary supplementation with astaxanthin improves antioxidant capacity and reduces oxidative damage in the ovary during gonad development in tilapia. Dietary supplementation with vitamin A can also increase the antioxidant capacity of the ovaries and increase the transport of unsaturated fatty acids [33]. However, excessive astaxanthin may lead to increased metabolism in fish, and excess free radical production can cause oxidative damage to biological macromolecules and induce cell apoptosis. The results of the TUNEL and HE analyses show that the fish fed with a diet containing 200 mg/kg astaxanthin displayed significantly increased granulosa cell apoptosis and follicular atresia. This may have been caused by oxidative stress resulting from excess astaxanthin.
Figure 7: Number of differentially expressed genes in Nile tilapia ovarian tissue between the control group and the group fed with a diet containing 150 mg/kg astaxanthin (Con vs. 150 mg/kg As). Note: RNA from four samples was mixed to construct each sequencing library. Six sequencing libraries were constructed, three from control groups (Con) and three from 150 mg/kg astaxanthin groups (150 mg/kg As).
Dietary Supplementation with 150 mg/kg Astaxanthin Increased Serum Hormone Contents and Transcript Levels of Their Receptor Genes in Nile Tilapia and Alleviated Apoptosis.
In this study, supplementation of the feed with 50-150 mg/kg astaxanthin significantly increased the levels of sex steroid hormones in tilapia broodstock. Thus, an appropriate amount of astaxanthin may accelerate the synthesis of sex steroid hormones, thereby promoting gonad maturation. The most important and active form of estrogen is E2, which initiates ovarian differentiation and regulates oocyte development and maturation [34]. In studies on African catfish (Clarias gariepinus) [35], channel catfish (Ictalurus punctatus) [36], and Pacific cod (Gadus macrocephalus) [37], the level of E2 was found to be positively correlated with the accumulation of egg yolk in oocytes. Babin et al. [38] found that in female broodstock, the plasma E2 content increased during yolk production and then decreased during maturation. Few studies have explored the ability of astaxanthin to promote the synthesis of sex steroid hormones. However, astaxanthin, as a precursor of vitamin A, may promote the synthesis of sterol hormones and follicular development in female fish by increasing vitamin A synthesis. A lack of vitamin A in broodstock feeds can cause gonad development disorders [28]. Appropriate vitamin A intake was shown to increase the fecundity of bighead carp (Aristichthys nobilis) [39] and Japanese flounder [40] and to satisfy the energy and nutritional requirements for gonadal development. However, excess astaxanthin can suppress serum E2 levels and follicular development. Similar results were obtained in studies on rainbow trout and tongue sole (Cynoglossus semilaevis) broodstock fed with diets containing high levels of vitamin A [28,41].
Both LH and FSH are synthesized in the pituitary gland and regulate follicle production and subsequent sex hormone production (e.g., E2) in female fish. The levels of FSH and LH in juvenile brook trout (Salvelinus fontinalis) [42] and yellow catfish (Pelteobagrus fulvidraco) [43] were found to steadily increase during gonad development and to peak at maturity. In the present study, the HE staining analyses revealed abundant stage IV and V oocytes in the ovaries of the 100 mg/kg and 150 mg/kg astaxanthin groups and a large number of mature follicles. This was related to the higher serum LH and FSH levels. Also, the increased transcript levels of fshr and lhr suggest that FSH and LH mediate ovarian development and follicular maturation via their corresponding receptors [44], ultimately reducing follicular atresia. The physiological functions of estrogen are indirectly regulated by ER, which is a member of the nuclear receptor family of steroid hormones. Previous studies have shown that changes in the contents of ERα and E2 are related and that E2 can increase the transcript levels of er genes [45].
An appropriate amount of astaxanthin may increase the binding rate of ER and E2 and promote ER expression [46]. Enriched FSH, LH, and E2 can promote follicular development by alleviating the apoptosis of granulosa cells.
In addition, dietary supplementation with 100 mg/kg and 150 mg/kg astaxanthin inhibited the expression of apoptosis genes (caspase-3 and bax) and promoted the expression of the antiapoptosis gene bcl-2. Such changes in gene expression have been shown to reduce apoptosis of granulosa cells [47]. However, we found that excess astaxanthin not only inhibited the synthesis and secretion of FSH, LH, and E2, but also reduced the activity of hormone receptors. This may have affected the sensitivity of oocytes to hormones, thereby accelerating granulosa cell apoptosis and follicular atresia.
Dietary Supplementation with 150 mg/kg Astaxanthin Reduced Follicular Atresia by Regulating MAPK Signaling.
The maturation or atresia of the follicle depends on the dynamic balance of signaling molecules at different stages, and the granulosa cell is an important intermediary via which these signaling molecules affect follicle development [16]. Our analyses of oocyte morphology, serum hormone contents, and oxidative stress levels revealed that the 150 mg/kg astaxanthin group showed reduced ovarian oxidative stress, increased hormone secretion and expression of receptor genes, and alleviation of granulosa cell apoptosis, leading to better follicular development. Further analyses of the transcriptome revealed details of the molecular mechanism by which dietary astaxanthin at 150 mg/kg promoted follicular development. MAPK signal transduction pathways are involved in the regulation of cell growth, reproduction, division, and death and various biochemical reactions in the cell [48]. In the present study, analyses of the transcript levels of mapk1 (encoding MAP kinase) and its downstream targets (chp2, map2k1, and ppp3ca) revealed that MAPK signaling was involved in reducing granulosa cell apoptosis in the 150 mg/kg astaxanthin group. Chp2 participates in the regulation of cell proliferation, and reducing the expression of its encoding gene or inhibiting its activity can accelerate cell death. Inhibition of chp2 in breast cancer cells can delay the G1-S cell cycle transition [49]. Both ppp3ca and chp2 are calcineurins [50]. The upregulation of ppp3ca and chp2 in the 150 mg/kg astaxanthin group may affect the transmission of intracellular calcium signals, leading to the alleviation of cell apoptosis and activation of oocyte meiosis. As a negative regulator of MAPK signaling, mapk1 is closely related to the control of cell fate because it triggers apoptosis [51]. In another study, the upregulation of map2k1 was found to alleviate apoptosis of LPS-induced WI-38 cells and reduce cell damage and inflammation [52]. Therefore, the downregulation of mapk1 and upregulation of map2k1 in the 150 mg/kg astaxanthin group may have promoted the proliferation of granulosa cells and follicle development. However, excessive astaxanthin may increase the metabolic burden in the fish body, increase oxidative stress, stimulate the MAPK signaling pathway, aggravate cell damage and apoptosis, and induce follicular atresia.
Oocyte meiosis and progesterone-mediated oocyte maturation are important signaling pathways that regulate follicular development. If the expression of cdk1 is reduced, the cell cycle is significantly blocked at the G2/M phase. Disorders of the cell cycle and long-term cell cycle arrest are important inducers of cell apoptosis [53]. Plk1 is a regulated protein kinase involved in oocyte meiosis and also in multiple steps during mitosis, such as the G2/M phase transition and chromosome segregation [54]. Smc1a1 is related to cell proliferation, signal transmission, and the maintenance of chromosome stability [55]. Female fish consuming an appropriate amount of astaxanthin were found to show increased transcript levels of cdk1, plk1, and smc1a1; enhanced differentiation and proliferation of granulosa cells; increased transmission of various cytokine, hormone, and growth factor signals to oocytes; and improved follicular development [56]. The increased number of atretic follicles in the 200 mg/kg astaxanthin group may be related to abnormal expression of chp2, cdk1, and plk1 leading to disordered granulosa cell proliferation and increased apoptosis.
Granulosa cell apoptosis and follicular atresia are regulated by insulin-like growth factors (IGFs) [57]. As a cofactor of gonadotropin, igf1, together with FSH, stimulates granulosa cells/luteal cells to produce estradiol and progesterone and promotes the proliferation of granulosa cells and follicular membrane cells [58]. In the present study, the upregulation of igf1 in the 100 mg/kg and 150 mg/kg astaxanthin groups and the higher serum FSH, LH, and E2 contents promoted the oocyte meiosis and progesterone-mediated oocyte maturation signaling pathways, which enhanced follicular development.
Conclusion
In this study, female Nile tilapia fed with a diet containing 150 mg/kg astaxanthin showed improved follicular development as a result of increased serum estrogen content and reduced oxidative stress. The results of mRNA sequencing and TUNEL analyses revealed that the diet containing 150 mg/kg astaxanthin resulted in alleviation of cell apoptosis and increased granulosa cell proliferation and follicle maturation. In contrast, a high dose of astaxanthin (200 mg/kg) promoted MAPK signaling, which triggered granulosa cell apoptosis in the ovary and accelerated follicular atresia. The results of our study can be applied to increase the spawning efficiency of Nile tilapia broodstock and to prevent or alleviate ovarian stress. | 2022-09-15T15:56:59.563Z | 2022-09-10T00:00:00.000 | {
"year": 2022,
"sha1": "d810ab37237e9a823256bc4b7c6a4514b950df93",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/anu/2022/1245151.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "37cd72f69c88a2bec49b5abf88b20bbce2085912",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
33504205 | pes2o/s2orc | v3-fos-license | Dynamics of Photoinduced Interfacial Electron Transfer and Charge Transport in Dye-Sensitized Mesoscopic Semiconductors
Molecular systems designed for the conversion of solar energy offer ideal models in the study of the kinetics of light-induced electron transfer at surfaces. Due to their high porosity, nanocrystalline oxide semiconductor films allow investigations of interfacial and lateral charge transfer processes that are barely detectable on flat surfaces. Although it has proven to be very promising, the redox photochemistry of the metal oxide | molecular monolayer | electrolyte interface is still a largely unexplored scientific domain, offering huge potential for investigation and exploitation of physical and chemical processes. Carrier trapping and charge transport are also key to the efficiency of molecular photonic devices. Carrier dynamics and transport in unconventional media are studied utilizing THz time domain spectroscopy. We summarize here some aspects of the work currently carried out in these fields as part of our continued effort in the fundamental study of the dynamics of photoinduced electron transfer processes.
Introduction
The use of molecular species to transduce or store signals in opto-electronic devices is becoming one of the most exciting fields of modern science. Numerous applications for such systems can be foreseen, ranging from information storage and imaging to molecular photovoltaics [1]. One particularly intriguing configuration employs a mesoscopic film of a semiconductor oxide as a support for the molecular transducer. These films constitute a network of nanocrystalline particles such as titania, niobia or zinc oxide, sintered together to allow charge carrier transport to take place. The pores between the nanoparticles are filled with an electrolyte or a solid-state organic hole conductor, forming an interpenetrating heterojunction of very large contact area. As electrons can rapidly percolate through the film, the entire surface-adsorbed molecular layer can be electronically addressed. Charge transfer events involving adsorbed molecules can thus be induced through the nanocrystalline support and recorded as electrical current. Optical monitoring is also facile, as the signals arising from the grafted molecules are greatly enhanced due to the huge internal surface area of the junction. Thus, the nanocrystalline oxide films serve as an interface between the molecular and macroscopic worlds, providing new opportunities to examine light-induced interfacial reactions on a molecular scale.
Mesoporous systems recently gained importance as the basis of new electrochemical devices, such as batteries, sensors, and solar cells. In these applications, results depend strongly upon the properties of charge transport. Carrier mobility and dynamics in nanostructured metal oxide assemblies, amorphous organic hole-transporting media, conjugated polymers, and hybrid systems are ideally studied utilizing terahertz time-domain (THz-TDS) and optical pump−THz probe (OPTP) spectroscopies. A brief introduction to these techniques is provided in the second part of this short review.
Ultrafast Light-induced Charge Injection in Solids
Charge injection from the excited state of a donor molecule into a continuum of electronic acceptor states in a solid has significant importance for the fundamental understanding of the dynamics of electron transfer (ET) processes. Classical theoretical treatments of ET and further quantum mechanical extensions are based on the assumption that the overall ET kinetics are controlled by the nuclear activation barrier to achieve electronic resonance between reactant and product states. The situation can be illustrated by an energy scheme (Fig. 1, left), showing the situation for an ET reaction between a donor (D) and an acceptor species (A). In a classical view, the system can propagate on the potential energy surface (PES) of the electronic configuration, approximated as a one-dimensional parabola. Energy conservation allows an electronic transition between the initial encounter complex D-A and the final charge-separated state D+A− only at the intersection of the two PESs, where the electron can be transferred from D to A.
A fundamentally different situation for electron transfer is found in dye/semiconductor systems. Fig. 1 (right) shows a schematic of such a system. The acceptor level in this case is the energetically broad conduction band of the semiconductor (SC). As a consequence, the final charge-separated state energy surface splits up into a manifold of acceptor parabolas. In this case, there is no need for an energy-matching mechanism via molecular vibrations, and the rate constant for interfacial ET is essentially independent of nuclear factors. Time constants for ET as short as 6 fs [6,7] were found. Charge transfer times ≤ 20 fs indicate that the reaction occurs on the same time scale as, or even faster than, nuclear motion associated with high-frequency intramolecular vibrations (>1600 cm−1). The notion that the electron is transferred to the solid well before vibrational relaxation of the photoexcited sensitizer has recently been confirmed in strong-coupling cases by the observation of the dependence of ET kinetics upon the excitation photon energy [4], and that of oscillations in the transient absorption signal due to vibrational wavepacket motion during charge transfer [7]. Because of their successful use in dye-sensitized solar cells, Ru(II) polypyridyl complex dyes adsorbed on nanocrystalline TiO2 films are regarded as a model system for the experimental study of the ultrafast dynamics of interfacial light-induced electron transfer. Most studies have reported charge injection kinetics from cis-Ru(II)(dcbpy)2(NCS)2 (N719) or its protonated form (N3) to take place with a fast (sub-100 fs) phase, followed by a slower (0.7−200 ps) multi-exponential component [8]. Recently, we showed that the observed kinetic heterogeneity can actually result from the aggregation of sensitizer molecules on the surface. A monophasic ET with a rise time shorter than 20 fs is indeed consistently observed when the formation of aggregates is prevented and the sensitizer is adsorbed as a monolayer on the surface of TiO2 nanocrystals [10].
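Injection kinetics of this kind are typically extracted by fitting the rise of the oxidized-dye transient absorption signal to a sum of exponential components. The sketch below fits synthetic data with a two-component model; the amplitudes and time constants are illustrative, not measured values.

import numpy as np
from scipy.optimize import curve_fit

def biphasic_rise(t, a1, tau1, a2, tau2):
    # fraction of dye cations formed at pump-probe delay t (ps)
    return a1 * (1 - np.exp(-t / tau1)) + a2 * (1 - np.exp(-t / tau2))

np.random.seed(0)
t = np.linspace(0.01, 20, 300)                  # delay axis, ps
signal = biphasic_rise(t, 0.7, 0.05, 0.3, 5.0)  # 50 fs + 5 ps components
signal += np.random.normal(0, 0.01, t.size)     # detection noise
popt, _ = curve_fit(biphasic_rise, t, signal, p0=[0.5, 0.1, 0.5, 3.0])
print("tau1 = %.3f ps, tau2 = %.2f ps" % (popt[1], popt[3]))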
Kinetic Competition between Charge Recombination and Dye Regeneration Processes
The fate of the charge-separated state obtained by injection of an electron from a photosensitizer into the conduction band of the solid, and the prospect of exploiting this light-induced interfacial charge separation for practical applications, are essentially limited by the back electron transfer processes that lead to charge recombination. Fig. 2 schematizes the energetics and dynamics of processes that take place after charge injection from a molecular excited state to the acceptor levels of a semiconductor. Thermalization and trapping of hot injected carriers is known to occur typically with a rate constant k_th ≈ 10^13 s^-1 [11]. Reverse transfer of a hot electron is therefore generally prevented. The rate of electron recapture, which takes place between the solid and the oxidized dye species S+, has been observed to be slower by several orders of magnitude compared to the charge injection rates of efficient sensitizers. In the N719/TiO2 system, this back electron transfer process typically occurs on a time scale of hundreds of µs to ms, 10^10 times slower than the initial photoinduced charge injection [3,12]. At least two main reasons can be invoked to explain such a huge difference: i) Charge recombination takes place between discrete energy levels and is mediated by vibrational energy fluctuations. Its rate is thus scaled down by nuclear factors, which do not intervene in the case of the forward electron transfer process. ii) While electron injection is kinetically near optimum, the high exoergicity of the back electron transfer can make the system lie deep in the inverted Marcus region, where the rate of the charge transfer process is expected to decrease with increasing driving force [3,13]. The slow charge recombination process can be intercepted by reaction of a reducing mediator D with the oxidized dye (Eqn. (3), Fig. 2). The overall efficiency of the light-induced charge separation then depends upon the kinetic competition between back electron transfer and dye regeneration processes [14]. Photovoltaic cells based on the sensitization of mesoporous titanium dioxide by Ru(II) complex dyes in conjunction with the I3−/I− redox couple as a mediator have proved very efficient at exploiting this principle [15]. Fig. 3 shows, as an example, the temporal evolution of the oxidized state of the N719 complex dye sensitizer, initially formed during the photoinduced electron injection process and later decaying due to reduction by iodide or charge recombination.
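The kinetic competition can be made concrete with a simple branching-ratio estimate: the charge-separation yield is k_reg/(k_reg + k_back). The back-transfer rate below mirrors the hundreds-of-microseconds time scale quoted above, while the regeneration rate at high iodide concentration is an assumed order of magnitude.

k_back = 1.0 / 300e-6   # back ET on a ~300 us time scale -> ~3.3e3 s^-1
k_reg = 1e6             # regeneration at 0.8 M iodide, s^-1 (assumed)
yield_regen = k_reg / (k_reg + k_back)
print("regeneration yield = %.4f" % yield_regen)  # ~0.997, i.e., near unity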
Recently, solid-state devices have been described where the liquid electrolyte present in the pores of the nanocrystalline oxide film is replaced by an organic hole-transporting solid medium containing triarylamine donor functions. Contrary to the case of the oxidation of I− to I3−, which requires the transfer of two electrons, the dye regeneration process in this case is a single-electron process and is characterized by much faster kinetics [16].
Fig. 1. Energetic situation for a typical molecular electron transfer in the Marcus theory (left) and the situation prevailing in the case of a manifold of acceptor states found in the dye-sensitization of a semiconductor (right).
Charge Carrier Dynamics and Transport in Mesoscopic Media
For TiO2-based nanocrystalline dye-sensitized photovoltaic cells, the conversion efficiency is to a large extent limited by electron transport through the oxide structure and conduction of positive charges in the opposite direction via an electrolyte or a hole-transporting material through the pores of the mesoscopic film [17]. Typical charge hopping and scattering times range from femtoseconds to picoseconds, depending on the nature, purity, and temperature of the material. Since these times correspond to the terahertz (THz) frequency range, THz time-domain spectroscopy (THz-TDS) has emerged as a powerful probe of charge carriers and their transport processes in condensed matter. This method provides direct access to important parameters such as the charge density and scattering times. By combining THz-TDS with synchronous optical excitation, one has optical pump−THz probe spectroscopy (OPTP) available as a powerful tool for ultrafast time-resolved conductivity studies of materials [18]. An experimental setup can be built from an existing femtosecond laser system to generate and detect THz pulses, and then photoexcite a sample and observe the distortion of the transmitted THz waveform.
In addition to electronic conduction in the condensed phase, THz-TDS can probe long-range crystalline lattice vibrations, low energy torsion and hydrogen bonding vibrations.Phonons associated with the self trapping of photo-generated or injected carriers (small polarons) are detected and analyzed in real time, allowing in particular a detailed study of vibrational coherence associated with electron transfer processes at dye-sensitized interfaces. [7]
Acknowledgement
Many molecular systems and materials used in our studies are kindly provided by the Laboratory for Photonics and Interfaces (LPI) of EPFL. Fruitful collaboration in particular with P. Comte, S. M. Zakeeruddin, M. K. Nazeeruddin, and M. Grätzel of LPI is gratefully acknowledged.
Fig. 2. Energetic scheme of interfacial electron transfer processes following charge injection from the electronic excited state S* of a dye-sensitizer to the conduction band (cb) of a semiconductor (SC). Typical time frames within which each reaction takes place are indicated for a system constituted of TiO2 nanoparticles, sensitized by N719 dye, in the presence of a concentrated electrolyte containing iodide as a donor (D). Numerical figures show that kinetic competition between ET processes leads to the formation of a long-lived charge-separated state e−(cb)…D+ with almost unit quantum yield.
Fig. 3. Transient absorbance signals recorded upon pulsed laser excitation of the N719 dye-sensitizer adsorbed on TiO2 mesoscopic films. Optical signals reflect the appearance and decay of the oxidized state S+ of the dye. Data points at the shorter time scale correspond to the electron injection process and concomitant formation of the S+ species (Fig. 2, Eqn. (2)). The decay curve at the shorter time scale was obtained in the presence of a liquid electrolyte containing 0.8 M iodide and is indicative of the dye regeneration reaction (Fig. 2, Eqn. (3)). The decay curve at the longer time scale is due to the back electron transfer (Fig. 2, Eqn. (4)) and was recorded in pure redox-inactive solvent. Ultrafast transients were measured at a probe wavelength of 860 nm, following pumping at 535 nm. ns-µs data were obtained at 680 nm upon 600 nm pulsed laser excitation. | 2017-04-27T08:35:35.871Z | 2007-10-24T00:00:00.000 | {
"year": 2007,
"sha1": "168ef2b637baff6cb0ab06accc1ced1cc3f98d1a",
"oa_license": "CCBYNC",
"oa_url": "https://chimia.ch/chimia/article/download/4379/3669",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "168ef2b637baff6cb0ab06accc1ced1cc3f98d1a",
"s2fieldsofstudy": [
"Chemistry",
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
215802676 | pes2o/s2orc | v3-fos-license | Demographic science aids in understanding the spread and fatality rates of COVID-19
Governments around the world must rapidly mobilize and make difficult policy decisions to mitigate the coronavirus disease 2019 (COVID-19) pandemic. Because deaths have been concentrated at older ages, we highlight the important role of demography, particularly, how the age structure of a population may help explain differences in fatality rates across countries and how transmission unfolds. We examine the role of age structure in deaths thus far in Italy and South Korea and illustrate how the pandemic could unfold in populations with similar population sizes but different age structures, showing a dramatically higher burden of mortality in countries with older versus younger populations. This powerful interaction of demography and current age-specific mortality for COVID-19 suggests that social distancing and other policies to slow transmission should consider the age composition of local and national contexts as well as intergenerational interactions. We also call for countries to provide case and fatality data disaggregated by age and sex to improve real-time targeted forecasting of hospitalization and critical care needs.
COVID-19 | demography | age structure | mortality
Governments are rapidly mobilizing to minimize transmission of coronavirus disease 2019 (COVID-19) through social distancing and travel restrictions to reduce fatalities and outstripping of healthcare capacity. The pandemic's progression and impact are strongly related to the demographic composition of the population, specifically, population age structure. Demographic science can provide new insights into how the pandemic may unfold and the intensity and type of measures needed to slow it down. Currently, COVID-19 mortality risk is highly concentrated at older ages, particularly those aged 80+ y. In China, case fatality rate (CFR) estimates range from 0.4% for those 40 y to 49 y, jumping to 14.8% for those 80+ y (1). This age pattern has been even more stark in Italy, where, as of March 30, 2020, the reported CFR is 0.7% for those 40 y to 49 y, and 27.7% for those >80 y, with 96.9% of deaths occurring in those aged 60 y and over (2). Current CFRs are likely overestimated due to underascertainment of cases. In South Korea, with broader testing and strong health care capacity (only 158 deaths), the current CFR for those 80+ y is still an alarming 18.31% (3).
The Importance of Age Structure for COVID-19 Transmission and Fatality Rates
Population age structure may explain the remarkable variation in fatalities across countries and the vulnerability of Italy. The deluge of fatal COVID-19 cases in Italy was unexpected, given the affected region's health and wealth. Italy is one of the oldest populations, with 23.3% of its population over 65 y, compared to 12% in China (4). Italy is also characterized by extensive intergenerational contacts, supported by a high degree of residential proximity between adult children and parents (5). Even when intergenerational families do not coreside, daily contacts are frequent. Many Italians prefer to live close to extended family, with over half of the population in the northern regions commuting (6). Intergenerational interactions, coresidence, and commuting may have accelerated the outbreak in Italy through social networks that increased the proximity of elderly to initial cases (7).
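The arithmetic behind this point is a weighted average: with identical age-specific fatality rates, the overall CFR is driven by the age distribution of cases. The case shares and CFRs below are rounded illustrations in the spirit of the cited figures, not the reported data.

young_skewed = {"<60": 0.90, "60-79": 0.08, "80+": 0.02}  # Korea-like case mix
old_skewed = {"<60": 0.45, "60-79": 0.35, "80+": 0.20}    # Italy-like case mix
cfr = {"<60": 0.005, "60-79": 0.05, "80+": 0.20}          # illustrative CFRs

for name, shares in [("young-skewed", young_skewed), ("old-skewed", old_skewed)]:
    overall = sum(shares[a] * cfr[a] for a in cfr)
    print(name, "overall CFR = %.1f%%" % (100 * overall))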
The age structure of initial cases, along with early detection and treatment, likely explains the low numbers of fatalities in South Korea and Germany. The Korean outbreak was concentrated among the young Shincheonji religious group (3), with only 4.5% of cases thus far falling into the >80-y group (8). This contributed to a low overall CFR in South Korea relative to Italy (1.6% vs. 10.6%). Germany has, likewise, few deaths (583 out of 61,923 cases to date), with the median age of confirmed cases at 48 y compared to 62 y in Italy (9). COVID-19 transmission chains that begin in younger populations may go undetected longer (10), with countries slow to raise the alarm. The initial low CFR in England may have reflected the relatively young age structure of early infections, including Greater London, which has a small fraction of residents over 65 y compared to more rural areas (11). COVID-19 was only detected in King County, WA, once it reached the Life Care Center in Kirkland, where 19 out of 22 of the state's first reported COVID-19 deaths occurred, with virus genetic sequence estimates suggesting it circulated for several weeks prior (12). Once community transmission is established, countries with high intergenerational contacts may see faster transmissions to high-fatality age groups, as seen in Italy and Spain, leading to higher average CFR (13). The overall burden of serious cases and mortality reflects linkages between the age distribution of early cases, age structure of the population, and intergenerational connections. Fig. 1 contains population pyramids to illustrate how population age structure interacts with high COVID-19 mortality rates at older ages to generate large differences across populations in the number of deaths, holding constant assumed rates of infection prevalence (10%) and age−sex-specific CFRs (Italy) (14). Adjusting assumptions changes the total number of expected deaths but not the relative comparisons across countries with different age structures. For example, assuming that CFRs, by age, are half of current Italian rates would reduce the numbers of expected deaths by half. Fig. 1, Top considers two countries, Italy and South Korea, with very different population age structures. The larger number of expected fatalities is clearly visible in Fig. 1, Top Right for Italy (302,530) versus Korea (177,822). In Fig. 1, Bottom, we consider two countries with similar population sizes but very different age distributions. Brazil has 2.0% of its population aged 80+ y, with our simulated scenario leading to dramatically more deaths (452,694) compared to Nigeria (142,056), where only 0.2% are 80+ y. Fig. 2 visualizes expected deaths by age group in countries with different population age structures: Italy (older), United States (middle), and Nigeria (younger). We see stark implications of an older age structure for higher fatalities, amplified at higher population infection rates. SI Appendix, Fig. S1 animates differences by infection rate (0 to 100%).
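The expected-death calculation behind these pyramids is simple to reproduce. A minimal sketch (in Python rather than the authors' R; the function name and array layout are illustrative):

```python
import numpy as np

def expected_deaths(population, cfr, infection_rate=0.10):
    """Expected COVID-19 deaths, one entry per age-sex group.

    population -- array of group sizes (e.g. from the UN World
                  Population Prospects)
    cfr        -- array of age-sex-specific case fatality rates (Italy)
    Both arrays must list the groups in the same order.
    """
    deaths = np.asarray(population) * infection_rate * np.asarray(cfr)
    return deaths.sum(), deaths
```

Setting infection_rate=0.4 reproduces the higher-prevalence scenario used for Fig. 2.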
Demographic Science and COVID-19 Policy
Demographically informed projections will better predict the COVID-19 burden and inform governments. While population age structure is crucial for understanding those at the highest risk of mortality both across and within countries, it is also vital for understanding social distancing measures to reduce critical cases that overload the health system, aka "flattening the curve." Our illustrations show that countries with older populations must take aggressive protective measures. For these to be effective, special attention should be devoted to high-risk population groups and intergenerational contact. Within countries, mapping of age-related spatial clustering can improve hospital and critical care forecasts (15).
Consideration of population age structure also necessitates understanding the interlinkage of policy measures and how policies might create unintended consequences. While schools may be a hub of virus transmission, school closures may inadvertently bring grandparents and children into contact if grandparents become the default carers. In aged populations with close intergenerational ties, governments need to facilitate childcare solutions that reduce contact. In a pending decree, the Italian government introduced a special leave for parents with children at home from school, and a voucher for babysitting.
The age structure of populations also suggests that the squeezed "sandwich" generation of adults who care for both the old and young are important for mitigating transmission. Beyond introducing sick pay for those who need to self-isolate or care for family members, joint government and industry emergency policy measures should seek to counter family economic crises, particularly for vulnerable and precarious workers who are less able to comply with policies that allow social distancing.
The rapid spread of COVID-19 has revealed the need to understand how population dynamics interact with pandemics. Population aging is currently more pronounced in wealthier countries, which, mercifully, may lessen the impact of this pandemic in lower-income countries with weaker health systems but younger age structures. It is plausible that poor general health status and coinfections such as HIV and tuberculosis will increase the danger of COVID-19 in these countries, along with intergenerational proximity and challenges to physical distancing. Thus far, the lower than expected number of cases detected in Africa (despite extensive trade and travel links with China) suggests that the young age structure may be protective of severe and thus detectable cases. Beyond age structure, demography can shed light on the large sex differences in COVID-19 mortality that need to be understood, with men at higher risk. Distributions of underlying comorbidities such as diabetes, hypertension, and chronic obstructive pulmonary disease will likewise refine risk estimates. Until more nuanced data are available, the concentration of mortality risk in the oldest old ages remains one of the best tools to predict the burden of critical cases and produce more precise planning of availability of hospital beds, staff, and other resources. Few countries are routinely releasing their COVID-19 data with key demographic information such as age, sex, or comorbidities. We call for the timely release of these disaggregated data to allow researchers and governments to nowcast risk for more focused prevention and preparedness.
Methods
Data and Analysis. Data to produce Figs. 1 and 2 are from https://population.un.org/wpp (4), and age−sex-specific case fatality rates are from Italian data (https://www.epicentro.iss.it/coronavirus/bollettino/Bollettino-sorveglianza-integrata-COVID-19_30-marzo-2020.pdf), accessed March 30, 2020 (2). For Figs. 1 and 2, the total number of expected deaths by age group was derived by multiplying the total number of people in each age−sex group and country by an assumed population infection rate of 0.1 (and 0.4 for Fig. 2) and Italian age−sex-specific fatality rates as of March 30, 2020 (Movie S1). Data analysis was performed in R using the ggplot2 package. | 2020-03-19T10:19:27.255Z | 2020-03-18T00:00:00.000 | {
"year": 2020,
"sha1": "75327cca0cc66f3f9b71a2fa9497a317211dfcf4",
"oa_license": "CCBY",
"oa_url": "https://www.pnas.org/content/pnas/117/18/9696.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "d47a4314bb32d153ec6ea22ce49a87e8fbb38455",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Medicine",
"Geography"
]
} |
264403810 | pes2o/s2orc | v3-fos-license | Changes in bowel sounds of inpatients undergoing general anesthesia
Background: General anesthesia can affect intestinal function, but there is no objective and effective indicator to evaluate the inhibition and recovery of intestinal function. The main objective of this study is to assess whether bowel sounds (BSs) change before and after general anesthesia; if they do, BSs can serve as an effective indicator of intestinal function. Methods: We randomly selected 26 inpatients and collected three sets of 5-minute continuous BSs for each patient: before operation (Pre-op), after operation (Pro-op) and three hours after operation (3h-Pro-op). The data were de-noised with adaptive filtering and wavelet threshold denoising, and processed with fractal dimension analysis to identify the effective bowel sounds (EBSs). The linear and nonlinear characteristic values (CVs) of each EBS were then extracted, and paired t-tests and rank-sum tests were used to evaluate the changes in the BSs after general anesthesia. Results: The differences between Pre-op and Pro-op, as well as between Pro-op and 3h-Pro-op, were statistically significant (p<0.05). Specifically, the linear CVs reflecting the occurrence frequency, overall energy and overall duration of EBSs and the nonlinear CVs reflecting the dispersion degree of the stability and complexity of EBSs were statistically significant. However, there was no statistical difference in the CVs reflecting the energy and duration, or the stability and complexity, of individual EBSs (p>0.05). There was also no statistically significant difference in any characteristic value between Pre-op and 3h-Pro-op (p>0.05). Conclusion: The BSs change after general anesthesia: they are weakened after general anesthesia and recover to the pre-anesthesia state three hours later. Therefore, the BS can be used as an indicator of intestinal function under general anesthesia, so as to provide guidance for postoperative feeding, which is of great clinical significance.
Background
General anesthesia can inhibit gastrointestinal function, so postoperative feeding must wait for the gradual recovery of gastrointestinal function in order to be given appropriately and in a timely manner. At present, there is no clear objective method for evaluating the postoperative recovery of gastrointestinal function, so the time of resuming oral intake is chosen arbitrarily. However, for patients undergoing general anesthesia, being able to take in nutrition relatively early is important for postoperative recovery. The evaluation of gastrointestinal function after general anesthesia therefore has important clinical significance.
Noninvasive methods for judging the recovery of gastrointestinal function after general anesthesia mainly include anal exhaust [1], electrogastrogram [2], auscultation of bowel sounds (BSs), and dynamic magnetic resonance imaging [3].
Anal exhaust mainly depends on the patients' chief complaint, which cannot reflect the recovery of gastrointestinal function objectively and in a timely manner. Electrogastrogram is more objective and direct than anal exhaust, but it is easily influenced by other bioelectric signals, and current methods for analyzing electrogastrogram data are not particularly mature or widely accepted. Dynamic magnetic resonance imaging (DMR) relies on large imaging equipment, so it is difficult to monitor patients over long periods during the perioperative period.
Auscultation of BSs is an important noninvasive way to judge gastrointestinal function. BSs are produced by the movement of substances in the intestine, so the sounds can objectively reflect the state of intestinal peristalsis in real time. Research on BSs uses the characteristics of BSs to observe the gastrointestinal state and diagnose gastrointestinal diseases. In clinical practice, the observation of gastrointestinal peristalsis is used to monitor feeding events, thus providing a reference for the monitoring of blood glucose in the artificial pancreas system [4]; BSs can also be used as one of the indicative parameters of gastrointestinal diseases [5]. If lesions develop in the gastrointestinal tract, such as gastroduodenal disease, intestinal disease, and large bowel disease, the corresponding intensity or number of BSs may also become abnormal. In addition, BSs can indicate other diseases. Recent studies have found that BSs can not only indicate gastrointestinal state, but also have clinical significance for sepsis [6], Parkinson's disease [7] and other diseases. The above studies show that BSs can reflect gastrointestinal function, so we can consider applying changes in BSs to the evaluation of gastrointestinal function recovery in patients after general anesthesia.
However, short-time auscultation using a handheld stethoscope is still the main way to obtain bowel sounds in the clinic. Given the strong randomness of bowel sounds, results based on a short period of subjective judgment are open to question [8][9][10]. In research settings, the sounds are mainly acquired by assembling mature pickup and storage components. There is no dedicated bowel sound equipment in the clinical environment for collecting bowel sound data.
In this paper, considering the requirements of the perioperative medical environment and patients' poor ability to cooperate before and after the operation, we used a self-developed wearable bowel sound device. The device can be easily attached to the patient's abdomen, and the sound data can be collected and stored without burdening the patient. If the analysis of the BSs indicates that BSs change in patients undergoing general anesthesia, this provides theoretical support for using bowel sounds as a reference index for evaluating recovery from anesthesia.
Results
The research was approved by the Medical Ethics Committee of Chinese PLA General Hospital for clinical research (No. 2018-176-01). We randomly selected 26 inpatients from the Second Department of Otolaryngology, Head and Neck Surgery, Chinese PLA General Hospital. Each subject signed an informed consent form. We recorded clinical factors that might influence bowel sounds, including age, gender, BMI and anesthetic type, and excluded patients with gastrointestinal dysfunction. Three sets of 5-minute continuous BSs were collected for each patient. The first set was collected before operation (Pre-op), defined as the time after fasting for 24 hours and before entering the operating room. The second set was collected after entering the recovery room and completing tracheal extubation (Pro-op). The last set was collected 3 hours after extubation (3h-Pro-op) in the ward, conditions permitting. The acquisition location for the bowel sounds was the right lower abdominal region [11]. Considering the influence of different devices and different operators on the accuracy of the test, one person used the same device to test the subjects' BSs throughout the experiment. Characteristic values (CVs) were calculated for each effective bowel sound (EBS), including 7 linear parameters and 8 nonlinear parameters. After statistical analysis, a p-value less than 0.05 was considered statistically significant; otherwise, no statistical difference was considered.

Among the duration-related linear parameters Mean_Duration, Std_Duration and Sum_Duration, the differences in Mean_Duration and Std_Duration were not statistically significant, while the difference in Sum_Duration was statistically significant. This indicates that the energy and duration of individual EBSs were not affected by general anesthesia, whereas the differences in the total energy and total duration of the bowel sounds were statistically significant. The frequency of bowel sounds and the overall energy and duration of BSs were therefore affected by general anesthesia and weakened as a whole, while the energy and duration of individual EBSs were unaffected, indicating that general anesthesia reduced the overall intensity of intestinal peristalsis but did not inhibit the local peristaltic state.

In the nonlinear recursive parameter analysis, the mean values of RR, Lmean, ENTR and TT were not statistically different, whereas the standard deviations of RR, Lmean, ENTR and TT showed statistical differences. RR, Lmean and TT all reflect the stability of the signal, while ENTR reflects its complexity. The absence of a statistical difference in the means of the recursive parameters, combined with the statistical difference in their standard deviations, indicates that the dispersion degree of the stability and complexity of the system became smaller. Between Pro-op and 3h-Pro-op, however, there was a statistical difference in Sum_bs (total energy) and Sum_Duration (total duration), both of which were larger, indicating that the overall energy and duration of the BSs had recovered to a certain extent after three hours.
Discussion
General anesthesia can inhibit patients' gastrointestinal function, including delayed gastric emptying and slowed small bowel and colonic transit [12,13].
Conclusion
In conclusion, after general anesthesia, the BSs change. The BSs were weakened after surgery, and three hours later, the BSs returned to the preoperative state. Therefore, the BS can be used as an indicator of intestinal function changes under general anesthesia, so as to provide guidance for postoperative feeding, which is of great clinical significance.
Data collection
Patients' BSs were collected using a self-developed wearable bowel sound device [16]. The device uses a Knowles SiSonic MEMS microphone, which has an ultra-wide-band (UWB) flat frequency response (±2 dB, 10–10000 Hz) and a tightly matched sensitivity of ±3 dB. Since the frequency of BSs is mainly distributed in 100–1000 Hz [17], this microphone is suitable for picking up BSs. The collection and storage of BSs and environmental noise are realized by the wearable BS device, which has undergone performance tests to ensure the reliability of the data [18]. The sampling rate of the BSs was 8 kHz.
Signal processing
In the process of BS acquisition, environmental noise is easily introduced, which directly affects the quality of the BS signal. Therefore, it is necessary to remove environmental noise to better analyze and identify the BSs. We use the noise acquisition channel of the recorder to collect the environmental noise, and adaptive noise cancellation is used to remove it. Specifically, the least mean square (LMS) [19] algorithm is adopted: the order of the filter is set to 32 and the step size factor to 0.000001, which achieves good adaptive cancellation.
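A minimal sketch of this step (Python with NumPy; the function name and array layout are illustrative):

```python
import numpy as np

def lms_cancel(primary, reference, order=32, mu=1e-6):
    """LMS adaptive noise cancellation.

    primary   -- abdominal channel (bowel sounds + environmental noise)
    reference -- noise-only channel recorded simultaneously
    Returns the error signal, i.e. the de-noised bowel sounds.
    """
    w = np.zeros(order)                    # adaptive filter weights
    e = np.zeros(len(primary))
    for i in range(order, len(primary)):
        x = reference[i - order:i][::-1]   # latest reference samples
        e[i] = primary[i] - np.dot(w, x)   # subtract noise estimate
        w += 2.0 * mu * e[i] * x           # LMS weight update
    return e
```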
Adaptive filtering can eliminate the environmental noise, but the high-frequency noise remaining in the signal still affects the identification and analysis of effective bowel sounds (EBSs). As an effective and practical method, wavelet denoising has achieved good results in signal and image denoising and has been widely used in engineering applications. Donoho [20,21] proposed the wavelet threshold denoising method. After wavelet transform with the Mallat algorithm, the wavelet coefficients of the signal carry the important information, and the wavelet coefficients of the noise are smaller than those of the signal. By selecting a suitable threshold, coefficients greater than the threshold are considered to be generated by the signal and are retained, while those less than the threshold are considered to be generated by noise and are set to zero, achieving the purpose of denoising. In the wavelet decomposition, the wavelet basis, the number of decomposition layers and the threshold must be determined. For the wavelet basis, we chose the sym8 basis from the two common wavelet families, the db and sym systems. For the number of decomposition layers, values that are too large or too small both degrade the final de-noising effect; in this paper, the number of layers is set to 5 after comparing the denoising results of different decomposition depths. For the threshold, the Birge-Massart [22] algorithm is used to obtain the threshold of each layer of the one-dimensional wavelet transform, and a soft threshold function is used for denoising.
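A sketch of this chain (Python with PyWavelets; a per-level universal threshold is used as a stand-in for the Birge-Massart strategy, which PyWavelets does not provide directly):

```python
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="sym8", level=5):
    """Wavelet threshold denoising with soft thresholding."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    kept = [coeffs[0]]                               # approximation coefficients
    for d in coeffs[1:]:
        sigma = np.median(np.abs(d)) / 0.6745        # noise level estimate
        thr = sigma * np.sqrt(2.0 * np.log(len(d)))  # universal threshold
        kept.append(pywt.threshold(d, thr, mode="soft"))
    return pywt.waverec(kept, wavelet)[:len(signal)]
```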
After the adaptive filtering and wavelet denoising, the waveform (Fig. 1) can be used to identify EBSs.
The fractal dimension (FD) can quantitatively describe the complexity of a signal, and the FD of EBSs differs from that of background sounds [23]. To calculate the FD of a time series, we can either reconstruct the phase space first and then calculate the correlation dimension [24][25][26], or directly calculate the FD in the time domain. The time series in this paper is an audio signal with a high sampling rate and large data volume, so the FD is calculated directly in the time domain. The Katz method [23,27,28] is used for the FD calculation, as it can effectively judge the randomness of a waveform. When calculating the FD of the BS signal, we employed a sliding window to realize short-time processing of the audio signal. The length of the sliding window is set to int(0.006*fs), where int indicates the integer part of the argument and fs is the sampling frequency of the BS signal. The constant 0.006 is empirically set and justified by the efficient performance of the algorithm [23]. The FD of the data in each sliding window is calculated separately. To ensure that the data lengths before and after the FD calculation are equal, the first and last FD values are used to pad the sequence at both ends. After the FD sequence is calculated, the peak values are extracted to ensure effective recognition of the BSs. The peak extraction adopts the FD-peak peeling algorithm (FD-PPA) [29].
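A sketch of the windowed Katz computation (Python; non-overlapping windows and edge padding approximate the sliding-window scheme described above):

```python
import numpy as np

def katz_fd(x):
    """Katz fractal dimension of a 1-D waveform segment (unit time step)."""
    n = len(x) - 1                                   # number of steps
    L = np.sum(np.sqrt(1.0 + np.diff(x) ** 2))       # total curve length
    d = np.max(np.sqrt(np.arange(len(x)) ** 2 + (x - x[0]) ** 2))
    return np.log10(n) / (np.log10(n) + np.log10(d / L))

def fd_sequence(signal, fs):
    """Per-window FD using the window length int(0.006 * fs)."""
    win = int(0.006 * fs)
    fds = np.array([katz_fd(signal[i:i + win])
                    for i in range(0, len(signal) - win + 1, win)])
    fds = np.repeat(fds, win)                        # expand to sample rate
    return np.pad(fds, (0, len(signal) - len(fds)), mode="edge")
```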
FD-PPA makes the EBSs more obvious in the waveform, but endpoint detection is still needed to extract the EBSs. The purpose of voice activity detection (VAD) technology is to accurately identify the starting and ending points of speech from a segment of signal containing speech, distinguishing speech from non-speech signals; it is an important aspect of speech processing technology.
As for the BS signal, we identify as EBSs the segments satisfying certain conditions, while the others are considered non-bowel-sound signals. In this paper, the time series after FD-PPA is used as the input sequence to judge the starting and ending points of EBSs. Three thresholds are set: the threshold for entering a BS segment, the minimum length below which an identified segment is treated as noise, and the maximum allowed mute length within a BS segment. Based on these three thresholds, the endpoints of the EBSs are determined. As a rule of thumb, the threshold for entering a BS segment is set to 1.01: when the input value is greater than 1.01, it is considered the starting point of an EBS. The second parameter is the minimum duration threshold of an EBS signal; BS segments shorter than this threshold are considered noise, and this threshold is set to 50 milliseconds [30]. The maximum mute length allowed within a BS segment is the third threshold, set to 250 ms. If the mute length within a BS segment is less than this value, the BS is considered unfinished; otherwise, the BS segment is considered finished.
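A sketch of this three-threshold endpoint detector (Python; fd_seq is the FD sequence after FD-PPA, and durations are converted to samples):

```python
def detect_ebs(fd_seq, fs, enter_thr=1.01, min_len_ms=50, max_gap_ms=250):
    """Return (start, end) sample indices of effective bowel sounds."""
    min_len = int(min_len_ms * fs / 1000)
    max_gap = int(max_gap_ms * fs / 1000)
    segments, start, gap = [], None, 0
    for i, v in enumerate(fd_seq):
        if v > enter_thr:
            if start is None:
                start = i              # entering a candidate BS segment
            gap = 0
        elif start is not None:
            gap += 1
            if gap > max_gap:          # mute too long: segment finished
                end = i - gap
                if end - start >= min_len:   # shorter segments are noise
                    segments.append((start, end))
                start, gap = None, 0
    if start is not None and len(fd_seq) - gap - start >= min_len:
        segments.append((start, len(fd_seq) - gap))
    return segments
```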
After the VAD, many other kinds of sound signals remain mixed in, such as heart sounds, breath sounds and background noises similar to BSs. Limited by the problems of environmental noise collection and filter residue, we set three thresholds, based on experience, to remove three kinds of residual noise. Specifically, the envelope of each EBS is obtained by the complex analytic wavelet transform [31]. First, we exclude sound segments whose envelope maximum is less than 50: segments with too small an amplitude are considered noise. Second, in the measured data the confounding of heart sounds is obvious; we extracted the envelope of each sound segment, calculated the number of peaks, and, based on experience in judging heart sounds, ruled out sound segments whose peak number is less than 3. Third, for BS segments with a very small signal-to-noise ratio, residual noise could still be identified as a bowel sound and also needed to be removed; for such segments we filter out, based on experience, those whose envelope peak number exceeds 3 within a length of 1000 sampling points.
Characteristic values extraction
The characteristic values (CVs) can quantitatively reflect the characteristics of BSs, so we extracted linear and nonlinear CVs for quantitative evaluation and statistical analysis. The linear CVs are mainly time-domain parameters, as shown in Table 5.
Physiological signals have been shown to be chaotic [32]. As basic physiological signals, bowel sounds also have nonlinear dynamic characteristics. Therefore, nonlinear CVs are calculated in this paper.
Recurrence quantification analysis (RQA) [33] can measure the complexity of a short, nonstationary characteristic signal with noise [34]. It has been broadly applied in the analysis of physiological data [35][36][37]. In this paper, phase space reconstruction is carried out for each EBS signal; based on the recurrence plot, recurrence quantification analysis is carried out and quantitative parameters are extracted [38], as shown in Table 6. There are multiple EBSs in each period, so to enable the subsequent statistical analysis, the mean value (mean) and standard deviation (std) of each CV in each period are calculated. Statistical analyses were performed using IBM SPSS Statistics 25. For data satisfying a normal distribution, the paired t-test was used; otherwise, the rank-sum test was used.
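A minimal sketch of this decision logic (Python with SciPy in place of SPSS; the Wilcoxon signed-rank test stands in for the paired rank-based comparison, and the function name is illustrative):

```python
import numpy as np
from scipy import stats

def compare_periods(pre, post, alpha=0.05):
    """Paired comparison of one CV between two recording periods."""
    pre, post = np.asarray(pre), np.asarray(post)
    diffs = post - pre
    if stats.shapiro(diffs).pvalue > alpha:      # differences look normal
        _, p = stats.ttest_rel(pre, post)        # paired t-test
    else:
        _, p = stats.wilcoxon(pre, post)         # rank-based alternative
    return p, p < alpha                          # p-value, significant?
```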
A value of p < 0.05 was considered to indicate statistical significance.
Figure 1 The bowel sounds signal after adaptive filtering and wavelet denoising.
Figure 2 Overview of BSs data acquisition, processing, and analysis. BS is short for bowel sound. Pre-op is short for before operation. Pro-op is short for after operation. 3h-Pro-op is short for three hours after operation. FD-PPA is short for FD-peak peeling algorithm. EBS is short for effective bowel sound. | 2020-04-16T09:17:52.810Z | 2020-04-14T00:00:00.000 | {
"year": 2020,
"sha1": "1a08a030ea5055746238d061fd9b6182869110d2",
"oa_license": "CCBY",
"oa_url": "https://biomedical-engineering-online.biomedcentral.com/track/pdf/10.1186/s12938-020-00805-z",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "f368ca110639c9983dc5a844b77a7277c651fb9e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
56299606 | pes2o/s2orc | v3-fos-license | Dynamic Fracture Model for Acoustic Emission
We study the acoustic emission produced by micro-cracks using a two-dimensional disordered lattice model of dynamic fracture, which allows us to relate the acoustic response to the internal damage of the sample. We find that the distributions of acoustic energy bursts decay as a power law, in agreement with experimental observations. The scaling exponents measured in the present dynamic model can be related to those obtained in the quasi-static random fuse model.
Crackling noise [1] is widely observed in systems as different as superconductors [2], magnets [3] or plastically deforming crystals [4]. A typical example is the acoustic emission (AE) recorded in a stressed material before failure. The noise is a consequence of micro-cracks forming and propagating in the material and should thus provide an indirect measure of the damage accumulated in the system. For this reason, AE is often used as a nondestructive tool in material testing and evaluation. Beside these practical applications, understanding the statistical properties of crackling noise has become a challenging theoretical problem. The distribution of crackle amplitudes follows a power law, suggesting an interpretation in terms of critical phenomena and scaling theories. This behavior has been observed in several materials such as wood [5], cellular glass [6], concrete [7] and paper [8] to name just a few.
The statistical properties of fracture in disordered media are captured qualitatively by lattice models, describing the medium as a discrete set of elastic bonds with randomly distributed failure thresholds [9,10,11]. After each crack the stress is redistributed in the lattice in a quasi-static approximation, i.e. the crack velocity is assumed to be much slower than stress relaxation. Thus acoustic waves are not taken into account, and the activity is monitored by the damage evolution or by the dissipated elastic energy. Numerical simulations indicate that microcracks propagate in avalanches, giving rise to a heterogeneous response. The avalanche distribution is typically described by power law distributions, and the results are usually interpreted in the framework of phase transitions [12,13,14,15,16]. Despite the fact that critical phenomena are normally associated with a certain degree of universality (i.e. the scaling exponents should not depend on micro-structural details), there has so far been no quantitative agreement between models and experiments. A reason that could account for this discrepancy is the absence of acoustic waves in most models. It is then not obvious how to relate AE activity to internal avalanches.
Dynamic lattice models have been widely used in the past to analyze fracture processes [17,18,19,20], but although acoustic waves are explicitly included, the AE signal is usually not analyzed. Here we use a lattice model for dynamic fracture in a disordered medium to obtain a direct correspondence between the recorded AE activity and the internal damage evolution. We find that the cumulative AE amplitudes are directly related -- by a power law -- to the cumulative damage. Next, we measure the distribution of the AE burst energies and find a power law with an exponent β ≃ 1.7, independent of the loading rate. This exponent can be related to the exponent describing failure avalanches in quasi-static models [12,13,14,15,16].
We consider a scalar model of dynamic fracture where a two-dimensional lattice is loaded in mode III [18]: the lattice lies in the (x, y) plane and deformation occurs along the z axis, so that the equations of elasticity become scalar. The equation of motion for the antiplanar displacement u of a site with coordinates (i, j) is

$$\rho\,\ddot{u}_{i,j} = K \sum_{(l,m)} \left( u_{l,m} - u_{i,j} \right) - \Gamma\,\dot{u}_{i,j},$$

where the sum runs over the nearest neighbors (l, m) of site (i, j), K is the elastic constant, ρ is the density, and dissipation is simulated by viscous damping with a constant Γ. In order to suppress some lattice effects, we use a 45 degree tilted square lattice. A constant strain rate is imposed on the model by moving the boundary sites on two opposite boundaries at constant velocities V and −V, respectively. Periodic boundary conditions are imposed in the other direction. Disorder is simulated by assigning randomly distributed failure thresholds: a bond is removed (i.e. K is set to zero) when Δu > f_c, with f_c uniformly distributed in [0, 1]. Notice that in the quasistatic limit (V → 0, ρ → 0, Γ → 0) the model reduces to the random fuse model (RFM), where a lattice of fuses with random thresholds is subject to an increasing voltage [10,11]. Due to the scalar nature of our model there is a direct mapping between elastic and electric parameters [9]. When a bond is stretched beyond its threshold, its elastic constant is set to zero and an elastic wave is emitted. Due to the anti-plane constraint on the displacements, we only have transverse wave propagation, with sound speed c = √(K/ρ) = 1 in our units. The damping constant is chosen to be Γ = 0.1, so that the typical length traveled by a wave is a little smaller than the lattice size. For smaller values of Γ, ringing effects and reflected waves do not allow us to separate the single pulses and the lattice breaks at once. On the other hand, excessive damping leads to very small acoustic activity and the sample breaks suddenly at the edges. Even if the damping constant is small, reflected waves can induce boundary failure, due to the rigidity of the loaded edge. Thus we do not allow bonds to fail in two boundary layers of width l = 5 close to the loaded edges. This corresponds to applying the load through a soft contact. The model is simulated for a variety of loading velocities, all much lower than the sound speed (V ≪ c).
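A minimal sketch of one explicit integration step of this equation of motion (Python; square lattice with periodic boundaries only, so the tilted geometry, driven edges, and bond-breaking bookkeeping of the full model are omitted):

```python
import numpy as np

def step(u, v, K, rho=1.0, gamma=0.1, dt=0.05):
    """One explicit Euler step of the damped scalar lattice model.

    u, v -- displacement and velocity fields, shape (N, N)
    K    -- per-direction bond stiffness, shape (4, N, N);
            a broken bond has K = 0
    """
    force = np.zeros_like(u)
    for k, shift in enumerate([(1, 0), (-1, 0), (0, 1), (0, -1)]):
        # elastic force from each nearest neighbour
        force += K[k] * (np.roll(u, shift, axis=(0, 1)) - u)
    a = (force - gamma * v) / rho   # equation of motion
    v = v + a * dt
    u = u + v * dt
    return u, v
```

Bond breaking (zeroing the corresponding K entry whenever |Δu| exceeds the local threshold f_c) would be applied between steps.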
Measuring the displacements of every lattice site and calculating the forces at every time step, we have obtained the stress-strain curve for four different values of the applied strain rate. In Fig. 1 we show that the stress is a linear function of the strain up to the yield point, which precedes the total failure of the sample. The applied strain rate has little effect on the linear part of the curve, while it influences the curve after the yield point.
Monitoring the activity of some particular lattice sites we have direct access to the AE signal. These sites mimic the effect of transducers coupled to the material in a typical AE experiment. In a typical run, we record the displacements, velocities and accelerations of four sites in the boundary layer and two sites in the interior. Typically, AE distributions are recorded from a single site and averaged over ten realizations of the disorder. We have tested that the statistical properties of the signal do not vary for different boundary sites, while there is a clear difference between boundary and inner sites. In the following, we concentrate on sites in the boundary layer, in order to avoid excessive fluctuations due to failures occurring on neighboring bonds in the inner region.
An example of the typical signals recorded with our model is reported in Fig. 2. Large acoustic activity is visible in the upper panel, where we show the local acceleration a of a boundary site as a function of time. We can also monitor the velocity signal, which is simply related to the acceleration and displays the same features. In the present model, it is convenient to use the acceleration as an AE monitoring tool, since the velocity has a bias induced by the external loading: even in the absence of cracking the lattice has a non-vanishing velocity. We define the associated cumulative energy as

$$E(t) = \int_0^t a^2(t')\, dt'.$$

The behavior of the cumulative acoustic energy E(t) is typically monitored in AE experiments. In some cases, E(t) is found to increase as a power law [5], or exponentially in other cases [8]. In general one expects a marked peak close to failure, as we also observe in Fig. 2, obtained for V = 10⁻³. The curve is well fitted by a cubic law, E ∼ t³. In this way, AE can be used as a tool for damage evaluation. In our model, we have direct access to the internal damage D, defined as the total number of failed bonds. We find that D increases linearly with time (see Fig. 3), apart from a rapid increase very close to failure. Rescaling the curves with the loading rate, one sees that D is in fact a linear function of the applied strain γ ≡ (V t)/L (see the inset of Fig. 4). These observations thus lead to a direct scaling relation between internal damage and released acoustic energy: Fig. 4 shows that E scales as expected as E ∼ D³. A direct consequence of this result is that the measured acoustic energy is proportional to the released elastic energy E_el.

A large amount of theoretical activity has been devoted in the past to understanding the origin of the power law distributions of AE amplitudes widely observed in material fracture. Most of the analysis was devoted to quasi-static models, such as the RFM, where fracture was shown to occur in damage bursts, distributed as P(D) ∼ D^(−τ) with τ ≃ 2.5 [12,13]. This value is in perfect agreement with the result τ = 5/2 obtained exactly [21] for the exponent of the avalanche distribution of the fiber bundle model (FBM) [22], where N fibers with random failure thresholds are loaded in parallel. It was thus conjectured that the long-range stress transfer present in the RFM was equivalent to the infinite range load redistribution of the FBM, placing the two models in the same universality class [12,13]. A similar exponent was found in a vectorial fracture model, so that this class could be even broader [13]. Comparing this result with AE experiments is problematic, since quasi-static models do not account for wave propagation.
Here we can directly measure the distribution of pulse sizes due to the acoustic activity. In Fig. 5 we report the distribution of energies for the acceleration signal, defining ε ≡ a². In both cases the distribution decays as a power law with an exponent β = 1.7 ± 0.1, independent of the loading rate, which only affects the low end of the distribution. The same law is found for the velocity signal. Experimental results report exponent values in the same range, even if they differ a little from one material to another: for wood the exponent is β = 1.51 ± 0.05, for fiberglass β = 2.0 ± 0.01 [5], β = 1.30 ± 0.1 for paper [8], and β = 1.5 ± 0.1 for experiments on cellular glass [6].
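Estimating such exponents from a set of simulated or measured burst energies is straightforward with a maximum-likelihood (Hill) estimator; a minimal sketch (Python; the lower cutoff e_min must be placed above the rate-dependent low-energy part of the distribution noted above):

```python
import numpy as np

def powerlaw_exponent(energies, e_min):
    """ML (Hill) estimate of beta for P(e) ~ e**(-beta), e >= e_min,
    together with its standard error."""
    e = np.asarray(energies, dtype=float)
    e = e[e >= e_min]
    beta = 1.0 + len(e) / np.sum(np.log(e / e_min))
    return beta, (beta - 1.0) / np.sqrt(len(e))
```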
Using the scaling relation between released acoustic energy and damage discussed above, we can relate the exponent β to τ. From E ∼ D³ and D ∼ t, we expect ε ∼ D². Substituting this expression in the equation for the probabilities, P(ε)dε = P(D)dD, we obtain τ = 1 + 2(β − 1) = 2.4, which is very close to τ = 5/2 measured in the RFM. Thus we conjecture that the acoustic energy exponent measured in our dynamic model is directly related to the damage exponent measured in the corresponding quasi-static model [23].
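Spelling out the change of variables behind this relation (a worked step, using ε ∼ D² as above):

```latex
P(\epsilon)\,d\epsilon = P(D)\,dD
\;\Rightarrow\;
P(\epsilon) = P(D)\,\frac{dD}{d\epsilon}
\propto D^{-\tau}\,\epsilon^{-1/2}
\propto \epsilon^{-\tau/2}\,\epsilon^{-1/2}
= \epsilon^{-(\tau+1)/2},
```

so that matching P(ε) ∝ ε^(−β) gives β = (τ + 1)/2, i.e. τ = 2β − 1 = 1 + 2(β − 1) ≃ 2.4 for β ≃ 1.7.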
In conclusion, we have introduced a lattice model of dynamic fracture which can be used to model AE experiments. The model allows us to clarify important issues in the interpretation of the experiments, namely the relation between internal damage and released acoustic energy. In particular, we derive direct relations between the scaling behavior of failure avalanches and acoustic bursts. It would be interesting to generalize this analysis to more realistic situations, exploring the role of dimensionality, load conditions and lattice anisotropy. However, in comparing the simulated signal with experiments, we should be careful about the definition of the events in the time series, since the amplifier and the AE sensors could bias the recorded waveform, introducing a systematic error in the data. This work has been supported by the European Network contract FMRXCT980183 and the INFM center SMC. | 2018-12-18T02:06:55.128Z | 2002-07-17T00:00:00.000 | {
"year": 2002,
"sha1": "95c9e980a53afc54759f72f3d38758bae594208b",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/0207433",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "02eb3c7fe86ada9f0a7a2cdc14f0c3b01a8543e2",
"s2fieldsofstudy": [
"Physics",
"Materials Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
147704057 | pes2o/s2orc | v3-fos-license | Black Hole -- Galaxy Correlations in Simba
We examine the co-evolution of galaxies and supermassive black holes in the Simba cosmological hydrodynamic simulation. Simba grows black holes via gravitational torque-limited accretion from cold gas and Bondi accretion from hot gas, while feedback from black holes is modeled in radiative and jet modes depending on the Eddington ratio ($f_{Edd}$). Simba shows generally good agreement with local studies of black hole properties, such as the black hole mass--stellar velocity dispersion ($M_{BH}-\sigma$) relation, the black hole accretion rate vs. star formation rate (BHAR--SFR), and the black hole mass function. $M_{BH}-\sigma$ evolves such that galaxies at a given $M_{BH}$ have higher $\sigma$ at higher redshift, consistent with no evolution in $M_{BH}-M_*$. For $M_{BH}<\sim 10^8 M_\odot$, $f_{Edd}$ is anti-correlated with $M_{BH}$ since the BHAR is approximately independent of $M_{BH}$, while at higher masses $f_{Edd}-M_{BH}$ flattens and has a larger scatter. BHAR vs. SFR is invariant with redshift, but $f_{Edd}$ drops steadily with time at a given $M_{BH}$, such that all but the most massive black holes are accreting in a radiatively efficient mode at $z>\sim 2$. The black hole mass function amplitude decreases with redshift and is locally dominated by quiescent galaxies for $M_{BH}>10^{8}M_{\odot}$, but for $z>\sim 1$ star forming galaxies dominate at all $M_{BH}$. The $z=0$ $f_{Edd}$ distribution is roughly lognormal with a peak at $f_{Edd}<\sim 0.01$ as observed, shifting to higher $f_{Edd}$ at higher redshifts. Finally, we study the dependence of black hole properties with \HI\ content and find that the correlation between gas content and star formation rate is modulated by black hole properties, such that higher SFR galaxies at a given gas content have smaller black holes with higher $f_{Edd}$.
INTRODUCTION
During periods of strong activity, accreting supermassive black holes (SMBHs) can be optically identified at the nuclei of their host galaxies, and are referred to as active galactic nuclei (AGN). Optical spectroscopic surveys are able to detect a large number of AGN, albeit being biased toward the brighter and less obscured AGN. X-rays are considered the most reliable method of AGN identification, particularly hard X-rays, owing to a significant reduction in obscuration (Rosas-Guevara et al. 2016). Upcoming radio surveys such as MIGHTEE (The MeerKAT International GHz Tiered Extragalactic Exploration Survey; Jarvis et al. 2016) will likewise greatly assist in identifying AGN regardless of obscuration out to high redshifts. Complementary multi-wavelength data enable characterisation of key host galaxy properties such as stellar masses and star formation rates. By examining the relationship between AGN and galaxy properties across cosmic time, we can constrain models and assemble a more comprehensive picture of the co-evolution of galaxies and their central black holes.
It has long been recognised that SMBH properties correlate with larger-scale properties of their host galaxies. Relations between the black hole mass M_BH and properties such as the galaxy stellar mass M_* (Kormendy & Ho 2013), bulge mass M_bulge (Häring & Rix 2004), bulge luminosity L_bulge (Shankar et al. 2004), velocity dispersion σ (Ferrarese & Merritt 2000), Sérsic index, and core size of massive ellipticals (Thomas et al. 2016) all show fairly tight relations. There is also substantial evidence that the cosmic history of star formation rate and the evolution of the global black hole accretion rate evolve quite similarly (Aird et al. 2010; Rodighiero et al. 2010, 2015; Anglés-Alcázar et al. 2015), showing that galaxies and black holes grow together, albeit with large scatter in individual cases.
A more detailed examination highlights various ways in which galaxies are impacted by their SMBHs. Kormendy & Ho (2013) showed that z ≈ 0 SMBHs correlate tightly with only classical bulges and elliptical galaxies, not with disk properties, suggesting that the processes creating bulges such as mergers may also drive SMBH growth. However, this connection is much less clear at z ∼ 2 − 4, during what is sometimes called the "quasar era". At that time, the largest SMBHs grow via radiatively efficient accretion, in contrast to locally (z ∼ 0) where the most efficient SMBH growth happens in lower mass systems. Heckman & Best (2014) reviewed the general population of black hole-galaxy correlations to quantify where the local population of black holes reside. They divide AGN into two populations, "jet" and "radiative" mode AGN, corresponding to low and high accretion efficiencies, respectively. They find that jet-mode AGN (roughly, AGN with Eddington ratios f_Edd ≲ 0.01) are generally large black holes that reside in massive quenched early-type galaxies, whereas radiative-mode AGN arise from moderate-sized black holes in large galaxies that often still have some disk component and a pseudobulge (Type 2 Radio Quiet AGN), and that have significant star formation. They also find that the growth of SMBHs occurs predominantly in moderately massive galaxies (M_* ∼ 10^10 − 10^11 M_⊙) with young stellar populations.
The importance of black holes in galaxy formation models has grown substantially in recent years owing to the realisation that they are the most likely drivers for galaxy transformation from star-forming spirals to quenched ellipticals (e.g. Somerville & Davé 2015). Models that aim to understand global galaxy-black hole co-evolution must first be shown to reproduce observed galaxy population trends within a cosmological context, in order to be physically plausible. This has long been a major challenge for galaxy formation models.
Pioneering work by Springel et al. (2005a) and Di Matteo et al. (2005) incorporated black hole growth and associated feedback into hydrodynamical galaxy formation simulations. Their simulations were able to efficiently grow black holes during galaxy mergers, and the energetic AGN feedback injected thermally was able to clear out the gas and leave a quenched galaxy. By extrapolating results from a library of merger simulations, Hopkins et al. (2006) was able to show that the resulting properties matched a number of key galaxy-black hole observables. The resulting scenario highlighted the importance of mergers in driving rapid growth of both the bulge and black hole, culminating in a final short-lived quasar phase that blows out the remaining gas (Hopkins et al. 2007). Di Matteo et al. (2008) incorporated this model into cosmological runs, which was broadly successful at reproducing galaxy-black hole correlations. However, the feedback from the black holes was ultimately unable to sufficiently quench star formation to the extent required to match observed massive galaxies.
Semi-analytic models (SAMs) of galaxy formation also included black holes and associated feedback in order to quench galaxies. In contrast to the merger-driven scenario, in SAMs quenching was found to be effective when enacted via heating of halo gas (Croton et al. 2006; Bower et al. 2006), called "radio mode" or "maintenance mode" feedback (Somerville et al. 2008). It was found that mergers were not sufficiently frequent and did not release sufficient energy to keep galaxies fully quenched, which was later confirmed using cosmological hydrodynamic simulations (Gabor & Davé 2012). Thus the dominant mode for how black holes grow, and how they impact their host galaxies in order to reproduce observed scaling relations, remained controversial. Over the last five years there has been substantial work in improving models for black hole feedback in hydrodynamic simulations. The key observables that simulations aimed to reproduce are the numbers of quenched galaxies and the exponential cutoff in the stellar mass function, while simultaneously reproducing black hole-galaxy scaling relations. The Illustris simulation (Vogelsberger et al. 2014; Genel et al. 2014) was able to produce some quenched galaxies (Sijacki et al. 2015) by turning up the feedback strength relative to Di Matteo et al. (2008), but was not able to reproduce the correct colour distribution or produce an exponential cutoff, and moreover too strongly evacuated massive halos of their gas (Genel et al. 2014). The EAGLE simulation (Schaye et al. 2015) employed quite a different feedback model, and was able to reproduce both a bimodal colour distribution (Trayford et al. 2017) and a reasonable mass function. Illustris-TNG (Springel et al. 2018) improved upon Illustris's feedback model by including two-mode feedback (Weinberger et al. 2018) reminiscent of observations (Best & Heckman 2012), yielding galaxy properties similar to those observed. All these simulations, however, assumed that the black hole feedback was spherical or quasi-spherical; in contrast, the Horizon-AGN simulation (Dubois et al. 2012, 2014; Volonteri et al. 2016) used bipolar feedback that is more reminiscent of observed AGN feedback, but was unable to produce sufficiently quenched massive galaxies.
All the above simulations modeled black hole growth via Bondi-Hoyle-Lyttleton accretion (Hoyle & Lyttleton 1939;Bondi & Hoyle 1944;Bondi 1952) or variations thereof (Dubois et al. 2012;Choi et al. 2012;Rosas-Guevara et al. 2015). Owing to Bondi accretion's squared dependence on the mass of the SMBH, black hole growth must be selfregulated by feedback from the black hole itself in order to avoid runaway growth (Anglés-Alcázar et al. 2015). This self-regulation is difficult to achieve without quasi-spherical feedback, yet observations of black hole feedback from jets or outflows generally show bipolar feedback. It is worth noting that even feedback that is implemented spherically at small scales can result in bipolar outflows on larger scales, owing to collimation by the surrounding gas.
The recent Simba simulation (Davé et al. 2019) employed a different model for black hole growth and feedback. Simba's model is based on the idea that torques owing to disk instabilities are responsible for dissipating angular momentum and allowing the black hole accretion disk to be fed. Hopkins & Quataert (2011) found that non-axisymmetric perturbations in the stellar gravitational potential produce orbit crossings and shocks that efficiently remove angular momentum even at scales ≲ 10 pc, and derived analytic equations for the loss of angular momentum in the presence of such shocks, resulting in a model capable of reproducing gas inflow rates down to 0.1 pc that matched very high-resolution simulations (orders of magnitude better than the Bondi parameterisation). Simba implements this gravitational torque-limited model in a sub-grid manner (Anglés-Alcázar et al. 2013, 2017a) for accretion from cold gas, while still using Bondi accretion from hot gas, where the Bondi assumption of gravitational capture from a hot medium is more appropriate. Anglés-Alcázar et al. (2013) showed that this so-called torque-limited accretion model results in galaxy-black hole scaling relations even without any self-regulating feedback, and Anglés-Alcázar et al. (2017a) showed that this result holds even in the presence of strong black hole feedback. Because Simba's accretion model does not require self-regulation, it is possible to implement black hole feedback in a more realistic way. In particular, Simba uses bipolar outflows whose velocity increases rapidly as f_Edd drops, broadly motivated by the two-mode feedback seen observationally (Heckman & Best 2014). In Davé et al. (2019), we showed that this model produces a correlation between M_BH and M_* that is in good agreement with observations, along with the correct fraction of quenched galaxies as a function of stellar mass.
In this paper, we extend the preliminary results in Davé et al. (2019) to more comprehensively consider a wider range of black hole properties, their relationship to host galaxy properties, and their evolution. We focus primarily on Simba's predictions for black hole masses and accretion rates and their correlation with galaxy properties such as M_*, SFR, and H i mass. We defer a more careful comparison in the observational plane to future work. We show that in most cases Simba produces good agreement with available observables, and makes interesting predictions for the relationship to host galaxy properties that can be tested in future multi-wavelength surveys.
In §2 we describe the Simba simulations and accretion and feedback models, followed by §3 which shows the resulting scaling relations in the local universe and their evolution with redshift. We then discuss our findings and conclusions in §4.
SIMULATIONS
We use the Simba simulation (Davé et al. 2019), run with a version of the cosmological gravity+hydrodynamics code Gizmo (www.tapir.caltech.edu/~phopkins/Site/GIZMO.html; Hopkins 2015) in its Meshless Finite Mass (MFM) hydrodynamics solver. Simba models a (100 h⁻¹ Mpc)³ comoving randomly-selected volume down to z = 0 with 1024³ dark matter particles and 1024³ gas elements. Simba includes radiative cooling and photoionisation heating using the Grackle-3.1 package (Smith et al. 2017), assuming a Haardt & Madau (2012) ionising background that incorporates self-shielding on the fly via the prescription in Rahmati et al. (2013). Star formation is modeled by a Schmidt (1959) law on the molecular hydrogen component, where the H2 fraction is computed via a subgrid prescription following Krumholz & Gnedin (2011). Chemical enrichment is followed for 9 metals, from Type II and Type Ia supernovae (SNe) and Asymptotic Giant Branch (AGB) stars. The Type II SN energy is assumed to (instantaneously) drive galactic outflows, implemented via decoupled, kinetic, two-phase winds, with scalings of mass outflow rates with galaxy stellar mass based on the particle tracking results of Anglés-Alcázar et al. (2017b) using the Feedback In Realistic Environments (FIRE) simulations (Hopkins et al. 2014, 2018). Energy from Type Ia and AGB stars is also added at later times by tracking stellar evolution based on the Bruzual & Charlot (2003) stellar population synthesis model.
We adopt a standard ΛCDM cosmology with parameters Ω_Λ = 0.7, Ω_m = 0.3, Ω_b = 0.048, h = 0.68, σ_8 = 0.82, and n_s = 0.97 (Planck Collaboration et al. 2016). Cosmological initial conditions are generated using Music (Hahn & Abel 2011), with the minimum comoving softening length set to 0.5% of the mean interparticle distance for dark matter particles, corresponding to a full minimum softening radius of ε = 1.4 h⁻¹ kpc with a 64-neighbour cubic spline kernel. The minimum gas smoothing length is half the minimum softening length. Further modeling details are available in Davé et al. (2019). Given the centrality of the black hole model to the results of this work, we present the Simba black hole growth and feedback models in more detail in the following sections.
Black Hole Seeds
Many uncertainties still remain with regard to black hole seeding (Volonteri 2010). For simplicity, we do not attempt to mimic the physics of any seed formation mechanism in detail, and instead assume that a black hole appears at the center of each galaxy once it exceeds a mass where efficient black hole growth can occur, which we take to be M_* > 10^9.5 M_⊙. Below this mass, higher resolution simulations have shown that local stellar feedback disrupts black hole accretion and suppresses growth (e.g. Dubois et al. 2015; Rosas-Guevara et al. 2016; Anglés-Alcázar et al. 2017c; Habouzit et al. 2017). An on-the-fly fast friends-of-friends (FoF) algorithm is used to identify galaxies. If the FoF galaxy does not already include a black hole particle and is above the threshold stellar mass, we insert a seed of mass M_seed = 10^4 h⁻¹ M_⊙ at the location of the most bound gas particle. This places the black hole well below the observed M_BH − M_* relation, but as discussed in Anglés-Alcázar et al. (2013) and also shown below, the black hole grows fairly rapidly onto the relation via torque-limited accretion.
Black Hole Accretion
We employ a two-mode accretion model for the growth of black holes in Simba. The first mode follows the torque-limited accretion model presented by Anglés-Alcázar et al. (2017a) for cold gas (T < 10^5 K), while the second mode models Bondi accretion solely from hot gas (T > 10^5 K).
Gravitational Torque-Limited Model
Accretion rates are based on the gravitational torque model of Hopkins & Quataert (2011), which estimates the gas inflow rate, Ṁ_Torque, driven by gravitational instabilities from galactic scales down to the black hole accretion disc:

$$\dot{M}_{\rm Torque} = \epsilon_{\rm T}\, f_{\rm d}^{5/2} \left(\frac{M_{\rm BH}}{10^{8}\,{\rm M}_\odot}\right)^{1/6} \left(\frac{M_{\rm enc}(R_0)}{10^{9}\,{\rm M}_\odot}\right) \left(\frac{R_0}{100\,{\rm pc}}\right)^{-3/2} \left(1 + \frac{f_0}{f_{\rm gas}}\right)^{-1} {\rm M}_\odot\,{\rm yr}^{-1},$$

where f_d is the disc fraction of the total mass M_enc(R_0) enclosed within radius R_0, f_gas is the gas fraction of the disc, and f_0 ≈ 0.31 f_d² (M_d(R_0)/10⁹ M_⊙)^(−1/3), with M_d(R_0) the disc mass within R_0. We define ε_T ≡ ε_m × α_T, where α_T = 5 is the normalization of Ṁ_Torque proposed by Hopkins & Quataert (2011) and ε_m is a free parameter introduced to account for processes that affect the radial transport of gas at unresolved scales. We tune this to ε_m = 0.1 in order to match the amplitude of the M_BH − M_* relation at z = 0. R_0 is taken to be the radius enclosing 256 gas elements, with an upper limit of 2 h⁻¹ kpc. Evaluating this equation requires the separation of spheroidal and disk components within R_0, which is done by means of a kinematic decomposition (Anglés-Alcázar et al. 2013).
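A sketch evaluating this rate (Python; masses in M_⊙, R0 in pc; the f_0 fitting form follows Hopkins & Quataert 2011, and the argument layout is illustrative):

```python
def mdot_torque(eps_T, f_d, M_BH, M_enc, M_d, f_gas, R0):
    """Torque-limited inflow rate in Msun/yr (Hopkins & Quataert 2011 scaling).

    f_d   -- disc fraction of the mass M_enc enclosed within R0
    M_d   -- disc (stars + gas) mass within R0, in Msun
    f_gas -- gas fraction of the disc
    """
    f0 = 0.31 * f_d**2 * (M_d / 1e9) ** (-1.0 / 3.0)
    return (eps_T * f_d**2.5
            * (M_BH / 1e8) ** (1.0 / 6.0)
            * (M_enc / 1e9)
            * (R0 / 100.0) ** (-1.5)
            / (1.0 + f0 / f_gas))
```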
Bondi-Hoyle-Lyttleton Parameterisation
The Bondi model has been widely used as a prescription for black hole growth in galaxy formation simulations (e.g. Springel et al. 2005a; Dubois et al. 2012; Choi et al. 2012). For a black hole of mass M_BH moving at a velocity v relative to a uniform distribution of gas with density ρ and sound speed c_s, the Bondi rate is given by

$$\dot{M}_{\rm Bondi} = \alpha\,\frac{4\pi G^2 M_{\rm BH}^2\,\rho}{\left(v^2 + c_s^2\right)^{3/2}},$$

where α is a dimensionless parameter usually used to boost accretion rates and partially compensate for high mean gas temperatures as a consequence of the multi-phase subgrid model of star formation and/or the lack of resolution required to resolve the Bondi radius. We do not use a boost factor, and rather suppress Ṁ_Bondi by the same factor as Ṁ_Torque for consistency (α ≡ ε_m = 0.1).
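A direct transcription of this rate (Python, cgs units; the function name is illustrative):

```python
import numpy as np

G = 6.674e-8  # gravitational constant, cgs

def mdot_bondi(M_BH, rho, cs, v, alpha=0.1):
    """Bondi-Hoyle-Lyttleton accretion rate in g/s (all inputs in cgs)."""
    return alpha * 4.0 * np.pi * G**2 * M_BH**2 * rho / (v**2 + cs**2) ** 1.5
```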
Numerical Implementation
We apply the torque-limited accretion formula to all gas within R_0 that has a temperature T < 10^5 K, while for gas with T > 10^5 K we employ the Bondi formula, computing ρ and c_s from only the hot gas within R_0. A given black hole can thus accrete gas in both Bondi and torque-limited modes at any given timestep. The total accretion rate onto the black hole is then

Ṁ_BH = (1 − η) × (Ṁ_Torque + Ṁ_Bondi),   (7)

where η = 0.1 is the radiative efficiency. We limit Bondi accretion to the Eddington rate, while torque-limited accretion is capped at 3× the Eddington rate. Black holes are further prevented from growing by more than 0.1% of their mass in a single simulation time step, to avoid large stochastic fluctuations, but this limit is very rarely invoked. Black hole accretion proceeds stochastically (Springel et al. 2005b): gas particles within R_0 have a fraction of their mass subtracted and added to the black hole, with a probability that statistically satisfies the mass growth (eq. 7). If a particle has become sufficiently small compared to its original mass, it is swallowed completely.
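The per-timestep logic implied by these caps can be sketched as follows; this is our own simplified reading of the scheme (in particular, the ordering of the Eddington caps relative to the radiative-efficiency factor is an assumption), not the actual GIZMO/Simba implementation.

```python
def accrete_step(mdot_torque, mdot_bondi, mdot_edd, M_BH, dt, eta=0.1):
    """One illustrative accretion step: cap each channel relative to
    Eddington, remove the radiated fraction, then limit total growth to
    0.1% of M_BH per timestep.  Rates in Msun/yr, dt in yr, mass in Msun."""
    mdot = (1.0 - eta) * (min(mdot_bondi, mdot_edd)
                          + min(mdot_torque, 3.0 * mdot_edd))
    dM = min(mdot * dt, 1e-3 * M_BH)  # 0.1% growth cap, rarely invoked
    return M_BH + dM
```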
Black Hole Feedback
Simba employs three black hole feedback mechanisms: (a) radiative feedback in high-f_Edd black holes; (b) jet feedback in low-f_Edd black holes; (c) X-ray feedback. The first two are implemented purely kinetically and purely bipolar, with the direction set as ± the direction of the angular momentum of the gas within R_0. The motivation for this is based on the observed dichotomy of black hole accretion rates and the corresponding properties of their outflows (Heckman & Best 2014), although Whittam et al. (2018) report a more continuous distribution of f_Edd when the radio AGN sample is selected from deeper radio observations. At high f_Edd (≳ a few percent), AGN are observed to drive multi-phase winds at velocities of ∼10^3 km s^-1 that include warm molecular and ionised gas. At low Eddington ratios, AGN mostly drive hot gas in collimated jets at velocities of ∼10^4 km s^-1, which can be seen to inflate super-virial temperature bubbles in the surrounding hot gas. Within jet modes, this dichotomy can be seen between "high excitation" (HERG) and "low excitation" (LERG) radio galaxies: the former are found typically in lower mass, bluer host galaxies, and the latter in more massive, earlier types. Simba thus models AGN feedback in such a way as to directly mimic the energy injection into the large-scale surrounding gas, using bipolar feedback with properties taken as much as possible from AGN outflow observations.
Kinetic Feedback
For high-f_Edd mode outflows, the outflow velocity is chosen based on ionised gas linewidth observations of X-ray detected AGN from SDSS (Perna et al. 2017), parameterised in terms of the black hole mass as v_w,EL = 500 + 500 (log M_BH − 6)/3 km s^-1; these are referred to as AGN winds. If f_Edd < 0.2, we slowly transition to the jet mode, where the velocity becomes increasingly higher as f_Edd drops:

v_w,jet = v_w,EL + 7000 log(0.2/f_Edd) km s^-1,

with the jet velocity increase capped at 7000 km s^-1, so that jets reach full speed once f_Edd ≤ 0.02. Jets are only allowed in black holes with M_BH > M_BH,lim, motivated by observations showing that strong radio jets arise essentially only in galaxies hosting massive black holes (e.g. Barišić et al. 2017). We conservatively choose M_BH,lim = 10^7.5 M_⊙. This mass limit prevents small black holes with temporarily low accretion rates from driving high-powered jets.
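Putting the wind and jet pieces together, the outflow velocity assignment can be sketched as below. This is our reading of the prescription (the jet scaling above is reconstructed from Davé et al. 2019), with our own function name.

```python
import numpy as np

def wind_velocity(M_BH, f_edd, M_BH_lim=10**7.5):
    """Kick velocity in km/s: the radiative-mode wind velocity plus the
    jet boost at low f_Edd, capped at +7000 km/s and restricted to
    black holes above M_BH_lim (masses in Msun)."""
    v_el = 500.0 + 500.0 * (np.log10(M_BH) - 6.0) / 3.0
    if f_edd < 0.2 and M_BH > M_BH_lim:
        v_el += min(7000.0 * np.log10(0.2 / f_edd), 7000.0)
    return v_el
```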
AGN-driven outflows are modelled by stochastically kicking gas particles around the black hole with velocity v_w, with a probability proportional to each particle's kernel weight w_j and normalised so as to statistically reproduce the target outflow rate; here f_m is the fraction of mass accreted by the black hole and subtracted from the gas particle before ejection. This gives an outflow mass loading factor of Ṁ_out/Ṁ_BH = (1 − f_m)/f_m. We set f_m for each outflow event such that the momentum ejected by the black hole is 20L/c, where L = η Ṁ_BH c^2.
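The momentum constraint fixes the mass loading directly. As a worked example (our derivation from the quantities defined above):

```latex
\dot{M}_{\rm out}\, v_w = \frac{20\,L}{c}, \qquad L = \eta\,\dot{M}_{\rm BH}\,c^{2}
\;\;\Longrightarrow\;\;
\frac{1-f_m}{f_m} = \frac{\dot{M}_{\rm out}}{\dot{M}_{\rm BH}}
 = \frac{20\,\eta\,c}{v_w}
 \approx 600\left(\frac{v_w}{10^{3}\,{\rm km\,s^{-1}}}\right)^{-1}.
```

For η = 0.1, radiative-mode winds at v_w ∼ 10^3 km s^-1 thus carry mass loadings of order 600, while full-speed jets near 8000 km s^-1 carry mass loadings of order 75.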
X-ray Feedback
The energy input rate due to X-rays emitted by the accretion disk is computed following Choi et al. (2012), assuming a radiative efficiency η = 0.1. In gas-rich galaxies, severe radiative losses are expected in the ISM, hence we only apply X-ray feedback in galaxies below a gas fraction threshold of f_gas < 0.2, and only in galaxies with full-velocity jets (v_w ≳ 7000 km s^-1). For further details see Davé et al. (2019).
Analysis of Simulations
Simba outputs 151 snapshots down to z = 0, but here we are primarily concerned with the snapshots at z = 0, 0.5, 1, 2, 3. Galaxies are identified as gravitationally bound collections of gas and star particles using a friends-of-friends galaxy finder. Black holes are assigned to the galaxy to which they are most gravitationally bound, so a galaxy may host several black holes. We consider the largest black hole within the galaxy to be the central black hole and refer to its mass as the black hole mass; the other black holes are typically much smaller and add no significant mass relative to the central one. Galaxies are post-processed using the YT-based package caesar (caesar.readthedocs.io), which outputs all pre-computed galaxy information and key properties in a convenient HDF5 catalogue. All results here are obtained from caesar catalogues generated from simulation snapshots at the specified redshifts.
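For readers working with such catalogues, a typical access pattern might look like the following. `caesar.load` is the documented entry point of the caesar package; the snapshot filename here is hypothetical, and the exact attribute and key names should be checked against the catalogue's own listing.

```python
import caesar

# Load a pre-computed caesar catalogue (hypothetical filename) and print
# stellar and black hole masses for a few galaxies.  The masses dict keys
# follow our reading of the caesar documentation.
obj = caesar.load('m100n1024_151.hdf5')
for gal in obj.galaxies[:5]:
    print(gal.masses['stellar'], gal.masses['bh'])
```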
More details about the simulations and the black hole accretion and feedback models can be found in Anglés-Alcázar et al. (2013, 2017a) and Davé et al. (2019).
RESULTS
Black hole properties are observed to be correlated with properties of their host galaxy. In this section we show predictions for key black hole-galaxy scaling relations, as well as distributions of black hole properties. The goal is to assess how well Simba reproduces the observed supermassive black hole population, and to identify trends in black hole properties arising from the input physics.
In Simba, black hole particles carry two properties: the black hole mass M_BH, and the instantaneous black hole accretion rate Ṁ_BH (eq. 7). From these, it is possible to compute the Eddington ratio

f_Edd ≡ Ṁ_BH / Ṁ_Edd,   (11)

where Ṁ_Edd = 4πG M_BH m_p / (η σ_T c) is the Eddington accretion rate for radiative efficiency η = 0.1, with m_p the proton mass and σ_T the Thomson cross-section. In this paper, we focus on relating these intrinsic black hole properties to global galaxy properties such as stellar mass M_*, stellar velocity dispersion σ, star formation rate (SFR), and H i fraction f_HI = M_HI/M_*. We defer a comparison in terms of observational quantities such as AGN luminosities in various bands to future work.
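In practical units, the conversion in equation 11 is straightforward; a minimal sketch:

```python
import numpy as np

# cgs constants
G, M_P, SIGMA_T, C = 6.674e-8, 1.673e-24, 6.652e-25, 2.998e10
MSUN, YR = 1.989e33, 3.156e7

def f_edd(mdot_msun_yr, M_BH_msun, eta=0.1):
    """Eddington ratio from an accretion rate in Msun/yr and a BH mass in
    Msun, using the standard Eddington rate with eta = 0.1."""
    mdot_edd = 4.0 * np.pi * G * (M_BH_msun * MSUN) * M_P / (eta * SIGMA_T * C)
    return (mdot_msun_yr * MSUN / YR) / mdot_edd
```

For example, a 10^8 M_⊙ black hole has Ṁ_Edd ≈ 2.2 M_⊙ yr^-1 for η = 0.1, so an accretion rate of 0.02 M_⊙ yr^-1 corresponds to f_Edd ≈ 10^-2.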
Black hole growth history
Black holes and galaxies appear to grow at commensurate rates when viewed globally over cosmic time (Aird et al. 2010; Madau & Dickinson 2014), even though there is only a weak correlation between the instantaneous growth rates of individual systems (e.g. Hickox et al. 2014). In this section we examine the global growth of galaxies relative to black holes over cosmic time, in order to situate the forthcoming discussion of the relationship between black holes and their host galaxies. Figure 1, top panel, shows the global mass density in black holes vs. that in stars for Simba's resolved galaxy population over cosmic time. The bottom panel shows the global SFR density vs. the BHAR density. In each case, the galaxy quantity is shown as the blue dashed line and the black hole quantity as the orange solid line. The black hole curves have been multiplied by a factor that makes the two quantities equal at z = 0; this factor is indicated in the legend. Data points are shown for the galaxy quantities, namely the stellar mass density and the SFR density, from Madau & Dickinson (2014), and should be compared with the blue dashed curves. Note that the bottom panel is similar to the Lilly-Madau plot shown in Davé et al. (2019), except that here we consider only resolved galaxies; this makes a very minor difference.
In a globally averaged sense, the black holes and galaxies in Simba track each other's growth fairly closely. Since black holes are only seeded once a galaxy reaches a certain mass, the mass density in black holes lags slightly behind the stellar density at early epochs. Conversely, once a black hole is seeded, it has to grow somewhat more rapidly than the stars in order to catch up, as seen in the bottom panel. Compared to observations of the stellar mass density growth, Simba shows excellent agreement. For the SFR density evolution, Simba falls short by a factor of two during Cosmic Noon (as discussed in Davé et al. 2019).
The values of the multiplicative factors are interesting. Observations typically find M_BH ≈ 0.0014 M_* (Sun et al. 2015), which would suggest a required multiplicative factor of about 670; instead, it is roughly 2× lower. This arises because there is substantial scatter in the M_BH − M_* relation (Figure 2), and the over-massive black holes contribute more than their share to the black hole mass density budget. Overall, the prediction is thus within the expected range.
In contrast, for the global instantaneous growth rate, the multiplicative factor is an order of magnitude lower in Simba than inferred from observations (Aird et al. 2010; Madau & Dickinson 2014). This may arise because in Simba essentially all galaxies that host black holes are "active", in the sense that their SMBHs are accreting at a nonzero rate. However, many of these may not have sufficient accretion to be observable as AGN, and hence their accretion would not be counted observationally towards the global BHAR. Indeed, during much of cosmic time the most rapidly growing black holes are those in moderate-mass star-forming systems, in which it is quite difficult to identify a black hole unless it is in a (rare) Seyfert phase. In Simba, a substantial amount of accretion occurs in such galaxies, as we will discuss later. Since the integrals of the SFRD and BHARD should give the cosmic stellar mass and black hole mass densities (modulo stellar evolution effects), the factor for mass growth and that for instantaneous growth should be comparable; this holds for Simba, but not for the observed values. This suggests that there is substantial instantaneous black hole growth missed by current surveys of AGN across cosmic time (Hickox & Alexander 2018). Other simulations have found similar results: in Illustris, Sijacki et al. (2015) find SFRD/BHARD ∼ 1000, while Weinberger et al. (2017), who used the Arepo code (Springel 2010) to model AGN and black hole growth, produce an SFRD 0.3 dex lower than observations and an SFRD/BHARD of a few hundred.
Overall, black holes in Simba grow commensurately with galaxies in a globally-averaged sense, which is qualitatively consistent with observations. A naive quantitative comparison suggests that Simba overproduces the global accretion rate density at all epochs, but observational selection effects may play a significant role in this; we will look at this more closely in future works. We next examine the scaling relations for individual galaxies, to identify the galaxies where black holes have grown and are still growing.
M_BH − M_* relation
A basic property of galaxies is their total stellar mass. The correlation between M_* and M_BH is fairly tight and roughly linear for bulge-dominated galaxies, while disk-dominated systems show a large scatter and tend to lie below the M_BH − M_* relation (Häring & Rix 2004; McLure et al. 2006; Reines & Volonteri 2015; Kormendy & Ho 2013; McConnell & Ma 2013; Graham 2016). Local black hole mass measurements based on stellar and gas kinematics can introduce uncertainties, but here we compare to derived black hole masses without considering such observational effects.
The left panel of Figure 2 shows the M_BH − M_* relation at z = 0 produced by Simba. This is as shown in Davé et al. (2019), except that here each galaxy is coloured by the deviation of its stellar velocity dispersion σ from the median M_* − σ relation. The black circles with error bars show the median M_BH in a given M_* bin, with 1σ scatter around the median. The green, black, magenta, and red dashed lines show observations from Reines & Volonteri (2015) for AGN as well as for ellipticals and classical bulges, Kormendy & Ho (2013), and Häring & Rix (2004), respectively. From here on, we consider only M_BH > 10^6 M_⊙ and M_* > 10^9.5 M_⊙, as this is roughly the regime above which black holes approach the M_BH − M_* relation and thus become broadly insensitive to the details of the seeding prescription (for a full description see Davé et al. 2019, Figure 13).
Simba produces a clear correlation between the stellar mass of a galaxy and the mass of its black hole. For a best-fit linear relation to the median of the form log[M_BH/M_⊙] = α log[M_*/10^11 M_⊙] + β, we find α = 1.147 ± 0.002 and β = 8.568 ± 0.214 for all galaxies. If we divide the sample into star-forming and quenched at sSFR = 10^-10.8 yr^-1, we obtain for the quenched sample (α, β) = (1.071 ± 0.002, 8.651 ± 0.205). This is in the range of observations of early-type galaxies, from Kormendy & Ho (2013), who find (α, β) = (1.16, 8.69), to Häring & Rix (2004), who find (α, β) = (1.12, 8.20), as is evident from the Simba data mostly lying in between these two observational fits. For the star-forming sample, the slope is poorly constrained because there is large scatter, but if we fix the slope to the observational value of α = 1.05 (Reines & Volonteri 2015), then our amplitude is β = 8.231 ± 0.824.
[Figure 2 caption — Left: the M_BH − M_* relation at z = 0, with observational fits from Reines & Volonteri (2015) for AGN as well as for ellipticals and classical bulges, Kormendy & Ho (2013), and Häring & Rix (2004). Right: evolution of the M_BH − M_* relation; each coloured line shows the running median at a given redshift, and the filled and empty grey squares show observations from Sun et al. (2015) for 1 < z < 1.5 and 1.5 < z < 2, respectively. Simba predicts a roughly unevolving correlation between the stellar growth of a galaxy and its central SMBH, in agreement with the observations shown.]
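The quoted fit is a simple least-squares regression in log space; a minimal sketch of the fitting step (our illustration, using numpy's standard polynomial fit) is:

```python
import numpy as np

def fit_mbh_mstar(M_star, M_BH):
    """Least-squares fit of log10(M_BH) = alpha * log10(M_star/1e11) + beta,
    mirroring the parameterisation quoted in the text (masses in Msun)."""
    x = np.log10(np.asarray(M_star) / 1e11)
    y = np.log10(np.asarray(M_BH))
    alpha, beta = np.polyfit(x, y, 1)  # slope first, then intercept
    return alpha, beta
```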
There is also a strong connection between the mass of the central black hole and the deviation of its host galaxy's stellar velocity dispersion σ from the median M_* − σ relation. At a given stellar mass, larger black holes live in galaxies with higher stellar velocity dispersions. This trend is indicative of the tight correlation between M_BH and σ that we discuss in §3.2.2.
The right panel of Figure 2 shows the evolution of the M_BH − M_* relation over z = 0 − 5. We compare to observations by Sun et al. (2015) of Herschel-detected broad-line AGN (BLAGN) at 1 < z < 2, shown as the squares. Simba predicts very little evolution in the M_BH − M_* relation, less than a factor of two over this redshift range. This is generally in agreement with the observations shown here, as well as with Shields et al. (2003), who found no distinct evolution in the M_BH − M_* relation for quasars out to z ∼ 3, and Mullaney et al. (2012), who used X-ray stacking analyses to determine that M_BH/M_* is approximately constant and independent of redshift. In contrast, McLure et al. (2006) studied massive (M_* ∼ 10^12 M_⊙) early-type galaxies and found the ratio M_BH/M_sph (where M_sph is the spheroidal mass component, ≈ M_* for massive galaxies) increasing by ∼4× out to z ≈ 2. However, AGN selection tends to bias high-z samples increasingly towards over-massive black holes, which can mimic evolution (Lauer et al. 2007). Thus the actual amount of evolution is not precisely determined, but appears to be generally modest, as Simba predicts.
M_BH − σ Relation
It has long been recognised that present-day black holes correlate more closely with properties of a galaxy's bulge than with its total stellar mass (see Kormendy & Ho 2013, and references therein). Kormendy & Ho (2013) argue that since bulges and elliptical systems formed from galaxy mergers, this points to a connection between black hole growth and galaxy mergers. In contrast, pseudobulges, which are more related to the secular evolution of a disk galaxy, do not satisfy the tight BH-galaxy correlations. Ferrarese & Merritt (2000) and Gebhardt et al. (2000) showed that the correlation with black hole mass is tightest with galaxy stellar velocity dispersion, the so-called M_BH − σ relation. Hence it is instructive to examine this relation in Simba as a test of this connection.
Figure 3, left panel, shows the z = 0 M_BH − σ relation for Simba galaxies. The 1-D velocity dispersion σ is calculated from the individual star particles that are members of each galaxy; we have not applied an aperture correction. The individual galaxy points are coloured by sSFR, with the black circles with errorbars showing the median M_BH in a given σ bin. The observed M_BH − σ values for ellipticals (filled grey circles) and spirals with classical bulges (empty grey squares) from Kormendy & Ho (2013) are overlaid, with errorbars. Owing to the large scatter in the relation for spirals with pseudobulges, these are not plotted here. The simulated galaxies lie in the same region as the Kormendy & Ho (2013) data, with a comparable scatter. At low σ, there are few observed true bulges, but the Simba galaxies may have somewhat over-massive black holes in this range. We note that smaller galaxies in our simulations can have potentially inaccurate stellar velocity dispersions owing to the lower particle numbers with which the dispersions are computed, which may bias the σ values low. Nonetheless, for bulge-dominated galaxies the predicted M_BH − σ relation in Simba nicely tracks observations. This agreement may be surprising: it has been argued that, owing to biases in the measurements of black hole masses, the observed M_BH − M_* relation and the M_BH − σ relation are inconsistent with each other, so it is notable that Simba can match both simultaneously, which other models have had some difficulty doing. A relevant aspect of Simba is that at a given M_*, large black holes live in higher-σ galaxies, as shown by the colour-coding in Figure 2. This bias qualitatively mimics that seen in observations, and implies that the M_BH − σ relation in Simba is not fully described by the M_BH − M_* relation convolved with the mean M_* − σ relation.
[Figure 3 caption — Left: the M_BH − σ relation at z = 0; each circle represents a Simba galaxy coloured by sSFR, black points show the running median, and the solid black line shows a linear best fit to those points. Filled grey circles show observed black holes in elliptical galaxies, and empty grey squares those in spiral galaxies with classical bulges, compiled by Kormendy & Ho (2013). Right: evolution of the M_BH − σ relation; each line shows the running median at a given redshift. The normalization of the correlation decreases with redshift, as expected for a universal M_BH − M_* correlation.]
Notably, the scatter in the M_BH − σ relation is clearly higher, typically by ∼2×, than the scatter in the M_BH − M_* relation (Figure 2). This is opposite to the trend generally inferred from observations (Ferrarese & Merritt 2000). Since σ responds to the total mass while M_* only measures the stellar mass, scatter can be introduced between these owing to variations in the dark matter content within the stellar region. In observations, the tight trend with velocity dispersion, as well as with central (< 1 kpc) surface density (Zolotov et al. 2015), suggests that central spheroidal growth is connected to black hole growth, indicative of structurally disruptive processes such as mergers driving both. At face value, this seems inconsistent with the growth of black holes via torque-limited accretion, which associates black hole growth primarily with disk instabilities. However, we note that recent observations tend to paint a picture where only the brightest AGN are fueled by mergers, whereas most (Seyfert-like) AGN are not (e.g. Donley et al. 2018). Figure 3, right panel, shows the evolution of the running median of the M_BH − σ relation. There is a clear evolution, in that galaxies at a given σ have lower M_BH at higher redshift. This can be understood as a consequence of the invariance of the M_BH − M_* relation with redshift, together with size evolution, related via σ ∝ sqrt(GM/R). If M_* is the dominant mass component, or at least a good tracer thereof, then this relation implies that the typical galaxy size R at a given mass must decrease with redshift. Indeed, Conselice (2014) describes the change in the effective radius R_e for M_* > 10^11 M_⊙ galaxies by a power law R_e ∼ (1+z)^β, where β ∼ −0.82 to −1.5 depending on whether one observes disk-like or spheroid-like galaxies. Such a trend is qualitatively seen in Simba, and will be detailed in a forthcoming paper (Appleby et al., in preparation). Other simulations have also found a similar trend. DeGraf et al. (2015) find an increasing slope and decreasing normalisation of the M_BH − σ relation, along with insignificant evolution in the M_BH − M_* relation, for z = 0 → 4. Sijacki et al. (2015), on the other hand, find a flattening of the best-fit slope and a decrease in the normalisation of the M_BH − σ relation, with a M_BH − M_* relation whose slope increases from 1.21 to 1.28 but whose normalisation stays roughly constant for z = 0 → 4.
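As a quick consistency check (our arithmetic, using the scalings above):

```latex
\sigma \propto \sqrt{GM/R}, \qquad R \propto (1+z)^{\beta}
\;\;\Longrightarrow\;\;
\left.\sigma(z)\right|_{M} \propto (1+z)^{-\beta/2}.
```

For β ≈ −1, a galaxy of fixed mass has σ higher by √3 ≈ 1.7× at z = 2 than at z = 0; at fixed σ this corresponds to a lower M_*, and hence, through the roughly unevolving M_BH − M_* relation, a lower M_BH, in the sense seen in the right panel of Figure 3.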
Overall, Simba reproduces the observed relationship between stellar velocity dispersion and black hole mass for moderate-mass galaxies and larger. However, the scatter in this relationship is clearly larger than that in the M_BH − M_* relation, which may be in tension with observations. We will investigate this further in future work with higher-resolution simulations that can more robustly model galaxy kinematic structure along with black hole growth.
BHAR-SFR relation
Since galaxies and black holes grow commensurately in a globally-averaged sense, one expects the SFR and BHAR to be correlated. Indeed, such a correlation has been observed (Mullaney et al. 2012; Chen et al. 2013; Delvecchio et al. 2015), although measuring BHARs accurately remains challenging, and time variability may obscure a direct, instantaneous SFR-BHAR connection (Hickox et al. 2014). For torque-limited accretion, Anglés-Alcázar et al. (2015) found, via post-processing of a cosmological simulation, a reasonably tight connection between the SFR and nuclear activity of galaxies when averaged over galaxy dynamical timescales, although instantaneous measures could still show large scatter. In this section we examine the instantaneous BHAR-SFR correlation in Simba, to determine whether this connection persists in the self-consistent black hole accretion model, and how it fares against observations. Figure 4, left panel, shows the relationship between BHAR and SFR for Simba galaxies at z = 0.5. Each galaxy is represented by a circle colour-coded by sSFR. The black circles with errorbars show the median BHAR in a given SFR bin. The red band shows the total SFR-BHAR relation obtained by Diamond-Stanic & Rieke (2012) for Seyfert galaxies, and the cyan and purple bands show results from Chen et al. (2013) and Delvecchio et al. (2015), respectively, for star-forming galaxies. The grey filled circles are observations of BLAGN from Sun et al. (2015). We show our results at z = 0.5 as this is comparable to the observations we compare against, but Simba predicts little evolution in this relation, as can be seen from the right panel.
In Simba, the SFR broadly traces the BHAR, though the slope obtained by Simba is flatter than that of the observations. At z = 0.5, a fit to the median of the form log Ṁ_BH = α log SFR + β for star-forming galaxies yields α = 0.690 ± 0.002 and β = −2.34 ± 0.003. The relation clearly flattens at the lowest SFR values, owing to Bondi accretion starting to contribute significantly, which breaks the connection between star formation and black hole growth from torque-limited accretion. The 1σ scatter around the median relation is ≈ +0.5/−1.0 dex at the highest SFRs, increasing somewhat towards lower SFRs.
Observations of the BHAR-SFR relation tend to focus on star-forming galaxies. The Simba predictions broadly lie within the region spanned by the observations. Chen et al. (2013) studied IR-selected star-forming galaxies at 250 µm with AGN selected by X-ray and mid-IR criteria, and found a slope of α = 1.05 and normalisation β = −3.72. Diamond-Stanic & Rieke (2012) estimate the BHAR from [O iv] (25.89 µm) flux measurements and the SFR from the 11 µm aromatic feature for Seyfert galaxies at a median distance of 22 Mpc, while Sun et al. (2015) use X-ray and IR data to infer the SFR and BHAR for BLAGN. Delvecchio et al. (2015) also use IR-selected star-forming galaxies with AGN identified by X-ray criteria, similarly to the Chen et al. approach. In detail, the observations typically show lower BHAR at a given SFR in the SFR range overlapping with Simba predictions. It is worth noting that the observations are generally for highly star-forming galaxies, as indicated by the shaded region for Chen et al. (2013), except for Diamond-Stanic & Rieke (2012), which samples SFRs from ∼0.01−10 M_⊙ yr^-1. Given the uncertainties in determining BHARs from data at particular wavelengths, such as the difficulty in disentangling AGN emission from SF-produced IR emission or X-ray binary contributions, it remains to be seen whether this discrepancy is serious. Nonetheless, for star-forming systems the predicted and observed slopes are quite similar.
The right panel of Figure 4 shows the evolution of the BHAR-SFR relation, which remains roughly unchanged from z = 5 → 0. The main evolutionary trend is that the low-SFR tail becomes populated at lower redshifts, but in all cases there is a gradual upturn towards the lowest SFR values. This generally agrees with the Sun et al. (2015) observations, represented by grey squares. A similar lack of evolution out to z ∼ 2 has been found by Mullaney et al. (2012), whose constant BHAR-SFR ratio then produces the M_BH − M_* relation seen in Figure 2. It is interesting that this "AGN main sequence", as denoted by Mullaney et al. (2012), constrained to match a roughly non-evolving M_BH − M_* relation, requires higher accretion rates than other observational determinations, and even higher than the Simba predictions. As mentioned earlier, the integrals of the BHAR and SFR should reflect the correlation between M_* and M_BH. Simba satisfies this constraint by construction, but applying it to observations may provide valuable constraints on BHAR evolution.
Overall, the BHAR-SFR relation in Simba shows a reasonable correlation, but with substantial scatter. This is consistent with the idea that black holes and galaxies grow commensurately on cosmological timescales, but not necessarily on inner galactic timescales. The BHARs in Simba are broadly in the range of observed values, although they appear to be somewhat higher than some recent observations; it remains to be seen whether this discrepancy constitutes a significant failing of the model.
[Figure 4 caption — grey squares show observations from Sun et al. (2015) for 1 < z < 1.5 and 1.5 < z < 2, respectively. The relation is tight for star-forming galaxies, with a slope similar to that of the observations and increasing scatter at low star formation rates; the Ṁ_BH − SFR relation does not evolve significantly with redshift.]
Eddington Ratios
The Eddington ratio (eq. 11) appears to play a critical role in governing black hole accretion and feedback processes (Heckman & Best 2014). Observationally, AGN are often split into two broad categories: radiatively efficient, with f_Edd ≳ a few percent, and radiatively inefficient, with f_Edd ≲ 0.01. Simba's AGN feedback model is motivated by this observed dichotomy, with f_Edd being the key quantity that transitions from the radiative feedback mode, which has a relatively minimal impact on galaxy growth, to the jet feedback mode, which plays a crucial role in quenching (Davé et al. 2019). Observed f_Edd values span a wide range, from quasars that approach unity and beyond (Liu et al. 2019) to inefficiently accreting black holes in massive ellipticals with f_Edd as low as ∼10^-5 that are nonetheless still active, as evidenced by their radio jets. Hence the Eddington ratio is an important quantity to examine. Figure 5, left panel, shows f_Edd versus M_BH for z = 0 Simba galaxies, colour-coded by sSFR. The black circles with errorbars show the median f_Edd value in a given black hole mass bin. The right panel shows the running median of this relation at redshifts z = 0 − 4.
Simba produces a wide range of Eddington ratios, qualitatively consistent with observations. There is a clear anti-correlation of f_Edd with black hole mass for M_BH ≲ 10^8 M_⊙. Above this mass, the scatter increases dramatically and there is a much wider range of f_Edd. A shelf at around M_BH ∼ 5 × 10^7 M_⊙ divides these two regions, above which there is a strong increase in low-sSFR galaxies. This is our minimum black hole mass for the onset of jet feedback, and shows that jet feedback is directly responsible for quenching galaxies.
For lower-mass black holes, there is a strong dependence of f_Edd on sSFR: black holes in star-forming galaxies accrete more efficiently at a given black hole mass. This is consistent with torque-limited accretion (eq. 2), in which increasing disk mass and gas fraction drive black hole accretion, accompanying an increase in star formation that is driven by these same factors (except over the entire disk, rather than just the inner disk). In this regime, AGN feedback has only a minor impact (Anglés-Alcázar et al. 2017a; Davé et al. 2019), so the growth of both stars and black holes is supply-limited.
[Figure 5 caption — Left: Eddington ratios of Simba galaxies as a function of central black hole mass; each circle represents a galaxy coloured by its sSFR, and the black circles with errorbars show the running median at z = 0. Right: evolution of the Eddington ratios as a function of central black hole mass; each line shows the running median at a given redshift. The Eddington ratio and black hole mass show a clear anti-correlation whose slope flattens with redshift, with most black holes accreting above 1% of Eddington by z ∼ 2 − 3. At z = 0, the more efficiently accreting black holes tend to live in more star-forming hosts.]
At high black hole masses, f_Edd is typically well below Simba's jet feedback threshold of 0.02, so jet feedback is prevalent. Hence the low Eddington ratios are strongly correlated with quenched galaxies. In these systems there is little cold gas, so torque-limited accretion becomes small; meanwhile, hot gas is prevalent, making Bondi accretion efficient. In Anglés-Alcázar et al. (in preparation) we will examine these growth modes in more detail, but here we already see that Bondi-dominated accretion results in significantly more variability in the accretion rate, and hence in f_Edd. In this regime, there is no obvious correlation of f_Edd with M_BH.
The right panel of Figure 5 shows the evolution of the median Eddington ratio as a function of M_BH. The obvious trend is that, overall, f_Edd is higher at higher redshifts. This arises from the higher gas fractions and surface densities at high redshift (Anglés-Alcázar et al. 2015). Observations generally suggest that the Eddington ratios of accreting black holes increase with redshift (Kauffmann & Heckman 2009; Lusso et al. 2012; Aird et al. 2012), qualitatively consistent with these predictions.
Examining the evolution more carefully, one can see that the anti-correlation slope flattens with increasing redshift, particularly from z ∼ 1 → 4. At low masses, black holes are always in the radiative mode (as observed; e.g. Hale et al. 2018), while at higher masses the Eddington ratios drop more quickly. Black holes with M_BH ∼ 10^9 M_⊙ are already in place at z ∼ 4, but there they accrete at 5 − 10% of Eddington, whereas by z = 1 they have f_Edd ≲ 1%. From z = 1 → 0, the growth of hot gas in high-mass halos results in Bondi accretion starting to dominate for those black holes, and the anti-correlation between f_Edd and M_BH becomes less clear.
In summary, the Eddington ratio predicted by Simba drops with both black hole mass and cosmic time. The f_Edd criterion for the jet feedback that quenches galaxies thus kicks in only for very high mass black holes at high redshift, and the black hole mass scale for jets (and thus quenching) drops with time. Inasmuch as black hole mass is correlated with stellar mass and thus halo mass, this suggests that the halo mass scale at which quenching occurs should be higher at high redshift, in agreement with Hale et al. (2018), who found that the halo mass of efficient accretors flattens at high redshift. This is broadly consistent with expectations from the data-constrained equilibrium analytic galaxy formation model of Mitra et al. (2015), as well as with empirical galaxy formation modeling such as Moster et al. (2018). In this way, the dropping efficiency of torque-limited accretion at both high masses and low redshifts, along with Bondi accretion from the hot gas, helps to enact and maintain quenching in Simba galaxies.
Distribution Functions
We have seen that Simba's black hole-galaxy scaling relations as a function of M_*, SFR, or M_BH broadly agree with observations, albeit with some potentially interesting discrepancies. Here we examine the number densities of black holes at a given mass M_BH and Eddington ratio f_Edd. Observationally, these can be challenging to determine owing to completeness issues; nevertheless, some general trends are evident to which we can compare the Simba predictions.
[Figure 6 caption — the BHMF estimated using the Sérsic indices of ellipticals and bulges (black points with error bars) does not agree with the total BHMF produced by Simba, but agrees with the quiescent population at low masses. The total number density of black holes decreases with redshift, with the black hole population becoming dominated by star-forming hosts at higher redshift.]
Black Hole Mass Function
We first consider the black hole mass function (BHMF). Observational estimates of the BHMF usually involve employing correlations of M_BH with global galaxy properties. Shankar et al. (2009) used a compilation of X-ray and optical data to determine the AGN luminosity function and model the average growth rate of black holes, making predictions for the local BHMF assuming a single radiative efficiency and Eddington ratio for all black holes, which they compared to observational determinations of the local BHMF based on the M_BH − M_* and M_BH − σ relations. A separate determination used the measured Sérsic indices of ∼10^4 galaxies from the Millennium Galaxy Catalogue to estimate the BHMF, based on an empirical relation between M_BH and Sérsic index. We do not attempt to mimic these criteria in detail, principally because this would require a structural decomposition of Simba galaxies that could be compromised by resolution effects. Instead, we assume the observations properly characterise the black hole masses, and compare to them directly. Figure 6 shows the BHMF predicted by Simba at z = 3, 2, 1, 0 (upper left to lower right). The solid grey line shows the mass function, and the grey band shows the 1σ uncertainty determined via jackknife subsampling among the 8 simulation sub-octants. At z = 0, we compare to observations by Shankar et al. (2004) (black dotted line), Shankar et al. (2009) (solid black line), and the Sérsic-based determination (black circles with errorbars). Finally, we subdivide the black hole population into star-forming and quenched populations above and below sSFR_lim = 10^(−1.8+0.3z) Gyr^-1 (as in Davé et al. 2019), shown as the cyan (Simba-SF) and red (Simba-Q) dashed lines, respectively.
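The jackknife uncertainty band can be computed with a delete-one-octant resampling; the following is our own sketch of that procedure, with hypothetical argument names.

```python
import numpy as np

def jackknife_bhmf(log_masses, octant_ids, bins, volume):
    """Delete-one-octant jackknife estimate of the BHMF and its 1-sigma
    uncertainty.  log_masses: log10(M_BH) per BH; octant_ids: 0-7 octant
    label per BH; bins: log-mass bin edges; volume: full box volume."""
    n_oct = 8
    phi = []
    for k in range(n_oct):
        keep = octant_ids != k
        counts, _ = np.histogram(log_masses[keep], bins=bins)
        # Effective volume after removing one octant is (n-1)/n of the box.
        phi.append(counts / (volume * (n_oct - 1) / n_oct) / np.diff(bins))
    phi = np.array(phi)
    mean = phi.mean(axis=0)
    err = np.sqrt((n_oct - 1) / n_oct * ((phi - mean) ** 2).sum(axis=0))
    return mean, err
```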
Overall, Simba produces a black hole mass function in very good agreement with observations for M_BH ≳ 10^7.5−8 M_⊙. There is some variance among the different observational determinations, but these are generally within the 1σ range of the Simba predictions. Simba produces a turnover at low black hole masses, which is intermediate between the lack of turnover in the Shankar et al. (2004, 2009) determinations and the Sérsic-based measurements. In Simba, this turnover arises because we seed black holes at 10^4 M_⊙ and the torque-limited accretion model grows them very quickly until they join the M_BH − M_* relation, which results in a small number of rapidly-growing small black holes. If the numbers of these black holes are under-predicted, it could be that Simba's somewhat arbitrary initial seeding and resulting rapid growth phase is not fully representative of true black hole growth, which would be unsurprising. Once black holes grow sufficiently large and are stably evolving upwards along the M_BH − M_* relation, Simba produces a z ≈ 0 black hole population in good agreement with the data. The evolution of the BHMF in Simba is such that, in general, it decreases towards higher redshift. In detail, the lowest-mass black holes always have a similar number density, owing to seeding at a fixed M_* ≈ 10^9.5 M_⊙, whose number density also does not evolve much with redshift (Davé et al. 2019). As time evolves, the black hole population builds up towards a peak at M_BH ∼ 10^8 M_⊙, above which torque-limited accretion becomes less efficient owing to quenching and the diminution of cold gas, resulting in a dropping BHMF above this mass. The peak in the BHMF at M_BH ≈ 10^7.5−8 M_⊙ is therefore a consequence of the interplay between black hole accretion modes and galaxy quenching. Current observational determinations, even at z = 0, are not conclusive on whether such a peak exists, so this represents a reasonably generic prediction of the currently implemented Simba accretion model.
The drop in the BHMF towards higher redshifts is broadly consistent with observational findings. Kelly & Merloni (2012) studied the evolution of the BHMF, comparing several models including those of Shankar et al. (2009), Cao (2010), and Merloni & Heinz (2008), and find that the mass function amplitude decreases with redshift. Rosas-Guevara et al. (2016) find that with increasing redshift the black hole mass function decreases in amplitude as well as width, similar to the Simba predictions. They also find that between z = 1 and z = 0 the change in amplitude of the black hole mass function is much less rapid than at higher redshifts; this is also consistent with our predictions. A more detailed quantitative comparison would involve properly accounting for measurement techniques and selection effects, which we leave for future work, but it appears that the broad characteristics of the observed BHMF evolution are reproduced in Simba.
Finally, we consider the BHMF split into star-forming and quenched galaxies. The local BHMF is dominated by quiescent galaxies for M_BH ≳ 10^7.5 M_⊙, because the black hole itself is responsible for quenching those galaxies via jet feedback. At higher redshifts, the crossover mass scale grows, so that quenched galaxies dominate for M_BH ≳ 10^8.5 M_⊙ at z = 2. As discussed in §3.2.4, the quenching scale drops with time owing to f_Edd(M_*) dropping with time, combined with Simba's assumption that only lower-f_Edd black holes can give rise to the jets responsible for quenching galaxies. Smaller black holes, in contrast, tend to live in star-forming galaxies. Galaxies with smaller black holes, such as the Milky Way, are thus predicted by Simba to be predominantly star-forming.
In summary, the BHMF predicted by Simba agrees well with observations for M_BH ≳ 10^7.5 M_⊙. Simba predicts a peak in the BHMF around this mass, owing to black holes growing rapidly below it but slowly above it; observations are inconclusive on the shape of the BHMF at lower masses. The BHMF is lower at higher redshift, with a less prominent peak owing to less quenching. Large black holes tend to live in quenched galaxies, with a crossover at M_BH ≈ 10^7.5 M_⊙, below which star-forming galaxies dominate at z = 0; this crossover moves to higher M_BH at higher redshifts. These results are broadly in agreement with various observational constraints, showing that Simba plausibly grows black holes over time.
Eddington Ratio Distribution
Figure 7 shows the distribution of Eddington ratios of all galaxies with black holes in Simba, represented by the grey band (based on jackknife resampling), at z = 3, 2, 1, 0, with a breakdown into star-forming and quiescent populations shown by the cyan and red lines, respectively. The f_Edd distribution at every redshift is peaked, with a rapid dropoff to high f_Edd and a slower, power-law dropoff to low f_Edd. The peak occurs at f_Edd ≈ 0.1 at z = 3, dropping to f_Edd ≈ 10^-2.5 at z = 0. This drop in the characteristic Eddington ratio with time was noted in Figure 5, and is partly responsible for the increase in jet feedback activity and quenching at later epochs. Black holes in star-forming galaxies accrete more efficiently than those in quenched galaxies. The growing quenched galaxy population creates a tail to low f_Edd values that becomes quite prominent at low redshifts, and causes the overall distribution to broaden.
We compare to observations of the Eddington ratio distribution from Kauffmann & Heckman (2009), who use the [O iii] luminosity of SDSS galaxies to derive f_Edd assuming a bolometric correction factor of ∼600; with this, log L[O iii]/M_BH can be converted to log f_Edd by adding ∼1.7 dex. Their sample is separated by the amplitude of the 4000 Å break, D_n4000: galaxies with D_n4000 < 1.5, i.e. those with recent or ongoing star formation, follow a lognormal distribution, while galaxies with D_n4000 > 1.8, i.e. those with little or no star formation, follow a power-law distribution. From this, Kauffmann & Heckman (2009) deduce that when there is cold gas in the bulge of a galaxy, the central black hole regulates its own growth, and when this cold gas is depleted, the growth of the black hole is regulated by the rate at which evolved stars lose their mass. Figure 8 shows the fractional distribution of f_Edd in Simba. The total distribution is represented by the black solid line, and the red and cyan dashed lines show the quiescent and star-forming fraction distributions, respectively, split by the same sSFR cut used throughout this paper. The grey band shows observations by Kauffmann & Heckman (2009) of D_n4000 < 1.5 galaxies, assuming a range of bolometric corrections of ∼300 − 600. These generally correspond to star-forming systems, so it is most appropriate to compare them to the dashed cyan star-forming galaxy predictions from Simba.
For high f_Edd, the star-forming population in Simba is in good agreement with the observations; in Kauffmann & Heckman (2009), this regime is dominated by low-D_n4000 galaxies, indicative of star-forming systems. At low f_Edd, we see discrepancies in which the f_Edd values of star-forming galaxies in Simba are somewhat overpredicted. The differences may partly be explained by the fact that we use an sSFR cut rather than a D_n4000 cut to separate the galaxies.
[Figure 7 caption — the evolution of the distribution of Eddington ratios in Simba for z = 0 − 3. The grey band shows the total distribution, while the cyan and red lines show the star-forming and quiescent populations, respectively. Efficiently accreting (high Eddington ratio) black holes dominantly reside in star-forming hosts, and by z ∼ 2 − 3 quiescent galaxies make little to no contribution to the distribution. The distribution shifts towards higher values with redshift, with most black holes accreting efficiently (f_Edd > 1%) by z ∼ 2 − 3.]
[Figure 8 caption — the fractional distribution of Eddington ratios in Simba at z = 0. The black line shows the total distribution, while the red and cyan lines show the fractional distributions for quiescent and star-forming hosts, respectively, with errorbars removed for clarity. The grey band depicts the [O iii] observations from Kauffmann & Heckman (2009) for SDSS galaxies. Simba slightly overpredicts the low-f_Edd end of the distribution and slightly underpredicts the high-f_Edd end, but the overall distribution traces the observations well.]
For quenched galaxies, Simba follows a lognormal f_Edd distribution shifted to lower f_Edd relative to the star-forming systems. This is, however, inconsistent with the power-law distribution at low f_Edd seen by Kauffmann & Heckman (2009). This may be a consequence of selection effects, since the low-f_Edd objects tend to be larger elliptical galaxies that are more easily identified in SDSS, biasing the observed low end. Also, because our black hole accretion model draws from a kpc-sized region, it is dynamically limited in its ability to capture variability on small timescales. One might regard the accretion rates in Simba as reflecting time averages over a typical inner-disk dynamical time (≳10 Myr), which would tend to turn an intrinsic power-law distribution in f_Edd dominated by short-term variability into a lognormal distribution.
Overall, Simba produces a fair agreement with observations of Eddington ratios for star-forming systems, and generally produces lower f Edd values for quenched systems. Issues with selection effects and variability could be impacting these comparisons, which we will investigate in future work.
Black hole dependence on H i
In Simba, the growth phase of black holes is roughly commensurate with that of the stellar content, resulting in a global co-growth of galaxies and their black holes. This co-growth breaks down at late epochs, when massive, quenched galaxies appear whose black holes can grow via Bondi accretion from hot gas that cannot form stars. Since our star formation model is directly tied to molecular gas content, one expects these trends to also broadly hold for the H_2 content of galaxies. However, the H i cold gas reservoir is not directly tied to star formation, and hence its correlation with black hole mass and accretion is not immediately evident. Still, the Mufasa simulation showed a significant correlation between H i content and SFR (Davé et al. 2017), and this persists in Simba (Davé et al. 2019), which suggests that H i is a reservoir of gas that will ultimately form stars, and so should ultimately also be correlated with black hole growth. The detailed connection between H i and black hole growth is thus an interesting prediction that connects gaseous fuel in galaxy outskirts with feeding and feedback in the centre of the galaxy.
Owing to present observational challenges, there have been relatively few studies connecting H i and black holes. Fabello et al. (2011) showed that the H i content of galaxies does not appear to be correlated with black hole accretion in Seyfert galaxies. Heckman & Best (2014) argue that this is expected because the H i lies on larger scales, and is often conspicuously absent in the cores of disk galaxies, where the hydrogen is mostly in molecular form. Hence there is not expected to be an instantaneous connection between H i and black hole growth, just as the instantaneous connection between SFR and Ṁ_BH is also weak. However, upcoming radio surveys such as MIGHTEE (Jarvis et al. 2016) and LADUMA (Holwerda et al. 2012) with the new MeerKAT array will soon provide significantly better data on both H i 21 cm emission and black hole accretion than has previously been available. This will enable larger statistical studies that can identify correlations over longer timescales and more accurately measure the scatter between gas reservoirs and black hole growth. It is thus interesting to make predictions for the connection between H i and black hole properties. Figure 9 shows how black hole properties depend on the H i content of galaxies in Simba, specifically the total H i mass M_HI (left column) and the H i mass fraction f_HI = M_HI/M_* (right column). The rows show various black hole properties, from top to bottom: M_BH, Ṁ_BH, M_BH/M_*, and f_Edd. All galaxies are colour-coded by sSFR. The running median at z = 0 is shown as the black line, with 1σ uncertainties from jackknife resampling. The blue and turquoise lines show the running medians at z = 1, 2, respectively; the individual galaxy points are not shown at those redshifts.
The top left panel shows that the black hole mass is essentially uncorrelated with H i mass, and there is little evolution in this relation. More interestingly, at a given H i mass, larger black holes populate more quiescent galaxies. This shows that there is a strong connection between black hole growth and gas removal in galaxy outskirts, likely owing to the suppression of cooling that would otherwise feed the H i reservoir. The most star-forming galaxies primarily have the lowest M_BH, but there is also a weaker trend that they have the highest M_HI. Hence star formation is enhanced in galaxies that have both small black holes and high gas content. The top right panel shows M_BH vs. f_HI, which displays a strong anti-correlation, reflecting a tight M_BH − M_* relation at late times, with the most star-forming galaxies having concurrently the smallest black holes and the highest gas fractions. There is modest upward evolution in this relation with time, such that galaxies at a given black hole mass have higher f_HI at earlier times, reflecting the overall increase in the gas content of galaxies at earlier epochs (Davé et al. 2019).
The second row shows the dependence of BHAR on H i content. At z = 0, the BHAR shows little correlation with H i mass, but there is an evident correlation at z ≳ 1, such that black hole accretion is stronger at higher H i masses. At these earlier epochs, the accretion is dominated by the torque-limited mode, which depends on gas fraction. Even though torque-limited accretion is computed within the core of the galaxy while the H i is more diffusely distributed, the overall enhanced gas content appears to drive black hole accretion. By z = 0, in contrast, the emergence of quenched galaxies dominated by Bondi accretion results in no obvious correlation with H i mass. Even at z = 0, the star-forming galaxies appear to follow the relations seen at higher redshift; however, a large population of low-sSFR galaxies overwhelms that trend. Interestingly, at low M_HI there is a larger scatter in the accretion rates of quenched galaxies, suggesting that Bondi accretion is more stochastic. For f_HI (right panel), we see no correlation at any redshift with H i fraction, but the most star-forming galaxies have both high f_HI and higher BHAR. As discussed previously, enhanced gas content commensurately drives both stellar and black hole growth.
The third row of Figure 9 shows the dependence of M_BH/M_* on H i properties. The values are M_BH/M_* ≈ 10^-2.5 − 10^-3 at every redshift. In detail, there is a weak anti-correlation, with the highest H i masses having slightly lower M_BH/M_*, independent of redshift. A stronger trend is seen when examining SFR properties: at a given M_HI, galaxies with under-massive black holes are clearly more star-forming. Quenched galaxies, on the other hand, sit well above the mean relation, with M_BH/M_* ∼ 10^-2. This panel most starkly shows that black hole mass is a key governor of whether a galaxy is star-forming or quenched. The right panel, versus f_HI, tells a similar story to the top right panel: galaxies with the lowest sSFRs have the largest black holes and lowest gas content, and vice versa.
The bottom row shows the Eddington ratio, which is just a scaled version of Ṁ_BH/M_BH (eq. 11), versus H i properties. At z = 0, just as for the BHAR (second row), there is no trend with H i mass, while a trend emerges at higher redshifts. However, unlike the BHAR, f_Edd shows a clear trend with f_HI. In terms of M_HI, the contours of constant sSFR are essentially horizontal, showing that f_Edd is a strong and clear predictor of sSFR at a given H i mass, independent of M_HI. It is interesting that the highest-SFR galaxies have both the most undermassive black holes and the highest specific black hole accretion rates, showing that they are in the process of "catching up" to the typical galaxy in terms of both their black hole and stellar content. The origin of why some galaxies end up in this state, relative to other galaxies whose overmassive black holes quench them more quickly, will be examined in forthcoming work (Cui et al., in preparation).
These trends represent predictions that can be tested against forthcoming large-scale H i surveys, where ancillary data can provide other global galaxy quantities such as the stellar mass, SFR, and black hole mass. The strong trends relating the black hole mass and accretion rate with SFR at a given H i mass, and as we saw earlier also at a given stellar mass, are direct outcomes of Simba's black hole growth and feedback models. Confirming or falsifying these predictions will be an important test of Simba's black hole evolution model and quenching feedback.
SUMMARY AND DISCUSSION
We have presented results from the 100 h^-1 Mpc Simba cosmological hydrodynamic simulation (Davé et al. 2019). Simba employs a novel two-mode subgrid black hole accretion model: gravitational torque-limited accretion (Anglés-Alcázar et al. 2017a) from cold gas, based on the analytic model of Hopkins & Quataert (2011), and Bondi accretion from hot gas, as widely used in other galaxy formation simulations. In this paper we examine the predictions of Simba for the growth and evolution of the black hole population relative to their host galaxies, in order to assess the model's broad plausibility and characterise basic predictions of galaxy-black hole co-evolution. Our main results are as follows:
• The global black hole mass density and black hole accretion rate density trace the stellar mass and star formation rate densities, respectively. On average, black holes and galaxies grow commensurately, which is broadly consistent with observations. At z = 0 in Simba, the ratio of the total M_* to M_BH density, and also that of the SFR to BHAR density, is ≈ 300 − 400, and is fairly constant throughout cosmic time. For the stellar to black hole mass density this is mostly in agreement with observations, but for the SFR to BHAR density it is an order of magnitude lower than observed, suggesting that current surveys may miss a large fraction of black hole accretion.
• The mass of black holes is strongly correlated with M_*, and there is no significant evolution in the M_BH − M_* relation over z = 0 − 5. There is larger scatter at lower masses, in the regime where black hole seeds are converging onto the M_BH − M_* relation.
• The black hole mass also correlates with the stellar velocity dispersion of its host galaxy, though not quite as tightly as with M_*. The predictions agree with observational determinations from Kormendy & Ho (2013). M_BH − σ evolves with redshift, but only in the manner expected for a universal M_BH − M_* relation, given the expected size evolution of galaxies.
• The black hole accretion rate Ṁ_BH increases with the SFR of the host galaxy for SFR ≳ 1 M_⊙ yr^-1, but the relation flattens, with significantly more scatter, at lower SFR. The relatively tight correlation for star-forming galaxies arises from a common gas reservoir driving both star formation and black hole growth in the torque-limited mode (Anglés-Alcázar et al. 2015, 2017a). Bondi accretion dominates for massive black holes in gas-poor galaxies, and does not yield a strong correlation with SFR. The predictions are broadly consistent with observations of the BHAR in star-forming galaxies, with a hint that it may be slightly higher (∼2−3×) than observed. The predicted Ṁ_BH − SFR relation shows no evolution over z = 0 − 5.
• Black hole Eddington ratios are strongly anti-correlated with black hole mass at M_BH ≲ 10^8 M_⊙, with a power-law slope of nearly −1, showing that the BHAR is mostly uncorrelated with black hole mass; this is broadly consistent with observations (Kauffmann & Heckman 2009), and is expected in the torque-limited model owing to its weak dependence on M_BH (eq. 2). At higher masses, M_BH ≳ 10^8 M_⊙, Bondi accretion dominates and the BHAR scales more strongly with black hole mass, resulting in a flatter slope. The scatter in f_Edd becomes very large, indicating that Bondi accretion from hot gas is quite stochastic in Simba. The f_Edd(M_BH) relation evolves fairly strongly towards higher f_Edd at a given M_BH, with a mild flattening of the slope at higher z. At z ≳ 2, the most massive black holes accrete at reasonably high f_Edd, broadly consistent with observations of quasars at those epochs.
• The BHMF predicted by Simba shows an increase at the low-mass end and an exponential truncation at the massive end. This gives a broad peak at M_BH ≈ 10^7.5−8 M_⊙, which is where the jet-mode feedback in Simba kicks in and begins to quench galaxies that also have low f_Edd. Splitting galaxies into quenched vs. star-forming clearly shows a dichotomy at this black hole mass scale. The existence of a BHMF peak is thus a direct prediction of Simba's black hole feedback model. The Simba BHMF is in quite good agreement with observations above this peak, but current observational determinations at lower masses are inconclusive about whether a peak exists. We note that the low-mass prediction in Simba may owe in part to our seeding prescription, which causes small black holes to grow rapidly; a different seeding prescription may alter the predictions below the peak. The peak is also less prominent at higher redshifts, because jets are rarer and thus quenching is less effective. The mass scale at which quenched galaxies dominate also increases in M_BH towards higher redshifts, showing that Simba predicts that the quenching mass scale downsizes in M_BH and, by association, also in M_* and M_halo.
• The Eddington rate function at z = 0 shows a power-law rise up to f Edd ≈ 10 −2 , and then an exponential cutoff above this. Star-forming galaxies dominate at f Edd > ∼ 10 −3 , and quenched galaxies only appear at f Edd < ∼ 0.02, by which point the jets in Simba are ejected at maximum velocity. The f Edd distribution is in reasonable agreement with SDSS observations from Kauffmann & Heckman (2009) for star-forming galaxies. At higher redshifts, the f Edd distribution shifts towards higher values, with a diminishing tail of quenched galaxies resulting in a sharper peak; at z > ∼ 2, f Edd > ∼ 0.1 typically, and we start to see some black holes accreting at the full Eddington rate or even slightly above.
• The H i content of galaxies shows interesting correlations with black hole mass and accretion rate. There are at best weak correlations of M BH , Ṁ BH , M BH /M * , and f Edd with H i mass, but there is always a clear trend that galaxies that are quenched at a given H i mass tend to have large black holes that are accreting inefficiently, while the most star-forming galaxies have undermassive black holes that are accreting efficiently. By examining the H i fraction in galaxies, we see that galaxies are highly star-forming if they have both high gas content and small black holes. The interplay between gas content, star formation, and black holes is a prediction from Simba that can be tested in detail with upcoming multi-wavelength surveys.
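To make the scaling argument in the Eddington-rate bullet above concrete, the worked relation is spelled out below; this is our own restatement of that bullet's logic, with only the slope of nearly −1 and the ∼10 8 M ⊙ transition mass taken from the text.

\[
f_{\rm Edd} \equiv \frac{\dot M_{\rm BH}}{\dot M_{\rm Edd}} \propto \frac{\dot M_{\rm BH}}{M_{\rm BH}},
\qquad \text{so} \qquad
f_{\rm Edd} \propto M_{\rm BH}^{-1} \iff \dot M_{\rm BH} \propto M_{\rm BH}^{0}.
\]

That is, a slope of nearly −1 in f Edd (M BH ) means the accretion rate itself carries essentially no black hole mass dependence, which characterises the torque-limited regime below M BH ≈ 10 8 M ⊙ ; the flatter slope above that mass marks the onset of the steeper Bondi scaling.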
In general, where there is observational data, Simba reproduces observed black hole-galaxy correlations fairly well, with potential discrepancies such as an overprediction of BHARs at a given SFR or the underprediction of the BHMF at M BH < 10 7 M ⊙ . This shows that the new black hole accretion and feedback models in Simba are plausible as a platform for studying galaxy-black hole co-evolution and the role of black hole feedback in quenching galaxies. In upcoming work, we will focus on tracking individual black holes to better understand the modes by which black holes grow, examine in more detail how black hole feedback is responsible for quenching, and compare to observational-plane properties such as AGN luminosity functions while more carefully modeling sub-populations of AGN such as high- and low-excitation radio galaxies. By bringing together results from Simba and upcoming surveys of AGN and galaxy evolution, we have a powerful tool to put constraints on the physical mechanisms driving black hole accretion and the extent of its effect on large scale properties of galaxies. | 2019-05-09T13:07:14.760Z | 2019-05-07T00:00:00.000 | {
"year": 2019,
"sha1": "bafc3c6f1a08d828aba446472e36be5db6a3685f",
"oa_license": null,
"oa_url": "https://www.pure.ed.ac.uk/ws/files/121575315/1905.02741.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "e4564b05af8ca1683c69c83b38ff68b5efe38ea4",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
46862190 | pes2o/s2orc | v3-fos-license | CORRELATIONS BETWEEN CHANGES IN PHOTORECEPTOR LAYER AND OTHER CLINICAL CHARACTERISTICS IN CENTRAL SEROUS CHORIORETINOPATHY
This study of 222 eyes with central serous chorioretinopathy demonstrates that morphologic changes in photoreceptor layer, visual acuity, decreasing foveal outer nuclear layer thickness, and symptom duration correlated closely but may behave asynchronously. These objective parameters, besides symptom duration, could be helpful when considering the timing of CSC treatment.
Central serous chorioretinopathy (CSC) is a common macular disease and often presents with well-circumscribed serous retinal detachment in the macular region on clinical examination, with one or several leakage points at the level of the retinal pigment epithelium detectable with fluorescein angiography. 1 With the advent of optical coherence tomography (OCT), it is now possible to obtain high-resolution cross-sectional images of the retina in a noninvasive manner. [2][3][4] The typical pathologic changes that occur in CSC, such as serous retinal detachment and retinal pigment epithelium abnormalities, have been clearly demonstrated with OCT. 5,6 Recently, it was observed that foveal outer nuclear layer (ONL) thickness significantly decreased in active CSC eyes, which continued as the subretinal fluid persisted. 7,8 In addition, the ONL thinning was associated with vision loss in both active and resolved CSC eyes. [8][9][10][11] Moreover, the morphologic changes in the photoreceptor layer (PRL), referred to as elongation, thickening, granulation, thinning, defect, or scattered dots, have been reported. 7,9,[12][13][14][15] However, the correlations between the PRL changes, the decrease of ONL thickness, and clinical characteristics, such as symptom duration and visual acuity, remain unclear. To explore the possible correlations between these features, the morphologic changes in the PRL during active CSC were studied, and the potential relationships with ONL thinning and clinical characteristics were analyzed in this study.
Methods
This prospective observational cross-sectional study was approved by the Ethics Committee of the Eye and Ear Nose Throat Hospital, Fudan University, Shanghai, China (KJ2009-16). Informed consent was obtained from each patient.
Consecutive patients with active CSC who visited the clinic of the Eye & Ear Nose Throat Hospital of Fudan University between September 2014 and June 2016 were enrolled. The clinical diagnosis of CSC was based on symptoms, reduced visual acuity with or without metamorphopsia or micropsia, and the presentation of serous retinal detachment on both fundus and OCT examinations. All subjects underwent a thorough ocular examination of both eyes, including best-corrected visual acuity (BCVA), measured with a standard Snellen chart and converted to the logarithm of the minimum angle of resolution for statistical analysis; the measurement of intraocular pressure, using a noncontact tonometer; slit-lamp biomicroscopy; an OCT examination; and the collection of data on symptom duration. The subjects included were those with one affected eye in the first episode of CSC and with a normal contralateral eye (BCVA ≥ 6/6, intraocular pressure < 21 mmHg, and no clinical signs or history of any intraocular disease). The subjects excluded were those with either clinical signs or a history of any other intraocular disease in either eye; active CSC in both eyes; a former episode of CSC in either eye; any steroid use; or who could not define their symptom duration.
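As a minimal sketch of the Snellen-to-logMAR conversion used for the BCVA analysis above (the function name and sample fractions are ours for illustration; they are not study data):

import math

def snellen_to_logmar(numerator: float, denominator: float) -> float:
    # logMAR = log10(1 / decimal acuity) = log10(denominator / numerator)
    return math.log10(denominator / numerator)

# Illustrative values only: 6/6 -> 0.00, 6/12 -> 0.30, 6/60 -> 1.00
for den in (6, 12, 60):
    print(f"6/{den}: logMAR = {snellen_to_logmar(6, den):.2f}")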
All OCT images were obtained through a dilated pupil with either a high-definition 5-line raster scan protocol (length 6 mm, spacing 0.075 mm; Cirrus HD-OCT; Carl Zeiss Meditec, Dublin, CA) or a line scan protocol (line scans of 30°, composed of 100 averaged images; Heidelberg Spectralis OCT; Heidelberg Engineering, Heidelberg, Germany). For each enrollee, this protocol was applied both vertically and horizontally and centered on the fovea in both eyes. The OCT images (vertical and horizontal) that passed through the central fovea were selected for the analysis of morphologic changes in the PRL and the measurement of the ONL thickness. Two authors (J.Y. and C.J.), who were masked to the information on BCVA and symptom duration, independently evaluated all the OCT images, and a senior retinal specialist (G.X.) acted as arbiter in cases of disagreement between the two authors. 14 The foveal ONL thickness was the average of the distances between the internal limiting membrane and the external limiting membrane at the center of the fovea measured from the horizontal and vertical images, respectively. The difference in the foveal ONL thickness was defined as the difference between the foveal ONL thickness of the CSC eye and that of the contralateral eye. The measurements were made manually using the supplied software (SW version 7.0.1.290; Carl Zeiss Meditec, Inc; or in 1:1 mm mode; HRA/Spectralis Viewing Module 6.0.9.0; Heidelberg Engineering).
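A small sketch of the thickness bookkeeping defined above, assuming measurements in micrometres; the function names and numbers are invented for illustration:

def foveal_onl_thickness(horizontal_um: float, vertical_um: float) -> float:
    # Average of the foveal measurements from the horizontal and
    # vertical line scans, as defined in the Methods.
    return (horizontal_um + vertical_um) / 2.0

def onl_thickness_difference(csc_eye_um: float, fellow_eye_um: float) -> float:
    # CSC eye minus contralateral eye; negative values mean thinning.
    return csc_eye_um - fellow_eye_um

csc_eye = foveal_onl_thickness(70.0, 74.0)        # 72.0 um (hypothetical)
fellow_eye = foveal_onl_thickness(100.0, 104.0)   # 102.0 um (hypothetical)
print(onl_thickness_difference(csc_eye, fellow_eye))  # -30.0 um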
When classifying the PRL appearance, the intragrader repeatability and intergrader reproducibility were determined for all the OCT images evaluated by 2 authors (J.Y. and C.J.), who each read all the OCT images twice, at 4-month intervals. Both intragrader repeatability and intergrader reproducibility were assessed as percentage agreement. 16,17 Kappa (k) statistics and the corresponding 95% confidence intervals were also reported using the guidelines proposed by Koch and Landis 18,19 : >0.80 = near perfect agreement, 0.61 to 0.80 = substantial agreement, 0.41 to 0.60 = good agreement, and 0.21 to 0.40 = fair agreement. All the scans and measurements of foveal ONL thickness were made by J.Y. Repeatability of the foveal ONL thickness measurements was calculated from two horizontal scans taken in each eye during a single visit; 20 normal eyes and 20 CSC eyes were included. Intraclass correlation coefficients were used to assess the repeatability of measurements (intraclass correlation coefficient values of 0.81-1.00 indicated almost perfect agreement between repeated measurements; values <0.40 indicated poor to fair agreement). 20 The data were analyzed with SPSS for Windows version 21.0 (SPSS, Chicago, IL). The Kolmogorov-Smirnov test was used to confirm the normality of the data. Descriptive statistics were calculated, including medians, means, proportions, and frequencies. Either the Kruskal-Wallis test or one-way ANOVA, followed by post hoc multiple comparisons, was used to test the differences in symptom duration, BCVA, and the foveal ONL thickness difference between the eyes classified in three or more categories, whereas either the Mann-Whitney U test or a t-test was used for these comparisons between eyes classified in two categories. Either Pearson correlation coefficient or Spearman correlation coefficient was used to examine the correlation between BCVA, symptom duration, and the difference in foveal ONL thickness. A P value of <0.05 was considered statistically significant.
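The agreement statistics described above can be illustrated as follows (a sketch with hypothetical grader labels, not the study's data; assumes scikit-learn is installed):

from sklearn.metrics import cohen_kappa_score

# Hypothetical PRL classifications of 12 OCT images by two graders:
# 0 = smooth, 1 = granulated, 2 = scattered dots
grader_a = [0, 0, 1, 1, 1, 2, 2, 0, 1, 2, 0, 1]
grader_b = [0, 0, 1, 1, 2, 2, 2, 0, 1, 2, 0, 1]

# Percentage agreement
agreement = sum(a == b for a, b in zip(grader_a, grader_b)) / len(grader_a)
print(f"agreement = {agreement:.1%}")

# Cohen's kappa, read against the Landis and Koch cut-offs quoted above
print(f"kappa = {cohen_kappa_score(grader_a, grader_b):.2f}")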
In the OCT images, all 222 eyes presented well-circumscribed serous retinal detachment in the macular region. However, the appearance of the PRL outer border varied and could be classified into three groups: smooth, with or without PRL defect; granulated, with or without protruding foveal PRL; or in the form of scattered dots attached to the external limiting membrane (Figure 1). The classification of the appearance of the PRL showed good repeatability and reproducibility. The intragrader repeatability scores for J.Y. and C.J. were similar, with 95.1% agreement (k = 0.94, 95% confidence interval 0.89-0.98) and 96.9% agreement (k = 0.96, 95% confidence interval 0.92-0.99), respectively. The intergrader reproducibility was also satisfactory, with an agreement rate of 91.0% (k = 0.89, 95% confidence interval 0.82-0.95). The measurement of the foveal ONL thickness showed good repeatability, with an intraclass correlation coefficient value of 0.953 for normal eyes and 0.986 for CSC eyes.
Eyes with different PRL outer border appearances had significantly different symptom durations, BCVA, and differences in foveal ONL thickness (Tables 1-3). The eyes with a smooth PRL outer border had the shortest symptom duration, best BCVA, and lowest foveal ONL thickness difference (all P = 0.00, Table 1), whereas the eyes with scattered dots of PRL had the longest symptom duration, worst BCVA, and the greatest difference in foveal ONL thickness.
Some of the eyes with a smooth PRL outer border had a PRL defect involving the fovea or the extrafoveal region. The three different eye types all had similar symptom durations, whereas the eyes with a foveal PRL defect had poorer BCVA and a greater foveal ONL thickness difference than the eyes either with no PRL defect or with an extrafoveal PRL defect (all P = 0.00, Table 2). Eyes with a granulated PRL outer border, either with or without a protruding foveal PRL, had similar BCVAs and foveal ONL thickness differences (P > 0.05, Table 3), whereas those with a protruding foveal PRL had longer symptom durations (P = 0.018, Table 3).
Discussion
This study demonstrates that, in eyes with active CSC, the appearance of the PRL outer border can be classified into three main groups, either smooth, granulated, or in the form of scattered dots, and that these different manifestations are associated with a 10-fold difference in the median symptom duration (18, 180, and 1,855 days, respectively), a significant difference in the median BCVA (6/10, 6/15, and 6/120, respectively), and an almost 2-fold difference in the mean foveal ONL thickness difference (−16, −32, and −60 μm, respectively) (Table 1). Furthermore, eyes with a PRL defect involving the fovea had relatively poor BCVA and a notably reduced ONL thickness in the early phase of the disease (Table 2).
Besides the mutual correlations between symptom duration, BCVA, and the foveal ONL thickness difference, which are consistent with previous findings, [8][9][10][11] we also found that these parameters correlated closely with the different types of PRL appearance: smooth, granulated, or in the form of scattered dots attached to the external limiting membrane. Although the reasons for these correlations are not fully understood, they might be explained in the following way. Normally, the outer parts of the photoreceptor outer segments are phagocytized continuously by the retinal pigment epithelium, whereas the inner parts are regenerated at the junction with the inner segment. 7,10,21 However, in CSC eyes, it was speculated that a lack of phagocytosis by the retinal pigment epithelium leads to the elongation of the outer segments. 7,10 Although the PRL outer border was designated "smooth" in the first group (Table 1), the OCT images showed that the PRL layer changed slightly and, in some cases, was a little thicker than normal (Figure 1, A-C). As the retinal detachment proceeded, disintegration of the outer segments and phagocytosis by macrophages or microglial cells of the outer segments would probably lead to an inhomogeneity of the PRL and granulation of the posterior surface of the detached retina. 12,14,22 Furthermore, as the period of serous retinal detachment increased, photoreceptor cell death increased, with a corresponding reduction in ONL thickness (Table 1). 8,[23][24][25] This would possibly lead to the reduction of PRL renewal and/or the asynchronous elongation of the outer segments, thus contributing to the uneven outer border of the granulated PRL (Figure 1, D and E). As the CSC lasted for quite a long time (median duration of the third group: 1,855 days), the foveal ONL thickness decreased significantly (Table 1), which may suggest a further increase in photoreceptor cell death. 8,[23][24][25] Theoretically, this would inevitably lead to a marked reduction in PRL renewal. Moreover, the scattered dots, which were assumed to be phagocytes with phagocytized outer segments, 22 might indicate that macrophages or microglial cells continued to migrate and phagocytize the PRL. Both of these factors could have contributed to the near absence of PRL in the eyes with scattered dots attached to the external limiting membrane ( Figure 1F).
In addition to the changes in the appearance of the PRL, the ONL thickness continued to decrease as the symptom duration increased, which is consistent with the finding of Hata et al 8 (Table 1). It is speculated that continuous ONL thinning results from the continuation of photoreceptor cell death after retinal detachment, through apoptosis, necroptosis, autophagy, and macrophage or microglial infiltration. 8,[23][24][25] We also found that among the eyes with a smooth PRL outer border, which had similar symptom durations, the reduction in the foveal ONL thickness in eyes with a PRL defect involving the fovea was nearly double that in eyes either with no PRL defect or with an extrafoveal PRL defect (Table 2: −26 vs. −13 μm or −13 μm, respectively). The foveal ONL thickness difference in the eyes with a foveal PRL defect (−26 μm) was close to that found in the eyes with a granulated PRL outer border (−32 μm), although the latter group had a much longer symptom duration (median duration: 30 vs. 180 days, respectively). The overlap in their location suggests that the PRL defect and the thinning of the ONL are closely related. The exact mechanism remains unknown, but the following scenario is possible. It has been reported that neuronal defects in one compartment can trigger cellular degeneration in distant parts of the neuron. 26 Similarly, the partial detachment of the PRL, which is part of the photoreceptor cell, can lead to the impairment of the cell bodies (ONL). Furthermore, when the PRL is partially pulled from the retina, which is considered to cause the PRL defect, 15 some photoreceptor cell bodies (ONL) can also be pulled from it. Moreover, the photoreceptor cell death that occurs after retinal detachment, through various mechanisms, can reduce PRL renewal, and thus contribute to different PRL changes, including PRL defects.
Photoreceptor layer defects and PRL granulation both presented with an uneven PRL outer border but differed in several respects. Although photoreceptor cell death might contribute to both changes, the PRL defect is considered to be primarily caused by subretinal exudation in the early phase of CSC, 15 whereas it is speculated that PRL granulation results from the disintegration and/or phagocytosis by macrophages or microglial cells of the photoreceptor outer segments. 12,14,22 A PRL defect can be detected much earlier than PRL granulation (median duration: 28 days for PRL defect 15 vs. 180 days for PRL granulation), and in OCT images, the PRL defect is usually focal (in our group: average 665 μm, range 141-3,693 μm), whereas PRL granulation extends throughout the detached retina. Symptom duration, determined from the recollection of the patient, is subjective and sometimes ambiguous in CSC. 14 Therefore, the treatment timing of CSC, which mainly depends on the duration of the symptoms, remains somewhat arbitrary. [27][28][29][30] In this study, we compared the symptom durations, BCVA ranges, and reductions in foveal ONL thickness associated with each type of change in the PRL. These data showed that the appearance of the PRL changed as the symptom duration increased, the BCVA decreased, and the foveal ONL thinned (Table 1). In this study, we also identified some partly inconsistent cases: eyes with similar symptom durations differed in the appearance of their PRL, their BCVA, and the reduction in their foveal ONL thickness (Table 2); or conversely, eyes with different symptom durations were similar in these parameters (Table 3). Therefore, our study suggests that, although these four aspects are closely correlated, they may not always behave in a synchronous manner. Symptom duration alone may be insufficient to indicate all the other clinical characteristics during an episode of CSC. Therefore, besides symptom duration, other objective parameters, such as the change in the PRL or the degree of ONL thinning, may be helpful when considering the timing of treatment. This study was limited by its cross-sectional design, and further longitudinal studies involving treatment might tell us more.
In conclusion, PRL appearance, BCVA, the reduction in foveal ONL thickness, and symptom duration correlated closely, but these parameters may behave asynchronously in some CSC eyes. These objective parameters could be used to complement symptom duration when considering the timing of CSC treatment. | 2018-04-03T05:35:23.730Z | 2018-02-01T00:00:00.000 | {
"year": 2018,
"sha1": "03a273e89b4c5462f9166b09b21988b67370dd15",
"oa_license": "CCBYNCND",
"oa_url": "https://journals.lww.com/retinajournal/Fulltext/2019/06000/CORRELATIONS_BETWEEN_CHANGES_IN_PHOTORECEPTOR.12.aspx",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "03a273e89b4c5462f9166b09b21988b67370dd15",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
10267503 | pes2o/s2orc | v3-fos-license | Homogenization of a singular random one-dimensional PDE
This paper deals with the homogenization problem for a one-dimensional parabolic PDE with random stationary mixing coefficients in the presence of a large zero order term. We show that under a proper choice of the scaling factor for the said zero order terms, the family of solutions of the studied problem converges in law, and describe the limit process. It should be noted that the limit dynamics remain random.
Introduction
Our goal is to study the limit, as ε → 0, of a linear parabolic PDE of the form
∂u ε /∂t (t, x) = ∂/∂x ( a(x/ε) ∂u ε /∂x (t, x) ) + (1/√ε) c(x/ε) u ε (t, x),   (1.1)
where a and c are stationary random fields, and c is centered.
Let us recall (see [1]) that in the periodic case the equation admits homogenization under the natural condition ⟨c⟩ = 0 (⟨ · ⟩ stands for the mean value) and that the homogenized operator takes the form â ∂ 2 /∂x 2 + ĉ, with constants â and ĉ.
In contrast with symmetric divergence form parabolic problems, in the presence of the lower order terms the asymptotic behaviour of operators with random coefficients might differ a lot from that of periodic operators.
The homogenization problem for parabolic operators whose coefficients are periodic in the spatial variables and random stationary in time was studied in [4,5,14]. It was shown that, under natural mixing assumptions on the coefficients, the critical rate of the potential growth is of order 1/ε. In this case the limit equation is a stochastic PDE.
If the oscillating potential is random stationary (statistically homogeneous) in the spatial variables, then the range of the oscillations (the power of ε −1 in front of the potential c) should depend on the spatial dimension.
In this work we deal with a one-dimensional spatial variable and show that the range of oscillation should be of order 1/√ε. This means that for larger powers of 1/ε the family of solutions is not tight as ε → 0, while for smaller powers of 1/ε the contribution of the potential is asymptotically negligible. It turns out that the Dirichlet forms technique, which is usually quite efficient in homogenization problems, does not apply to problem (1.1), because one cannot prove any lower bound for the quadratic form corresponding to the operator (1.1). This is due to the fact that the problem is stated on the whole line R, and not on a compact interval, and that the coefficients of the operator are a.s. unbounded, see the discussion in Section 6. Instead we use the direct approach combining the Feynman-Kac formula with several correctors, Itô calculus and martingale convergence arguments.
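A back-of-the-envelope justification of the 1/√ε threshold (this heuristic is ours, replacing the diffusion by a process whose local time density L y t is of order one):

\[
\int_0^t c\Big(\frac{X_s}{\varepsilon}\Big)\, ds
= \int_{\mathbb R} c\Big(\frac{y}{\varepsilon}\Big)\, L^y_t \, dy
= \varepsilon \int_{\mathbb R} c(z)\, L^{\varepsilon z}_t \, dz .
\]

The last integral aggregates on the order of 1/ε weakly correlated, centred contributions of size O(1), so by the central limit theorem it fluctuates at order ε −1/2 ; the whole expression is then of order ε · ε −1/2 = √ε. Normalising by ε −1/2 therefore yields a nontrivial O(1) limit, while larger powers of 1/ε blow up and smaller ones vanish, as claimed above.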
The main result of the paper (see Theorem 2.2) states that under proper mixing conditions the solution u ε of eq. (1.1) converges in law to a random field in which B · and W · are independent Brownian motions, E and L y t are respectively the expectation and the local time related to √ã B · , and ã, c̃ are constants. The interpretation of this expression is given in the last section of the paper. It is shown that the effective equation is not a standard SPDE but rather a parabolic PDE with random coefficients.
Let us give an intuitive explanation of our result. The Feynman-Kac formula for the solution of eq. (1.1) yields u ε (t, x) = E [ u ε (0, X ε,x t ) exp ( (1/√ε) ∫ 0 t c(X ε,x s /ε) ds ) ], where E means expectation with respect to the law of the diffusion X ε,x · , the random field c being frozen (or "quenched"). Under the assumptions which we shall make below, one can apply a version of the functional central limit theorem, which tells us that (1/√ε) ∫ 0 x c(y/ε) dy converges weakly towards c̄ W (x), where W is a standard Wiener process. Now the exponent in the above Feynman-Kac formula reads (1/√ε) ∫ 0 t c(X ε,x s /ε) ds = ∫ R L y,ε t dW ε (y), with W ε (y) := (1/√ε) ∫ 0 y c(z/ε) dz, where L y,ε t denotes the local time at time t and point y of the diffusion process {X ε,x }. One might expect that the last integral converges towards the integral of the limiting local time, with respect to the limiting Wiener process. This is one of the results which will be established in this paper.
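The functional CLT step above can be checked numerically; the sketch below is ours, with c taken to be a Rademacher field constant on unit cells (bounded, centred, finite-range dependent, hence mixing) rather than the paper's general field:

import numpy as np

rng = np.random.default_rng(0)

def rescaled_integral(eps, x=1.0, n=20000):
    # One sample of eps**(-1/2) * int_0^x c(y/eps) dy for a centred
    # stationary field c taking values +/-1, constant on unit cells.
    y = np.linspace(0.0, x, n, endpoint=False)
    cells = (y / eps).astype(int)              # unit cell containing y/eps
    values = rng.choice([-1.0, 1.0], size=cells[-1] + 1)
    c = values[cells]                          # c(y/eps) on the grid
    return eps ** -0.5 * c.sum() * (x / n)     # Riemann sum of the integral

# The sample variance should stabilise as eps -> 0, consistent with
# convergence in law of the rescaled integral to a Gaussian limit.
for eps in (0.1, 0.01, 0.001):
    samples = [rescaled_integral(eps) for _ in range(200)]
    print(f"eps={eps}: sample variance = {np.var(samples):.3f}")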
Our paper is organized as follows. In Section 2, we formulate our assumptions, and the results. In Section 3, we prove some weak convergence results. Section 4 is devoted to the proof of the pointwise convergence of the sequence u ε (t, x), while Section 5 is concerned with convergence in the space of continuous functions. Finally in Section 6 we discuss the limiting PDE.
Set up and statement of the main result
We make the following assumptions: (A.1) There exist non-random constants c and C such that 0 < c ≤ a(x) ≤ C for all x ∈ R (2.1). (A.2) The coefficients {a(x), x ∈ R} and {c(x), x ∈ R} are stationary random fields defined on a probability space (Σ, A, P ), and we assume that E c(0) = 0, where E denotes expectation with respect to the probability measure P . (A.3) Let F_x := σ{a(y), c(y); y ≤ x}, F^x := σ{a(y), c(y); y ≥ x}.
We assume that the random fields a and c are φ-mixing in the following sense. Define, for h > 0, the mixing coefficient φ(h) with respect to the σ-algebras introduced above, and suppose that φ(h) decays sufficiently fast as h → ∞. Consider now the family of Dirichlet forms {E ε,σ , ε > 0, σ ∈ Σ} on L 2 (R) defined by E ε,σ (u, v) = ∫ R a(x/ε) u ′ (x) v ′ (x) dx, with domain H 1 (R). For each ε > 0, σ ∈ Σ there exists a unique self-adjoint operator L ε,σ with domain D(L ε,σ ), such that E ε,σ (u, v) = −(L ε,σ u, v) for u ∈ D(L ε,σ ), v ∈ H 1 (R). For each initial point x ∈ R, σ ∈ Σ and ε > 0, there exists a continuous Markov process {X ε,σ,x t , t ≥ 0} defined on some probability space (Ω, F , P ε,x,σ ) whose generator is L ε,σ and which starts at time t = 0 from x. The probability may depend on the three parameters ε, x, σ. x will be fixed throughout this paper, so we drop it from now on. Note that the process {X ε,x t , t ≥ 0} is in fact defined on the probability space (Σ × Ω, A ⊗ F , Q ε ) (as such, it is not a Markov process), where the probability Q ε on the product space Σ × Ω is defined as Q ε (A × B) := ∫ A P ε,σ (B) P (dσ), A ∈ A, B ∈ F . The Feynman-Kac formula allows us to write down an explicit formula for the solution of eq. (1.1): u ε (t, x) = E ε,σ [ u ε (0, X ε,x t ) exp ( (1/√ε) ∫ 0 t c(X ε,x s /ε) ds ) ], where E ε,σ denotes expectation with respect to P ε,σ .
Considering assumption (A.2), we define the finite quantities in (2.6). In view of Theorem 5.1 and Lemma 5.1 from [11], we may state the following theorem.
Theorem 2.1. We have the following convergence, P a.s.: The main result of this paper is the following theorem.
where W denotes a one-dimensional standard Brownian motion defined on the probability space (Σ, A, P ) and L y t is the local time at time t and point y of the process {X t , t ≥ 0} defined on (Ω, F , P). Then u ε ⇒ u in law in C(R + × R), as ε → 0.
We introduce the notation Y ε,x t := (1/√ε) ∫ 0 t c(X ε,x s /ε) ds. The first step in the proof of Theorem 2.2 is to establish the weak convergence of the pair (X ε,x t , Y ε,x t ), which is done in the next section.
Weak convergence
The main result of this section is the following theorem.
weakly, as ε → 0, where, as above, L y t is the local time at point y and time t of the Brownian motion {X t , t ≥ 0} defined on (Ω, F , P), and {W y , y ∈ R} is a Wiener process defined on (Σ, A, P ), so that (X, L) and W are independent. Theorem 3.1 will follow easily from Propositions 3.7 and 3.10, as we shall see at the end of this section. Note that all we shall need in the next section is both Propositions 3.7 and 3.10.
Let us first state a consequence of Aronson's estimate, see Lemma II.1.2 in [16]: There exists κ > 0, which depends only on c and C in (2.1), such that for all ε > 0, r > 0, We next prove the easiest part of the above result, i.e. we give a proof of Theorem 2.1, since we shall need some of its details later.
Proof of Theorem 2.1. Let {χ(x), x ∈ R} be the zero mean random process given by the formula We note that from Birkhoff's ergodic theorem (see e.g. Theorem 24.1 in [3]), χ(x)/x → 0, P a.s., as |x| → ∞. Moreover, this random process satisfies the two relations: and We now define It follows from the Itô-Fukushima decomposition (see [7] or Theorem 0.10 in [11]) and (3.2) that (here and further below, M X ε,x denotes the martingale part of the process X ε,x ) is a P-martingale. Moreover, its quadratic variation is given by It will be proved below in Lemma 3.9 that in Q ε probability. It now follows from well-known results that P a.s., where {B t , t ≥ 0} is a standard Brownian motion defined on the probability space (Ω, F , P). Moreover, for all T > 0, consequently P a.s., Let Φ denote the solution of the ordinary differential equation: which is defined as follows: We let and We first prove the following proposition. Proof. Denote, for . According to assumptions (A.1), (A.2), (A.3) and the functional central limit theorem (see e.g. [2], pages 178, 179), it follows that
where {W 1 (x), x ≥ 0} and {W 2 (x), x ≥ 0} are mutually independent standard Brownian motions. Finally we denote by {W (x), x ∈ R} the process defined by It remains to show why Theorem 3.1 follows from Theorem 2.1 and Proposition 3.3. First we define, for x ∈ R, We have the following lemma.
Proof. Since a(·) is bounded away from zero, a −1 (·) is bounded. Hence the collection of random functions k ε is tight in C(R). It then suffices to show that the finite dimensional marginals converge in law to those of the deterministic function k. But from Birkhoff's ergodic theorem, for any in P a.s., as ε → 0.
Denote by C + (R) the space of continuous and increasing functions on R, and by S the map from C(R) × C + (R) into C(R). We have the following lemma.
Lemma 3.5. The mapping S is continuous, from E = C(R) × C + (R), equipped with the product of the locally uniform topology of C(R) × C(R), into C(R), equipped with the locally uniform topology.
Proof. It suffices to show that for each But this follows from Lemma 5.8 in [8].
We now have the following lemma. Lemma 3.6. As ε → 0, x 0 W (z) dz and W ε and F ε are defined in (3.4) and (3.5) respectively.
The next step is to show that the triple (X ε , W ε , F ε ) converges. This is essentially a consequence of the three following facts: X ε converges, (W ε , F ε ) converges, and the two limits X and (W, F ) are defined on (Ω, F , P) and (Σ, A, P ) respectively. We now prove that fact rigorously.
Proof. We first choose two arbitrary functionals P a.s., and from Lemma 3.6, as ε → 0. Hence, from the Bounded Convergence Theorem, we conclude that
It now suffices to note that
It follows from Lemma 3.8 that in other words and M X ε,x denotes again the martingale part of the process X ε,x . In particular the quadratic variation of M ε,x is given by the quantity and the joint quadratic variation of M ε,x and Z ε is and by virtue of the Itô-Fukushima decomposition (see [7] or Theorem 0.10 in [11]), we get for |x| ≤ M + 1 The result follows, by letting M → ∞, with the help of Lemma 3.2. Define Proof. Denote θ(x) = 1/a(x) − ⟨1/a⟩. Then θ(x) is a bounded stationary field with zero mean. Letting and repeating the argument in Lemma 3.8, we get where In the same way as in the proof of Proposition 3.7, one can show that the families {ε 3/2 Θ( . Indeed, Θ is constructed from θ exactly as Φ from c. Moreover, θ is, exactly as c, a stationary, mixing, bounded and zero-mean random field. Now, multiplying the relation (3.7) by √ε, we conclude that For any f ∈ C b (R) and ε > 0, let us define the process {N f,ε t ; t ≥ 0} by We now prove the following proposition.
Proposition 3.10. P a.s., Proof. Since (X ε,x − Z ε ) converges to zero in probability, (N f,ε t , X ε,x t ) behaves as ε → 0 exactly as (N f,ε t , Z ε t ), hence we consider the two-dimensional martingale (N f,ε t , Z ε t ), and compute its associated bracket process, which takes values in the set of 2 × 2 symmetric matrices. We have Combining Theorem 2.1, Lemmas 3.5 and 3.9 we obtain that this R 4 -valued process converges P a.s. in P law towards We then conclude that P a.s., The statement below is a straightforward consequence of Birkhoff's ergodic theorem.
Proposition 3.11. For any N > 0, x ∈ R and f ∈ C(R) the following convergence holds P a.s.
We now establish the version of (3.6) for ε = 0, which is an Itô type formula for the process {F (x + X t ), t ≥ 0}, where F (y) := (c/a) ∫ 0 y W (z) dz, y ∈ R. More precisely, we have the following lemma.
Lemma 3.12. For any t ≥ 0 and x ∈ R, we have Proof. We prove this formula by using smooth approximations of the process {W }, obtained by convolution. Let ρ be a C 2 0 (R) function such that ρ ≥ 0, supp(ρ) ⊆ [−1, 1] and ∫ R ρ(y) dy = 1. Define now ρ n (y) := nρ(ny), W n (y) := (ρ n ∗ W )(y). From the uniform continuity of W on compacts, ‖W n − W ‖ C(K) → 0, P a.s., as n → ∞, for any compact set K in R. Moreover, taking into account the fact that W is a standard Brownian motion, we get for y ∈ R. Set Itô's formula applied to the process {F n (x + X t ), t ≥ 0} gives Recall that {x + X t , t ≥ 0} is a non-standard Brownian motion independent of {W }. It is easy to see that the left-hand side in the last formula tends to F (x + X t ) − F (x), P × P a.s., as n → ∞. Moreover, since W n (x + X s ) → W (x + X s ), ds × P × P a.e., as n → ∞, and moreover the sequence {(W n (x + X s )) 2 , n ≥ 1} is ds × P × P uniformly integrable on [0, t] × Σ × Ω, thanks to (3.10). Finally, from the occupation time formula for continuous semimartingales (see e.g. Corollary 1.6, page 209 in [15]), with {L · t } denoting the local time of the process {X s , 0 ≤ s ≤ t}, in L 2 (Σ), P a.s., as n → ∞ (for more details see Section 5.7 in [10]). We used again the fact that the Brownian motions {X t , t ≥ 0} and {W (y), y ∈ R} are independent. Passing now to the limit in the formula (3.11) we get the desired result.
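The occupation time formula used in the proof above lends itself to a quick numerical illustration (our own sketch: a discretised Brownian path, with the local time density estimated from a histogram of the occupation measure):

import numpy as np

rng = np.random.default_rng(1)

# Discretised Brownian path X on [0, t]
t, n = 1.0, 200_000
dt = t / n
X = np.concatenate(([0.0], np.cumsum(np.sqrt(dt) * rng.standard_normal(n))))

f = np.cos  # any bounded test function

# Left-hand side of int_0^t f(X_s) ds = int_R f(y) L_t^y dy
lhs = f(X[:-1]).sum() * dt

# Histogram estimate of the local time density L_t^y
counts, edges = np.histogram(X, bins=400)
dy = edges[1] - edges[0]
mid = 0.5 * (edges[:-1] + edges[1:])
L = counts * dt / dy                # occupation time per unit space
rhs = (f(mid) * L).sum() * dy

print(lhs, rhs)  # the two sides agree up to discretisation error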
We can finally proceed with the following proof.
Proof of Theorem 3.1. Since the mapping is continuous, if we equip the three spaces with the topology of uniform convergence on compact sets, we first conclude from Proposition 3.7 that Hence from the formulas for M ε,x t and M ε,x , Z ε t above, and Lemma 3.9, we deduce easily that and consequently From Proposition 3.7, those convergences are joint with those of (X ε,x · , F ε ). Consequently The convergence Y ε,x t ⇒ Y x t now follows from (3.6) and (3.8). The result finally follows from the fact that all the above convergences are joint with that of X ε,x · .
Pointwise convergence of the sequence u ε
The first part of this section is devoted to establishing uniform integrability estimates for the exponent in the Feynman-Kac formula (Propositions 4.4 and 4.5) which are essential for the proof of the pointwise convergence part of Theorem 2.2, to which the second part of this section is devoted. We first define the following R + -valued random variables, for 0 < γ < 1/2, ε ≥ 0: We have the following lemma.
Proof. Due to the symmetry it is sufficient to estimate |W ε (x)| for x > 0. We have By Proposition 7.2.6. in [6] the process η t is stationary and |η t | ≤ c 1 a.s. with a non-random constant c 1 . Moreover, and thus we deduce from Doob's inequality Summing up over j ≥ 1, we deduce that The lemma is established.
Remark 4.2.
We can in fact show that, as ε → 0, provided again 0 < γ < 1/2, but we shall not use that result.
We next state a result, which is an immediate consequence of Lemma 3.2.
Lemma 4.3. There exists a continuous mapping
We next establish the following proposition.
where we have used Lemma 3.2 for the last inequality. The second factor on the r.h.s. of (4.2) can be estimated as follows The first term on the r.h.s. does not exceed 1. For the second one we have by Jensen's inequality where we have again used Lemma 3.2 for the last inequality. The result clearly follows.
Clearly, the same proof allows us to establish the slightly more general proposition.
We can now proceed with the following proof.
Proof of the pointwise convergence in Theorem 2.2. We will now show that for each (t, x) ∈ R + × R, u ε (t, x) ⇒ u(t, x), as ε → 0. We drop the parameters t and x from the notation. It suffices to show that for any ϕ ∈ C(R; [0, 1]), ϕ Lipschitz continuous, as ε → 0,
and (recall (3.6) and (3.5))
The fact that this Y equals the exponent in the Feynman-Kac formula for u(t, x) follows from (3.8). We first approximate Y ε by Y ε,M as follows. For each ε > 0, M > 0, let We postpone the proof of the following lemma.
Since the collection of random processes {W ε (y); y ∈ R} is P -tight, for all δ > 0, there exist N ∈ N and f δ,1 , f δ,2 , . . . , f δ,N , and finally Note that ‖f δ,k ‖ ∞ depends on δ. However, we can and will assume that for some 0 < γ < 1/2, for all δ > 0, k ∈ N. We now develop The last term in the above right-hand side is bounded in absolute value by δ. Now for 1 ≤ k ≤ N , We postpone the proofs of following lemmas.
Lemma 4.7. There exists a constant C, which depends only on t, f δ,k ∞ and the constants appearing in It follows readily from Propositions 3.10 and 3.11 together with Lemma 4.7 that, as ε → 0, Let B δ k , 1 ≤ k ≤ N , denote the sets defined exactly as the B δ,ε k 's, but with W ε replaced by W . The boundaries of those sets being of zero Wiener measure, we conclude from the last statement and the fact that W ε ⇒ W that as ε → 0, Now, in the same way as above we obtain All we need to conclude the proof is the next lemma. It remains to prove the four lemmas.
Proof of Lemma 4.7. There exist two constants c 1 and c 2 such that
The bound for the first factor on the right follows from Lemma 3.2, and the bound for the second factor follows easily from the boundedness of both f δ,k and d ds Z ε s .
Proof of Lemma 4.8. We have, by an argument similar to that in the proof of Lemma 4.6, as δ → 0. The end of the proof is similar to that of Lemma 4.6.
Proof of Lemma 4.9. This proof is similar to that of Lemma 4.8.
and on the set B δ k , using in particular (4.4), which clearly goes to 0, as δ → 0. The result follows.
Convergence in C(R + × R)
It remains both to prove the convergence of the finite dimensional distributions of u ε towards those of u, and to establish that the sequence {u ε ; ε > 0} is tight as a collection of random elements of C(R + × R).
Proof. We only sketch the proof, the details being identical to those of the proof of the pointwise convergence, as given in the previous section. For each 1 ≤ i ≤ ℓ, we define X ε i := X ε,xi ti and Y ε i := (1/√ε) ∫ 0 ti c(X ε,xi s /ε) ds.
We need to take the limit as ε → 0 in the quantity where ϕ ∈ C(R ℓ ; [0, 1]) is Lipschitz continuous. For that sake, referring to the notations in the previous section, for each δ > 0, 1 ≤ k ≤ N , we define Y δ,ε i,k := 2c We have that for each 1 ≤ i ≤ ℓ,
The first factor in the last expression contributes to the coefficient Φ(T, M, ξ γ,ε ) in (5.2), while the difference between the second factor and contributes to ρ ε . Now the absolute value of the second term in the right-hand side of (5.3) is dominated by cE(A ε + B ε ), where | 2008-06-16T08:07:52.000Z | 2008-06-01T00:00:00.000 | {
"year": 2008,
"sha1": "eb0898a6bf0a329e9c56bff421edda6b287df1d1",
"oa_license": "implied-oa",
"oa_url": "https://doi.org/10.1214/07-aihp134",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "95e8f8780602067f4e41a498ef91d7be85e92db7",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
13134136 | pes2o/s2orc | v3-fos-license | Framing overdiagnosis in breast screening: a qualitative study with Australian experts
Background The purpose of this study was to identify how the topic of overdiagnosis in breast cancer screening is framed by experts and to clarify differences and similarities within these frames in terms of problems, causes, values and solutions. Methods We used a qualitative methodology using interviews with breast screening experts across Australia and applying framing theory to map and analyse their views about overdiagnosis. We interviewed 33 breast screening experts who influence the public and/or policy makers via one or more of: public or academic commentary; senior service management; government advisory bodies; professional committees; non-government/consumer organisations. Experts were currently or previously working in breast screening in a variety of roles including clinical practice, research, service provision and policy, consumer representation and advocacy. Results Each expert used one or more of six frames to conceptualise overdiagnosis in breast screening. Frames are described as: Overdiagnosis is harming women; Stop squabbling in public; Don’t hide the problem from women; We need to know the overdiagnosis rate; Balancing harms and benefits is a personal matter; and The problem is overtreatment. Each frame contains a different but internally coherent account of what the problem is, the causes and solutions, and a moral evaluation. Some of the frames are at least partly commensurable with each other; others are strongly incommensurable. Conclusions Experts have very different ways of framing overdiagnosis in breast screening. This variation may contribute to the ongoing controversy in this topic. The concept of experts using different frames when thinking and talking about overdiagnosis might be a useful tool for those who are trying to negotiate the complexity of expert disagreement in order to participate in decisions about screening. Electronic supplementary material The online version of this article (doi:10.1186/s12885-015-1603-4) contains supplementary material, which is available to authorized users.
Background
Overdiagnosis in breast screening has become a highly contentious issue and source of strong disagreement amongst experts. In this paper we use the term "overdiagnosis" to mean the diagnosis through mammographic screening of an asymptomatic breast condition that is non-progressive or so slowly progressive that it would not otherwise have come to the patient's attention in her lifetime, and where this diagnosis provides no net benefit to the patient [1]. The possibility of overdiagnosis in breast screening was acknowledged from its early days of use. The idea that breast screening might lead to the detection of lesions that are "morphologically malignant but clinically benign" was raised as early as the 1970s ( [2], p490). Later it was also recognised that mammographic screening would uncover a significant number of in-situ cancers, at least some of which "might not have entered an invasive phase during their lifetime" ( [3], p14) and would likely fall into the category of overdiagnosis. Despite this, there was limited controversy about overdiagnosis when breast screening programs were being introduced in many Western countries during the 1980s and 1990s. This may have been partly because of poor outcomes from treatment of symptomatic breast cancers and the evidence-based promise of a 30 % reduction in population breast cancer mortality.
Since that time, however, the evidence-based estimates of the mortality benefit from breast screening have been revised and reduced [4,5]. In addition, improvements in breast cancer treatment are likely to have further reduced the potential impact of screening in the modern Western setting [5,6]. These developments have fostered a growing interest amongst breast screening experts about the significance of overdiagnosis, which is now a topic of major international concern [7][8][9].
Researchers and clinicians present many different views about overdiagnosis, and focus on different problems and solutions, including: preventing overdiagnosis harm [10]; communicating with women about overdiagnosis [11][12][13]; and quantification of overdiagnosis [14][15][16]. There are also big differences of opinion within these topics. Understanding how and why experts form their opinions about this complex issue, and sometimes arrive at opposing views, would add to our understanding of the current processes for early detection in breast cancer and assist those who seek to contribute to mammography screening policy, as well as those participating in consumer decisions about screening.
We conducted a detailed qualitative study of the views and opinions of Australian breast screening experts on a range of topics related to mammography screening. We used a framing approach to map and analyse experts' views on the issue of overdiagnosis. Framing describes the particular mind-set through which a topic is understood. The framing of an issue determines how the problem is conceived, what information is selected and the value judgements that are made. Different frames incorporate different, apparently self-evident, strategies to solve the perceived problem [17,18]. Frames can be used in politics or by institutions to convey a particular message or point of view [19]. Frames are not only used as deliberate tools: they are also used by individuals, often unconsciously, as a way of thinking about and making sense of a complex topic. Framing theory is particularly well-suited to the study of overdiagnosis because it allows for a detailed examination of different viewpoints held, and used, by experts about this contentious topic. We present our analysis of how experts framed the topic of overdiagnosis in breast screening. Our research questions were: How do Australian breast screening experts frame overdiagnosis? How do those frames present the problems, causal elements, value judgements and solutions relevant to overdiagnosis?
Methods
This study is part of a larger Australian National Health and Medical Research Council (NHMRC) funded project examining ethical issues in cancer screening in Australia [20]. One component of the larger project was a qualitative study of contemporary issues in breast cancer screening, using semi-structured interviews with influential breast screening experts. This paper is reporting on one aspect of this breast screening study. We defined "influential experts" as people working or researching in breast screening who influence the public, primary care practitioners and/or policy makers by engaging in one or more of: media commentary; academic or lay publications and presentations; senior service delivery management; membership of government advisory bodies, professional committees and/or non-government/consumer organisations related to breast screening. We sampled purposively from this population, seeking to obtain a wide diversity of views by inviting participants with a range of publicly aired positions [21]. We reasoned that perspectives on screening might be associated with professional backgrounds so we ensured that we included experts with a range of roles and responsibilities. See Table 1 for further participant details.
Table 1 Participant details: number interviewed (number approached but not interviewed)
Professional role*
• Non-clinical researchers 14 (3)
• Epidemiologists/biostatisticians 9 (1)
• Others [NOS] 5 (1)
• Administrators/managers 6 (2)
• Advocacy leaders 6 (7)
• Consumers working in advocacy 3 (6)
• Clinicians/researchers working in advocacy 3 (1)
Public stance on breast screening+
• Supportive 16 (9)
• Mostly supportive # 3 (1)
• Critical 6 (0)
• Unknown to researchers 8 (3)
*Note that some experts held more than one professional role. ^Most clinicians engaged in research to a greater or lesser extent. +We loosely categorised potential interviewees as being "supportive", "mostly supportive" or "critical" about breast screening based on publicly available commentary. #Broadly supportive of breast screening but with selected concerns about one or more elements of the program.
We identified potential participants through government advisory and advocacy bodies involved in breast screening, and by following up suggestions from colleagues and participants. We used information in the public domain to contact experts by email. Forty-six experts were contacted, and 33 (17 male, 16 female) participated in the study. Thirteen people either did not wish to participate (3), did not respond (9) or were unable to participate in the time available (1). We had a low response rate from senior community advocacy figures. Speculatively, this may have been due to a higher turnover of staff in these (largely volunteer) positions than in other professional roles. That is, the individuals may no longer have been contactable at the email addresses that we had access to. We continued sampling until we had good representation of a range of professional roles and until we reached thematic saturation in our analysis [22].
We used an interview format for in-depth exploration of the views and reasoning of experts. LP conducted semi-structured interviews from October 2012 to October 2013, meeting in the participant's or her own workplace, or talking over telephone if unable to meet in person. The interviews lasted between 39 and 105 min (average 66 min) and there was no observed difference between face-to-face and telephone interviews in terms of quality or length [23]. At the beginning of each interview, LP discussed her interest in the topic with the expert, explaining that she was a medical practitioner with clinical experience in breast screening, currently undertaking doctoral studies in cancer-screening ethics. She clarified that the purpose of the interviews was to glean the range of opinions amongst Australian experts about breast screening. The interviews drew loosely on a set of core questions designed to draw out the participant's views. We also sought to tailor each interview to the particular expertise and interests of the participants, and explored the leads and topics that arose throughout the discussion [22,24]. We encouraged the participants to talk about overdiagnosis, asking generally for interviewees' views on this topic, without pre-empting ideas about what might be considered important. We only pursued particular lines of enquiry about controversial elements (as informed by the literature) if this flowed on from preceding comments of the participant. An additional file outlines sample interview questions (see Additional file 1).
The interviews were taped, transcribed and deidentified. We used an inductive analytic methodology, developing a set of categories that captured the most important views and values in the experts' comments. Each interview was read repeatedly and coded in detail to capture views and values relevant to overdiagnosis. The analysis was conducted as an iterative process comprising detailed coding of individual transcripts (LP) and discussion and revision of the findings in group analysis meetings (all authors). We used framing theory to organise and understand different ways that experts thought about overdiagnosis, identifying the dominant frames in use and categorising important elements of each frame in terms of problems, causes, solutions and moral evaluation [18].
Ethics approval was granted from the Cancer Institute NSW Population & Health Services Research Ethics Committee [HREC/12/CIPHS/46] and the University of Sydney Human Research Ethics Committee [#15245]. All participants gave individual consent to be interviewed, and were free to withdraw from the study at any stage.
Results
We identified six frames that Australian breast screening experts used with regard to overdiagnosis ( Table 2).
Frame 1: overdiagnosis is harming women
"I would like to see breast cancer eradicated too but not at the expense of … potentially treating them with serious treatments for a condition that maybe didn't need to be found in the first place… To me, it's all about how do we run this program in a way that minimises the harm … without losing the benefit." (Expert #33, clinician) Experts who used this frame were passionate about the topic of overdiagnosis in breast screening and saw it as a major threat to the wellbeing of women. The frame emphasised both quantity and quality of harm. Harm quantity was described in terms of the high number of overdiagnosed cases compared to the number of lives saved by screening. Harm quality was discussed by highlighting the serious negative impact from each case of overdiagnosis, including both the psychological impact of a breast cancer diagnosis on a woman and her female relatives (for whom it has perceived risk implications), and the short and long term impact of unnecessary treatment on lifestyle and physical health. This framing of overdiagnosis as a serious problem was grounded in a strong commitment to avoiding harm in any public health program.
This frame encompassed two categories of solution. Experts who were enthusiastic about the potential benefits of screening suggested reducing overdiagnosis through a targeted, personalised screening program, matching recommended screening frequency to breast cancer risk as determined by factors such as breast density. This would enable the population to simultaneously retain benefits of screening and reduce harms. Experts who were more sceptical about the benefits accruing from breast screening preferred a more extreme solution: reducing overdiagnosis by decreasing overall breast screening participation. However, they assumed that cessation of public funding for the program was politically unlikely, and promoted more realistic solutions such the removal of governmental promotions and personalised screening invitations. This frame centres on the negative publicity generated by overdiagnosis discussions and the decrease in breast screening participation that might ensue. Underlying this concern is a firm belief in the net benefit of breast screening and a strong desire to have women avail themselves of life-saving opportunities. The frame delivers a choice between life and overdiagnosis: "saving a life is more important than the harm that's caused in damaging normal breasts." (Expert #3, clinician). Experts using this frame regarded overdiagnosis as a minor problem, for several reasons. Firstly, and most commonly, it was seen as an inevitable part of screening, particularly breast screening where cancer growth is variable and unpredictable. Secondly, the number of overdiagnosed cases was considered low relative to the total number of breast cancers picked up through the program. Finally, the harm associated with each overdiagnosed case was seen as low. This was justified in several ways: 1) individual women could not know whether or not their cancer was a case of overdiagnosis; 2) women (allegedly) disregarded the concept of overdiagnosis when considering treatment options; and 3) treatment for small, low-grade cancers (ie those most likely to be cases of overdiagnosis) was viewed as relatively benign. In addition to the lack of harm, the frame highlighted possible benefits from overdiagnosis. Although, by definition, an overdiagnosed cancer will not itself threaten a woman's life, experts suggested that as the woman would be at increased risk of a second breast cancer she would benefit from being identified and treated with tamoxifen. In this frame, personal autonomy and informed choice were important values in healthcare. However experts rejected the idea that stopping 'squabbling in public' might conflict with respecting womens' autonomy. Their central concern was not so much that overdiagnosis was mentioned, but that overdiagnosis was invariably (mis) represented as an important harm: "Harm is a term that's been developed by academics, along academic lines… There's a possibility of over diagnosis … it's not very much … you shouldn't call that harmful." (Expert #17, consumer advocate) Some experts used this frame with the view that informed choice was an unattainable goal, because overdiagnosis in breast screening is just so complex: "There's all this business of informed consent. Well, frankly, I think it's for the birds. I think it's a very difficult thing for people to have informed consent. When people argue a lot, you know, people that are informed, supposedly, argue, I don't know how you give informed consent. It's very difficult for the average layperson to understand." 
(Expert #9, clinician) There was also moral condemnation of the particular impact that negative publicity has upon disadvantaged women. This group was presented as being particularly likely to be confused by public debates, and vulnerable to screening disengagement: "There's probably people in the [suburbs of lower socioeconomic status] who stop going to screening. Because they're not as sophisticated … and they come from non-English speaking backgrounds. The message they get is that screening is not needed… It's okay if you're in the [suburbs of higher socioeconomic status] because you'll keep coming anyway." (Expert #29, clinician) In this frame, appropriate solutions focussed on preventing a fall in participation rates. They included: avoiding any implication that overdiagnosis is a harm; keeping discussions confined to academic circles; and informing women about overdiagnosis only when attendance is secured (such as at the point of mammogram or after diagnosis).
Frame 3: don't hide the overdiagnosis problem from women
"We should absolutely tell people, 'These are the benefits, these are the harms'; and some people say that public health benefits should be what we are aiming for, but for me I think you absolutely cannot compromise on telling people. It's just not something I'm prepared to do." (Expert #23, researcher NOS) This frame centres on the lack of communication about overdiagnosis from screening providers to women. Experts acknowledged that while some women prefer a simple advisory message about breast screening, others want an informed decision making process, with the readily available and easily-understood information. The current lack of communication about overdiagnosis was presented as a deliberate strategy by screening providers to avoid risking a decline in participation. In this frame, informed choice was an absolute right for individual women, taking priority over the delivery of population health benefits.
The solution was to make information about overdiagnosis available to women, despite the inherent complexities in the topic and the tension with trying to encourage participation: "I agree with you that the experts can't agree and how do you talk to women about it, and it is a very complex area and hard to talk about, but clearly an important issue in the context of screening… I think you have to share with women your uncertainty." (Expert #25, epidemiologist) This frame accommodated a variety of solutions, ranging from detailed publicising of overdiagnosis information in every screening pamphlet and advertisement to making details of possible harms from screening available upon request. In this frame, provision of information could coexist alongside government promotion of screening.
Frame 4: we need to know the overdiagnosis rate

"There is a recognition that there are tumours found that are either frankly non-progressive or are likely to progress so slowly they don't matter. I don't think too many people would say, 'Well that wouldn't exist at all'. The argument is over how much and the scale of that." (Expert #22, epidemiologist) In this frame, the main problem was overdiagnosis measurement and quantification. Experts spoke of overdiagnosis as being of indeterminate significance because of uncertainty about the overdiagnosis rate. They saw the wide range of estimates as a central conundrum, possibly explainable by different methodologies and variable data sets. A subsidiary problem was the inconsistent presentation of overdiagnosis figures, variably portrayed as acceptably low by comparing with the (large) number of cancers diagnosed, or as unacceptably high by comparing with the (smaller) number of lives saved by screening. This made it difficult to compare studies and understand the implications of overdiagnosis. In this frame, sloppy research methods aimed at generating quick or provocative publications were a particular problem, eliciting strong disapproval. The first step to solving this quantitative problem would be to reach consensus on the most reliable and robust ways to calculate and present overdiagnosis.
Frame 5: balancing harms and benefits is a personal matter
"Descriptively they're quite different … I don't think there is any formula for the balance… It's very subjective of the balance of disparate outcomes." (Expert #20, clinician) Through this frame, the problem was comparing harms and benefits of breast screening. Experts discussed both overdiagnosis harms and mortality benefits accruing from breast screening. They suggested that while each are likely to be important to women, current estimates about their rates meant that harms and benefits were closely balanced; in this situation, qualitative differences between the two made it impossible for experts to draw exact conclusions about where and when equipoise arose. In this frame, such uncertainty required that the public should assist with decision making. Experts explained that since individual attitudes to harms and benefits would determine what was perceived as the net outcome of screening, the process of decision making needed consumer input: it was insufficient to rely on pre-determined program values or system priorities. The frame encompassed two possible solutions. Some experts discussed seeking public assistance with decision making at the policy level, using a deliberative process such as a citizens' jury to make a ruling about the balance between benefits and harms: "I believe that for a lot of screening things there should be a community jury. There are some things that are obvious, that we can just proceed with them, but other things where there's a balance between the benefits and harms, I think we need some sort of deliberative democracy process." (Expert #21, researcher NOS) Others spoke of more explicit attempts to achieve informed consumer decision making, encouraging women to consider the net value of screening for themselves as individuals. They suggested screening participation decisions should be based on women's personal priorities rather than potentially coercive input from screening providers.
Frame 6: the problem is overtreatment

"I don't really believe in overdiagnosis as such. I mean, I think there's over treatment … Finding it is not the issue. Treating … how it's treated is the issue, as I see it." (Expert #9, clinician and provider) The final frame through which overdiagnosis was understood purposefully separated the treatment process from the screening process, and presented the problem as arising from treatment decisions. Several causal elements for the growing problem of overtreatment were presented: some experts spoke of the increasing sensitivity of radiological equipment, meaning that more and more lesions were identified. Others noted that diagnostic criteria for certain pathological entities were vague, and "not … easy to get inter-observer agreement on." (Expert #28, clinician) They discussed resulting disagreements about the threshold for atypia, with tendencies amongst some pathologists for 'overcalling' cancer so that benign changes were more likely to be named and treated as borderline lesions. Finally, experts commented on the limited research around natural history and management guidelines for low-risk lesions. Expert #28 (clinician) noted that, "a lot of those guidelines are based on reviews of data which are not robust" and suggested that they were instead driven by clinicians' observer bias and accepted by women with high levels of anxiety and fear. Women with low-risk lesions were perceived as undergoing aggressive treatments while, "you really wonder whether any of it was actually necessary." (Expert #13, clinician) In this frame, both mortality benefit and harm avoidance were valued. Thus, appropriate solutions in this frame maintained current screening parameters, and only altered downstream elements. Experts presented a range of solutions including: regular pathology updates on diagnostic criteria and thresholds; research into better prognostic tools (such as biological markers of aggression); development of more targeted/less harmful therapies; research into less aggressive treatment regimes for low-risk lesions; and patient-centred care for women with borderline lesions, relying on correlation between clinical, radiological and pathological findings to make a diagnosis and plan the management, rather than following set guidelines.
How experts used frames
Each expert used between one and four frames. Some experts employed two or more moderately incommensurable frames, and were often conscious of inherent contradictions. For example Expert #7 (clinician) used both the "stop squabbling in public" and "stop hiding the problem" frames, acknowledging the possible inconsistency of this position. However, none of the experts' discussions combined frames that were strongly incommensurable; for example, no experts used both the "overdiagnosis is harming women" and the "stop squabbling in public" frames. The "stop hiding the problem" frame was the most commonly used, and was adopted by experts working across all roles except consumer representation/advocacy. All (three) consumers working in advocacy roles used the "stop squabbling in public" frame.
There were observable patterns between experts' overall views on breast screening and their use of overdiagnosis frames. All experts who were critical of breast screening used the "don't hide the problem" frame, and none of them used the "stop squabbling in public" frame. Experts who were supportive of breast screening used one or other, but not both, of these frames (in approximately equal numbers), and were the only group to use the "stop squabbling in public" frame. Further detail on this is available in Additional file 2 (Tables S1-S2).
Discussion
It is recognised in the breast screening literature that experts hold differing opinions about overdiagnosis, but the basis for those differences has not been explored. We identified six overdiagnosis frames in use by Australian breast screening experts and analysed the elements of each frame. There was considerable variation between frames, in terms of: how overdiagnosis was problematised, what information was highlighted as being relevant, what values were prioritised as being important, and what solutions were suggested. These multiple points of difference explain much of the controversy and disagreement that surrounds this important topic.
To our knowledge, there has been no detailed empirical study on what and how breast screening experts think about overdiagnosis. Some journals have presented debates containing opposing arguments as a way of exploring some of the diversity within this topic [25,26]. Others have published letters to the editor in response to controversial elements within breast screening articles [27]. Our work builds upon and extends the existing literature, providing a comprehensive analysis of the frames used to talk about and understand overdiagnosis in breast screening. Previous research has suggested that consumers are largely unaware about overdiagnosis [12], but nevertheless an important avenue for future research would be to investigate whether women have pre-existing ideas and concerns about aspects of overdiagnosis that have not been captured within the frames presented here.
An understanding of the elements within different overdiagnosis frames will help those who work in, or consider participating in, breast screening [28,29]. The different frames may be a useful scaffold upon which to generate thoughtful discussion amongst practitioners. These frames also offer new tools for experts to clarify their own positions and to understand the opinions of others on overdiagnosis including views on whether and how it is a problem, and what solutions might be appropriate. This may facilitate recognition of points of agreement and form a basis for co-operative dialogue in the best interests of consumers [19]. Policy makers are faced with a baffling array of suggestions about what, if anything, should be done with regard to breast screening overdiagnosis. The experts who participated in this study offered a range of solutions, focusing on different points along the screening journey, including primary research, evidence translation and presentation, communication with consumers, screening practices, diagnostic practices, and treatment. By viewing these solutions in connection with the frame to which they belong, it becomes easier to see why one solution might be preferred over another, and by whom. Any management plan or policy is likely to need multiple solutions, and incommensurability between some frames will necessitate compromises and negotiations.
This study benefits from its open qualitative methodology, which allowed us to explore a topic about which there was little pre-existing knowledge. We were able to access the views and opinions of a range of influential individuals and expert stakeholders from different parts of Australia. Its strength lies in the depth of its enquiry and its ability to capture the complexity of the evidence base and value judgements underlying the range of different views. As with much qualitative work, we cannot make any predictions about the prevalence or pattern of our results within the wider population, and this may be a useful avenue for future survey research. While this study was limited to the Australian setting, much of the developed world has organised breast screening programs, comparable values, and access to the same body of scientific evidence, and thus the findings are likely to be broadly applicable across these countries. It is possible that experts who participated in our study were somehow different from those who were invited but did not participate. We sought to minimise any bias of this sort by ensuring that we interviewed experts with a range of attitudes to screening, and a wide variety of professional roles and experience.
Conclusions
Our results demonstrate that experts approach overdiagnosis in various ways, see a range of issues and values at stake, and are inclined to promote different solutions. This may be an important contributor to the ongoing controversy in this topic, and offers a new explanation for why some debates about overdiagnosis are so heated. The concept of experts using different frames when | 2015-09-23T00:31:53.000Z | 2015-08-28T00:00:00.000 | {
"year": 2015,
"sha1": "a940cfa1c3b58a968e4f6946ef07645c71a4427b",
"oa_license": "CCBY",
"oa_url": "https://bmccancer.biomedcentral.com/track/pdf/10.1186/s12885-015-1603-4",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "dd1c17b9d60f3f82cf7ac6cd21c3baf53850c651",
"s2fieldsofstudy": [
"Medicine",
"Political Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
259254740 | pes2o/s2orc | v3-fos-license | Accuracy of intravascular ultrasound-derived virtual fractional flow reserve (FFR) and FFR derived from computed tomography for functional assessment of coronary artery disease
Background Coronary computed tomography-derived fractional flow reserve (CT-FFR) and intravascular ultrasound-derived fractional flow reserve (IVUS-FFR) are two functional assessment methods for coronary stenoses. However, the calculation algorithms for these methods differ significantly. This study aimed to compare the diagnostic performance of CT-FFR and IVUS-FFR using invasive fractional flow reserve (FFR) as the reference standard. Methods Six hundred and seventy patients (698 lesions) with known or suspected coronary artery disease were screened for this retrospective analysis between January 2020 and July 2021. A total of 40 patients (41 lesions) who underwent intravascular ultrasound (IVUS) and FFR evaluations within six months after completing coronary CT angiography were included. Two novel CFD-based models (AccuFFRct and AccuFFRivus) were used to compute the CT-FFR and IVUS-FFR values, respectively. Invasive FFR ≤ 0.80 was used as the reference standard for evaluating the diagnostic performance of CT-FFR and IVUS-FFR. Results Both AccuFFRivus and AccuFFRct demonstrated a strong correlation with invasive FFR (R = 0.7913, P < 0.0001; and R = 0.6296, P < 0.0001), and both methods showed good agreement with FFR. The area under the receiver operating characteristic curve was 0.960 (P < 0.001) for AccuFFRivus and 0.897 (P < 0.001) for AccuFFRct in predicting FFR ≤ 0.80. FFR ≤ 0.80 was predicted with high sensitivity (96.6%), specificity (85.7%) and Youden index (0.823) using a cutoff value of 0.80 for AccuFFRivus. A good diagnostic performance (sensitivity 89.7%, specificity 85.7%, and Youden index 0.754) was also demonstrated by AccuFFRct. Conclusions AccuFFRivus, computed from IVUS images, exhibited a high diagnostic performance for detecting myocardial ischemia. It demonstrated better diagnostic power than AccuFFRct, and could serve as an accurate computational tool for ischemia diagnosis and assist in clinical decision-making.
Introduction
The functional evaluation of coronary artery disease (CAD) plays a significant role in diagnosis and guiding treatment strategies in patients with known or suspected CAD. The invasive fractional flow reserve (FFR) has the highest recommendation (class IA) for the evaluation of CAD in societal guidelines [1]. However, the adoption of FFR in daily clinical practice is limited because of the invasive nature of the procedure, requirement of pressure wire, and the administration of hyperemic agents [2][3][4]. The computation of FFR from coronary artery imaging may increase the utility of FFR assessment in clinical practice. Recently, there has been increasing interest in these novel applications of FFR derived from coronary artery imaging without invasive pressure wire and administration of hyperemic agents [5].
Coronary artery imaging, such as coronary computed tomography angiography (CTA), intravascular ultrasound (IVUS), and optical coherence tomography (OCT), can provide anatomical information, including the arterial lumen structure and plaque characteristics. CTA, a non-invasive imaging method, has been recommended as a potential test before invasive coronary angiography (CAG) in the outpatient setting [6]. OCT and IVUS, both invasive imaging methods, offer higher resolution than CAG and CTA. Moreover, with anatomical information such as plaque characteristics, stent placement can be improved, and stent-related challenges can be minimized. However, anatomical information alone cannot reveal the functional significance of the target vessels. FFR derived from coronary artery imaging combines anatomical and functional assessment.
In a previous study, a new method called AccuFFRivus was developed to calculate FFR using IVUS and CAG [7]. The method showed good diagnostic performance and strong correlation with FFR, indicating its potential for hybridizing coronary anatomical and physiological evaluation of CAD in the catheterization laboratory [7]. On the other hand, based on computational fluid dynamics, CT-FFR can be calculated from CTA [8]. It is a non-invasive technique for the functional assessment of coronary stenosis, allowing for a comprehensive coronary assessment outside the catheterization laboratory [9]. CT-FFR is recommended as the "gatekeeper" for coronary angiography and intervention, primarily in the outpatient setting [9][10][11]. Recently, a CT-FFR method called AccuFFRct was proposed, which can efficiently calculate non-invasive FFR based on anatomical and physiological information [12].
Both CT-FFR and IVUS-FFR can enable a one-stop assessment of the anatomical and functional aspects of CAD [13]. However, their computational algorithms and clinical utilization are quite different. Using FFR as the reference standard, this study aimed to compare the diagnostic performance between AccuFFRivus and AccuFFRct in real-world clinical practice.
Baseline clinical and lesion characteristics
During the study period (January 2020 to July 2021), 670 patients (698 vessels) with known or suspected CAD who underwent CAG were screened. Among these patients, 59 patients (61 lesions) underwent both IVUS and FFR in our catheter lab. A total of 40 consecutive patients (41 lesions) who had IVUS and FFR within six months after completing the CCTA were included. In the included population, five patients (five vessels) had inadequate CTA image quality for CT-FFR calculation, one patient (one vessel) had an ostial lesion, and two patients (two vessels) underwent balloon predilatation before IVUS. In the final analysis, 35 patients (36 vessels) were included in this study (Fig. 1). The mean intravascular ultrasound-derived minimum lumen area (IVUS-MLA) was 2.83 ± 0.53 mm². The baseline lesion characteristics are listed in Table 2.
Comparison of the correlations and agreements among AccuFFRivus, AccuFFRct, and invasive-FFR

Figure 2 illustrates a visual representation of AccuFFRivus, AccuFFRct, and invasive-FFR measurements. Figure 3 presents the correlation and agreement among these measurements. The results demonstrate that both AccuFFRivus and AccuFFRct are strongly correlated with invasive FFR, with R of 0.7913 (P < 0.0001) and 0.6296 (P < 0.0001), respectively. A good agreement is demonstrated by both AccuFFRivus and AccuFFRct with invasive-FFR, with similar mean differences of − 0.0094 ± 0.061 and 0.0050 ± 0.080, respectively. Additionally, a high correlation was observed between AccuFFRivus and AccuFFRct (R = 0.7323, P < 0.0001), and moderate agreement between the two measurements, with a mean difference of − 0.0144 ± 0.069 (Fig. 3).
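As a rough illustration of this type of agreement analysis, the following Python sketch computes a Pearson correlation together with the Bland-Altman bias and 95% limits of agreement for paired virtual and invasive FFR values; the arrays are invented placeholders for demonstration, not the study data.

```python
# Hedged sketch of a correlation + Bland-Altman agreement analysis between a
# virtual FFR (e.g., an image-derived estimate) and invasive FFR.
# The paired values below are made up for illustration only.
import numpy as np
from scipy import stats

virtual_ffr = np.array([0.74, 0.81, 0.69, 0.88, 0.79, 0.92, 0.66, 0.84])
invasive_ffr = np.array([0.76, 0.80, 0.71, 0.90, 0.77, 0.93, 0.68, 0.83])

r, p = stats.pearsonr(virtual_ffr, invasive_ffr)

diff = virtual_ffr - invasive_ffr      # per-pair differences
bias = diff.mean()                     # mean difference (bias)
loa = 1.96 * diff.std(ddof=1)          # half-width of the 95% limits of agreement

print(f"Pearson R = {r:.4f} (P = {p:.4f})")
print(f"Bland-Altman bias = {bias:+.4f}, limits of agreement = "
      f"[{bias - loa:.4f}, {bias + loa:.4f}]")
```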
Diagnostic performance of AccuFFRivus and AccuFFRct for predicting FFR ≤ 0.80

Figure 4 illustrates the sensitivity, specificity, and Youden index for different cutoff values of AccuFFRct and AccuFFRivus in predicting FFR ≤ 0.80. The optimal cutoff value for AccuFFRivus and AccuFFRct to predict FFR ≤ 0.80 was 0.80 and 0.80, with a sensitivity of 96.6% and 89.7%, specificity of 85.7% and 85.7%, and a Youden index of 0.823 and 0.754, respectively. Notably, AccuFFRivus demonstrated a much better diagnostic performance in detecting ischemia-causing stenoses than AccuFFRct. Figure 5 presents the receiver operating characteristic (ROC) curves for AccuFFRivus and AccuFFRct. This good diagnostic performance indicates that an accurate assessment of coronary stenosis is feasible.
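The cutoff analysis behind Figure 4 can be sketched as follows. This hedged example scans candidate cutoffs for sensitivity, specificity and the Youden index against the FFR ≤ 0.80 reference, and estimates the AUC; the arrays are invented placeholder values rather than the study data.

```python
# Hedged sketch of a cutoff scan for a virtual FFR against invasive
# FFR <= 0.80 as the reference standard. Data are illustrative only.
import numpy as np
from sklearn.metrics import roc_auc_score

invasive_ffr = np.array([0.72, 0.85, 0.78, 0.90, 0.69, 0.83, 0.80, 0.95])
virtual_ffr = np.array([0.70, 0.87, 0.76, 0.88, 0.72, 0.86, 0.79, 0.93])

y_true = (invasive_ffr <= 0.80).astype(int)   # 1 = ischemia by reference FFR

best = None
for cutoff in np.arange(0.70, 0.91, 0.01):
    y_pred = (virtual_ffr <= cutoff).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    youden = sens + spec - 1
    if best is None or youden > best[0]:
        best = (youden, cutoff, sens, spec)

# A lower virtual FFR means "more positive", so negate it as the ROC score.
auc = roc_auc_score(y_true, -virtual_ffr)
print(f"best cutoff = {best[1]:.2f}, sens = {best[2]:.1%}, "
      f"spec = {best[3]:.1%}, Youden = {best[0]:.3f}, AUC = {auc:.3f}")
```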
Discussion
FFR was used as the reference standard in the present study to assess the diagnostic performance of AccuFFRivus and AccuFFRct. The primary study findings are summarized as follows: (a) both AccuFFRct and AccuFFRivus demonstrated strong correlations and good agreements with FFR. (b) The AUC of AccuFFRivus demonstrated better discrimination ability than that of AccuFFRct in defining the hemodynamic significance of coronary stenosis. (c) The diagnostic performance of AccuFFRivus is better than that of AccuFFRct. (d) Compared to AccuFFRct, AccuFFRivus allows for simultaneous intracoronary imaging and functional assessment of coronary artery lesions in the cardiac catheterization laboratory, showcasing its potential in the coronary anatomical and physiological evaluation of CAD. This unique capability highlights that by integrating anatomical and functional information, AccuFFRivus provides a more comprehensive assessment of CAD.

Although CTA has become an essential tool for assessing the morphological features of coronary arteries in patients with CAD, it has limitations in determining whether coronary artery stenosis is the underlying cause of myocardial ischemia [14]. To address this limitation, CT-FFR has emerged as a non-invasive method to evaluate the functional significance of coronary stenoses [15][16][17][18]. In our study, AccuFFRct incorporates three-dimensional (3D) reconstruction of coronary artery geometry and patient-specific physiological parameters, including blood pressure and heart rate [12]. When comparing the diagnostic performance of AccuFFRct to invasive-FFR, AccuFFRct demonstrated good diagnostic accuracy in assessing the functional relevance of target vessels (AUC = 0.897, accuracy 88.9%). These results were comparable to previous studies such as DISCOVER-FLOW (per-vessel AUC = 0.90, accuracy 84.3%), NXT (per-vessel AUC = 0.93, accuracy 86%) and MACHINE (AUC = 0.84, accuracy 73%). Additionally, the correlation (R = 0.6296) and diagnostic performance of AccuFFRct in the present study were similar to those reported in these previous studies. Such performance can further enhance the role of CT-FFR in streamlining the diagnostic process for CAD, providing a valuable and efficient tool for clinicians and patients alike.
For the functional assessment of coronary artery lesions, IVUS-FFR is being developed as an alternative approach to CT-FFR. Previous studies have explored the feasibility of using computational fluid dynamics (CFD) to simulate FFR and improve diagnostic accuracy. Carrizo et al. reconstructed a coronary LAD from IVUS images and used CFD to calculate the fractional flow reserve [19]. In a subsequent study, including 24 patients (34 vessels), IVUS-FFR demonstrated better accuracy, sensitivity, and specificity in detecting ischemia compared to MLA or angiography. However, CFD simulations required the target vessel reconstruction for each arterial branch and took an average of 9.1 h per study vessel [20]. Seike et al. used FFR ≤ 0.80 as the diagnostic gold standard, and a strong correlation was found between IVUS-FFR and FFR (R = 0.78), higher than that between IVUS-MLA and FFR (R = 0.43) [21]. Similarly, Wei et al. reported a strong correlation between IVUS-FFR and FFR (R = 0.87) using invasive FFR ≤ 0.80 as the gold standard, with an AUC of 0.97, which was higher than that of IVUS-MLA (0.89) [22]. No significant differences were found between IVUS-FFR and FFR, regardless of factors such as lesion location or previous history of myocardial infarction [22]. Although previous IVUS-FFR studies demonstrated good diagnostic performance for detecting myocardial ischemia, their clinical use was limited due to the time-consuming nature of CFD calculations. Recently, a new method for fast computation of FFR from the fusion of IVUS and angiographic images was developed, allowing for accurate modeling of vessel bending geometry and inlet flow. Using this technique, AccuFFRivus, the real lumen of a 3D image can be obtained, and the true angle and direction of vessels presented by 2D angiography can be analyzed [7]. Previous studies demonstrated that AccuFFRivus had better diagnostic performance (93.75%) than DS% (65.62%) and MLA < 4 mm² (53.12%). The diagnostic performance (accuracy 94.4%) and area under the curve (AUC) (0.960) were similar to those reported in previous studies. Therefore, coronary artery stenosis can be assessed using AccuFFRivus, which is a time-efficient and accurate method, and the visualized anatomic geometry of the coronary artery can be used for subsequent clinical planning and therapeutic regimens.

Although hemodynamic variables and anatomical geometry were not directly compared between CCTA and IVUS through this study, promising results were observed in the comparison of diagnostic performance between invasive FFR and IVUS-FFR or CT-FFR [23,24]. However, these investigations did not compare the diagnostic efficacy and benefits of CT-FFR and IVUS-FFR. CT-FFR is used for quick, non-invasive capture of coronary anatomical and functional information; however, it lacks the ability to identify lesion characteristics such as plaque load and high-risk plaque features [25]. IVUS-FFR, on the other hand, can assess the 3D morphology of coronary stenosis, providing lumen border and plaque features and accurately segmenting the lumen and external elastic membrane [7,22]. Additionally, stent placement can be improved, and clinical outcomes may be enhanced using IVUS-FFR. Thus, by combining functional assessment with plaque characterization, IVUS-FFR allows for comprehensive assessment of coronary stenosis.

AccuFFRct and AccuFFRivus were used in this study for hemodynamic assessment of target vessels without the need for a pressure guidewire or hyperemic agents.
The results demonstrated the high efficacy of both AccuFFRct and AccuFFRivus in diagnosing coronary artery stenosis and their potential as indexes to identify hemodynamic significance. Although both AccuFFRct and AccuFFRivus exhibited similar correlation and diagnostic performance with FFR, they serve different clinical roles. CT-FFR, known as the gatekeeper to coronary angiography, can be obtained in the outpatient setting and could reduce the number of unnecessary coronary angiographies in patients without functionally significant CAD [26][27][28][29]. However, CT-FFR cannot assess complex lesion and plaque features [25,28,29]. AccuFFRivus had slightly better diagnostic performance and the ability to identify complex lesions and plaque characteristics, thereby guiding clinical diagnosis and revascularization procedures. Moreover, AccuFFRivus only required 5 min per examination to calculate the virtual FFR. The physiological significance of coronary stenosis can be assessed immediately after IVUS image acquisition without the need for additional instrumentation. Therefore, IVUS-FFR enables easy and fast coronary anatomical and physiological evaluation of CAD in the cardiac catheterization laboratory. Additionally, stent malapposition and under-dilation can also be determined in patients with stents, which may potentially improve the accuracy of AccuFFRivus' physiological assessment in the stented segments [7,23]. Based on these advantages, IVUS-FFR provides a "one-stop" pre- and post-percutaneous coronary intervention (PCI) assessment in the catheterization laboratory, offering a novel technique to diagnose CAD based on intracoronary imaging.

However, our study had several limitations. First, it was a single-center and retrospective study with a small sample size. This might have introduced selection bias even though consecutive patients were included. The limited number of enrolled patients, due to the low rate of patients undergoing both IVUS and FFR in clinical practice, also affected the statistical efficiency of the study. Secondly, AccuFFRct and AccuFFRivus were only assessed in the main coronary arteries, excluding side branches, which may have affected the diagnostic accuracy of AccuFFRct and AccuFFRivus and disregarded the impact of collateral stenosis on myocardial ischemia. Thirdly, AccuFFRct and AccuFFRivus computation requires automatic reconstruction of 3D anatomical models of coronary vessels, and further studies should be conducted to analyze the impact of anatomical features on diagnostic accuracy in target vascular lesions. Lastly, there are some practical limitations associated with the use of AccuFFRivus and AccuFFRct in the clinical setting. AccuFFRivus requires careful attention to the projection angle and location of the target lesions in coronary angiography, and AccuFFRct has a relatively complex reconstruction process; it is recommended that calculations be performed by well-trained staff to ensure accurate results.
Conclusions
AccuFFRivus and AccuFFRct exhibit strong correlation and good agreement with invasive FFR, providing good diagnostic accuracy in detecting myocardial ischemia. IVUS-FFR has the potential to become a new clinically relevant tool for evaluating the functional significance of CAD and may emerge as a mainstream technique for lesion-specific coronary assessment besides CT-FFR.
Study population
A retrospective, single-center, observational study was conducted from January 2020 to July 2021. Consecutive patients with CAD who underwent IVUS, FFR and 2D-QCA within six months of completing CCTA were eligible for enrollment. The inclusion criteria were as follows: (1) age ≥ 18 years; (2) patients with suspected or known CAD; and (3) at least one lesion with 30-80% diameter stenosis (DS%) based on visual estimation. The exclusion criteria were as follows: (1) angiographic evidence of thrombi-containing lesions; (2) patients in which the IVUS catheter could not cross the lesion owing to tight stenosis or tortuosity; (3) severe valvular heart diseases; (4) left ventricle ejection fraction < 30%; (5) significant foreshortening or vessel overlapping; (6) previous coronary artery bypass grafting; (7) inadequate contrast flush; (8) deep catheter intubation into the lesion precluding complete visualization of stenosis; (9) severely calcified vessels; (10) balloon dilatation performed before IVUS; or (11) inconsistent image format. The study was conducted in compliance with the Declaration of Helsinki, and was approved by the Ethics Review Committee of Zhejiang Hospital. Individual informed consent was waived due to the retrospective nature of the study.
Image acquisition and data analysis
The present investigation was a retrospective, single-center, observational study performed at the Zhejiang Hospital. This study aimed to compare the diagnostic performance between AccuFFRivus and AccuFFRct in real-world clinical practice, using FFR as the reference standard. The same equipment (a Siemens Force CT scanner and a Boston Scientific/SCIMED IVUS system) was used to perform imaging for all patients. CCTA and IVUS procedures were conducted according to the latest guidelines [7,9,10,30,31]. Two senior radiology physicians, blinded to the clinical data, independently analyzed all images to ensure unbiased analysis. In cases of disagreement, a third, more experienced radiologist (an associate or chief physician) reviewed the images for the final determination. The degree of stenosis was quantitatively assessed based on the criteria for segmented coronary vessel images. An obstructive coronary artery lesion was indicated when luminal diameter stenosis was greater than 50% [29,32]. The baseline patient information, clinical data, and auxiliary examination results were collected and screened by researchers at the applicant's hospital. To calculate AccuFFRct and AccuFFRivus, the CCTA and IVUS data were subsequently analyzed at the core laboratory of ArteryFlow Technology (Hangzhou, China). The Zhejiang Hospital group then analyzed the AccuFFRivus and AccuFFRct results. To ensure accurate and high-quality data, both CTA and IVUS imaging were performed in strict adherence to standardized protocols with blinded and independent image analysis.
ICA and measurement of physiological indices
The radiographic system Allura Xper FD20/10 (PHILIPS Medical Systems, the Netherlands) was used for the angiographic imaging at a rate of 15 frames/s. The contrast medium was injected at a stable rate of approximately 4 mL/s using a pump. The 2D-QCA was performed using the Angiogram QCA software (Allura Xper FD20/10; PHILIPS Medical Systems, The Netherlands). A coronary pressure wire (St. Jude Medical, St. Paul, Minnesota, USA) was used to calculate FFR, with the pressure sensor positioned 2-3 cm distal to the target lesion of the coronary artery. Before placement, the pressure wire was calibrated and equalized, and intravenous adenosine triphosphate was infused at 150-180 µg/kg/min to induce maximal hyperemia of the coronary microvascular system. Simultaneously, the distal and proximal coronary artery pressures at the pressure sensor (Pd) and coronary ostium (Pa) were recorded. The pressure sensor was then pulled back to the proximal end to assess or correct pressure drift. The FFR was determined by dividing Pd by Pa. Further analysis was performed at the core laboratory using all ICA and FFR data. Thus, a standardized radiographic system, pressure wire, and software were used, and strict protocols were followed for data collection and analysis to ensure accuracy and reliability [33].
AccuFFRct measurements
The CT-FFR results were calculated using the latest AccuFFRct analysis software (AccuFFRct, V 1.0, ArteryFlow Technology, Hangzhou, China) and analyzed in the AccuFFRct core laboratory. The calculation process comprised the following four steps: (1) reconstruction and segmentation of a 3D model of the coronary artery and left ventricle using the CCTA image data. First, the fast marching algorithm and colliding fronts algorithm were applied to the aortic and coronary tree segmentation, and the level set method was used to identify the optimal vascular boundaries. Moreover, the Marching cubes method [34][35][36][37] was adopted to obtain the anatomical model of the coronary tree. Following this, the deep learning segmentation method based on an eight-layer residual U-Net was used to extract the left ventricular model and determine the myocardial volume [38][39][40]. (2) The coronary artery anatomical model was preprocessed, including hole inspection, smoothing, and boundary surface editing of the 3D model, and then transformed into a mesh model. A numerical CFD simulation was performed later to obtain the blood flow field.
(3) The Navier-Stokes equations were solved using the finite volume method, and flow field information was calculated, including the pressure and velocity of each cell of the mesh model. (4) The AccuFFRct value was assessed as the ratio of the distal pressure at the FFR measuring point to the mean aortic pressure. Depending on the CT image quality, the AccuFFRct analysis required about 35 min per examination [12].
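The surface-extraction part of step (1) can be illustrated with a minimal sketch using scikit-image's marching cubes implementation. The synthetic spherical volume below stands in for a real CCTA segmentation mask and is not part of the AccuFFRct software.

```python
# Hedged sketch: extract a triangulated surface from a binary "vessel"
# segmentation with the marching cubes method (here via scikit-image).
# The spherical volume is a made-up placeholder for a CCTA lumen mask.
import numpy as np
from skimage import measure

# Synthetic binary volume: a sphere standing in for a segmented lumen.
z, y, x = np.mgrid[:64, :64, :64]
volume = ((x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2 < 15 ** 2).astype(np.float32)

# Marching cubes returns vertices, triangular faces, normals and values;
# `spacing` would carry the CT voxel size (in mm) for a real scan.
verts, faces, normals, values = measure.marching_cubes(
    volume, level=0.5, spacing=(0.5, 0.5, 0.5)
)
print(f"surface mesh: {len(verts)} vertices, {len(faces)} triangles")
```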
AccuFFRivus measurements
The latest AccuFFRivus analysis software, AccuFFRivus V1.0 (ArteryFlow Technology, Hangzhou, China), was used for analysis in the AccuFFRivus core laboratory. Firstly, ECG gating and geometric parameter calibration were performed using radiographic angiography image processing [41][42][43][44][45][46], and an accurate vascular model was generated using two matched images obtained from two different projections at the end of the diastolic cardiac stage [47,48]. This calibration method can eliminate geometric errors and achieve the optimal matching of two images using three pairs of physiological points. The vascular lumen boundary of the angiographic image was detected using the Dijkstra minimum-path algorithm [34,35], and could be adjusted as required. The analysis was performed from the proximal segment points to the distal segment points, and finally, the whole vessel lumen point cloud with layered distribution was constructed. Based on the principle of minimum energy, the trajectory of the IVUS catheter in the vessel was calculated, and the position of the guide wire was extracted using the Dijkstra algorithm [49,50].
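A minimal, self-contained illustration of such a Dijkstra minimum-cost path is sketched below, computed on a hypothetical 2D cost image with SciPy's sparse-graph routines rather than with the AccuFFRivus software.

```python
# Hedged sketch of a Dijkstra minimum-cost path of the kind used for
# lumen-boundary and guidewire extraction. The cost image is invented.
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import dijkstra

cost = np.array([[1, 9, 1, 1],
                 [1, 9, 1, 9],
                 [1, 1, 1, 9],
                 [9, 9, 1, 1]], dtype=float)

h, w = cost.shape
n = h * w
graph = lil_matrix((n, n))
for r in range(h):
    for c in range(w):
        for dr, dc in ((0, 1), (1, 0)):      # 4-connected neighbours
            rr, cc = r + dr, c + dc
            if rr < h and cc < w:
                i, j = r * w + c, rr * w + cc
                wgt = (cost[r, c] + cost[rr, cc]) / 2.0
                graph[i, j] = wgt            # symmetric edge weights
                graph[j, i] = wgt

dist, pred = dijkstra(graph.tocsr(), indices=0, return_predecessors=True)

# Walk the predecessor array back from the target (bottom-right pixel).
path, node = [], n - 1
while node != -9999:                          # SciPy's "no predecessor" sentinel
    path.append((node // w, node % w))
    node = pred[node]
print("minimum-cost path:", path[::-1])
```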
End-diastolic frames were selected for IVUS image processing, and the lumen of 2D IVUS images was automatically segmented using a U-Net-based algorithm [51,52]. The IVUS and angiographic images were fused to conduct 3D coronary artery modeling. The blood flow velocity was determined from the geometric characteristics of the artery and the number of frames required for contrast to travel from the proximal to the distal end, assuming that blood flow is directly proportional to the square of the coronary artery diameter. The reference vessel diameter was determined by linear fitting of the initial reference vessel diameter slope. The pressure drop from the proximal to the distal segment comprises a friction-related viscous component and an expansion component related to the energy-loss coefficient and flow rate. The AccuFFRivus value is determined by dividing the mean distal coronary pressure by the mean proximal aortic pressure. Depending on IVUS image quality, the AccuFFRivus analysis required around 5 min per examination [7,53,54].
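The two-term pressure-loss model described above can be sketched as follows; the viscous and expansion coefficients and the flow value are hypothetical placeholders for illustration, not the parameters used by AccuFFRivus.

```python
# Hedged sketch of a two-term stenosis pressure-drop model: a viscous
# (friction) term linear in flow plus an expansion (energy-loss) term
# quadratic in flow, with FFR taken as mean distal over mean proximal
# pressure. Coefficients f_visc and s_exp are made-up placeholders.
def stenosis_ffr(pa_mmhg: float, flow_ml_s: float,
                 f_visc: float, s_exp: float) -> float:
    """Return a virtual FFR for one lesion under the two-term loss model."""
    dp = f_visc * flow_ml_s + s_exp * flow_ml_s ** 2   # total pressure drop (mmHg)
    pd = pa_mmhg - dp                                  # distal pressure (mmHg)
    return pd / pa_mmhg

# Example: aortic pressure 95 mmHg, hyperemic flow 3.0 mL/s, invented
# viscous and expansion coefficients for a moderate stenosis.
print(f"virtual FFR = {stenosis_ffr(95.0, 3.0, f_visc=2.0, s_exp=0.8):.2f}")
```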
Statistical analysis
Continuous and binary variables were presented as mean ± standard deviation (SD) and percentages, respectively. Pearson's or Spearman's correlation coefficients were used to quantify the correlations between AccuFFRivus, AccuFFRct, and invasive FFR. Bland-Altman plots were used to assess the agreements between AccuFFRivus, AccuFFRct, and invasive FFR, which displayed the differences between each pair of measurements versus their mean values with reference lines for the mean difference of all paired measurements. The agreement limits were defined as the mean ± 1.96 SD of the absolute difference. To predict functionally significant stenosis (defined as FFR ≤ 0.80), sensitivity, specificity, and the Youden index (defined as [sensitivity/100] + [specificity/100] − 1) were calculated for different cutoff values of AccuF-FRivus and AccuFFRct. To assess the area under the curve (AUC) of AccuFFRivus and AccuFFRct, receiver operating characteristic (ROC) curve analysis was performed. All statistical analyses were performed using MedCalc (version 19.0, MedCalc Software, Ostend, Belgium), and P < 0.05 was defined as statistically significant. | 2023-06-27T14:21:21.180Z | 2023-06-27T00:00:00.000 | {
"year": 2023,
"sha1": "55768d0c912cccf0f5c11a5e56f6487b4131a437",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "55768d0c912cccf0f5c11a5e56f6487b4131a437",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
254767600 | pes2o/s2orc | v3-fos-license | CTLA-4 Expression Is a Promising Biomarker of Idiopathic Pulmonary Arterial Hypertension and Allows Differentiation of the Type of Pulmonary Hypertension
Pulmonary arterial hypertension (PAH) is an increasingly frequently diagnosed disease, the molecular mechanisms of which have not been thoroughly investigated. The aim of our study was to investigate subpopulations of lymphocytes to better understand their role in the molecular pathomechanisms of various types of PAH and to find a suitable biomarker that could be useful in the differential diagnosis of PAH. Using flow cytometry, we measured the frequencies of the lymphocyte subpopulations CD4+CTLA-4+, CD8+CTLA-4+ and CD19+CTLA-4+ in patients with different types of PAH, namely pulmonary arterial hypertension associated with congenital heart disease (CHD-PAH), pulmonary arterial hypertension associated with connective tissue disorders (CTD-PAH), chronic thromboembolic pulmonary hypertension (CTEPH) and idiopathic pulmonary arterial hypertension (iPAH), and in an age- and sex-matched control group in relation to selected clinical parameters. Patients in the iPAH group had the highest percentage of CD4+CTLA-4+ T lymphocytes of all PAH groups, significantly higher than in the control group (p < 0.001) and in patients with CTEPH (p < 0.001), CTD-PAH (p < 0.001) and CHD-PAH (p < 0.01). In iPAH patients, the percentages of CD4+CTLA-4+ T cells correlated strongly positively with the severity of heart failure according to the New York Heart Association (NYHA) Functional Classification (r = 0.7077, p < 0.001). Moreover, the percentage of CD19+CTLA-4+ B cells strongly positively correlated with the concentration of NT-proBNP (r = 0.8498, p < 0.001). We have shown that statistically significantly higher percentages of CD4+CTLA-4+ (p ≤ 0.01) and CD8+CTLA-4+ (p ≤ 0.001) T cells, measured at the time of iPAH diagnosis, were found in patients who died within 5 years of the diagnosis, which allows us to consider both of the above lymphocyte subpopulations as a negative prognostic/predictive factor in iPAH. CTLA-4 may be a promising biomarker of noninvasive detection of iPAH, but its role in planning the treatment strategy of PAH remains unclear. Further studies on T and B lymphocyte subsets are needed in different types of PAH to ascertain the relationships that exist between them and the disease.
Introduction
Pulmonary arterial hypertension (PAH) is a severe clinical condition characterized by enhanced pulmonary vascular resistance (PVR), leading to increased pulmonary artery pressure (PAP) and remodeling of the pulmonary arteries [1]. If left untreated, it leads to deterioration of right ventricular function, multiorgan failure and death. Invasive hemodynamic evaluation with right heart catheterization is the gold standard to establish the diagnosis of PAH. According to a recent update, pulmonary arterial hypertension (PAH) is diagnosed when the mean pulmonary artery pressure (mPAP) is ≥20 mm Hg and the pulmonary capillary wedge pressure (PCWP) is normal (≤15 mm Hg) [2].
Consistent with the European Society of Cardiology (ESC)/European Respiratory Society (ERS) Guidelines, there are five groups of pulmonary hypertension (PH), according to clinical and pathophysiological criteria: group 1 refers to idiopathic pulmonary arterial hypertension (iPAH), as well as drug-induced PAH, connective tissue disease-related PAH and all heritable forms of PAH; group 2 includes PH secondary to left-sided heart failure; group 3 includes PH due to chronic lung disease and/or hypoxia; group 4 is called chronic thromboembolic pulmonary hypertension (CTEPH); group 5 consists of PH due to uncertain multifactorial mechanisms [3,4].
Targeted medical therapy or interventional treatment can be offered to patients diagnosed with PAH and CTEPH, respectively. The prognosis of PAH varies broadly and depends mostly on the etiology of PAH, but is also based on hemodynamic, biochemical and functional parameters that indicate the severity of right ventricular failure, as well as on response to specific treatment. Risk stratification seems to be crucial for identifying patients at high risk and for optimizing therapeutic management. Thus, biomarkers and molecules may specifically indicate the disease and provide information about the disease stage and treatment response in a relatively easily accessible and noninvasive way.
CTLA-4 (cytotoxic T cell antigen 4; CD152) molecules belong to the type I membrane receptor family and play an important role in signaling between immune cells [5]. CTLA-4 is mainly localized on the surface of activated CD4+ T cells and regulatory T cells (Treg), as well as on CD19+ B cells and dendritic cells [5,6]. The ligands of this receptor are CD80 and CD86 molecules, which are mostly seen on antigen-presenting cells. The main function of CTLA-4 is inhibitory; it is a key element in the negative regulation of the immune response, and when combined with a specific ligand, it inhibits T lymphocytes [7]. Two types of mechanism underlie this effect. The first is extracellular and involves affecting the ability of antigen-presenting cells (APCs) to stimulate T cells. The second is intracellular and involves suppression of signals sent to T cells [8]. The reproducibility of CTLA-4 measurements was shown by Grywalska et al. [9,10].
The aim of the present study was to investigate lymphocyte subpopulations and better understand their role in the molecular pathomechanisms of different types of PAH, and to find a new biomarker that could be useful and widely used in the differential diagnosis of PAH.
Results
Cytometric analysis allowed us to determine the percentage of CD19+ B cells, CD4+ T cells and CD8+ T cells with CTLA-4 receptor expression (Figure 1).

Patients in the iPAH group had, by a significant distance, the highest percentage of CD4+CTLA-4+ T lymphocytes among all PAH groups, as compared to those in the control group (p < 0.001), patients with CTEPH (p < 0.001), CTD-PAH (p < 0.001) and CHD-PAH (p < 0.01). Additionally, a higher percentage of CD4+CTLA4+ T lymphocytes was observed in CHD-PAH patients, as compared to CTD-PAH (p < 0.001) and CTEPH (p < 0.01) patients. The lowest percentage of CD4+CTLA4+ T lymphocytes was in the group of patients with CTD-PAH, which was statistically significant when compared to the control group (p < 0.01) (Table 1). The obtained relationships are presented in Figure 2.

Comparison of the percentage of CD8+CTLA4+ T lymphocytes in selected types of PAH and in the control group revealed a significantly higher percentage of these lymphocytes in the group of patients with iPAH than in the control group (p < 0.001). The obtained relationships are shown in Figure 3.

Comparison of the percentage of CD19+CTLA4+ B lymphocytes in selected types of PAH and in the control group revealed a significantly higher percentage of these lymphocytes in the iPAH group than in the CTD-PAH and CHD-PAH groups (p < 0.05). The resulting relationships are shown in Figure 4.

We have shown that statistically significantly higher percentages of CD4+CTLA-4+ (p ≤ 0.01, Figure 5) and CD8+CTLA-4+ (p ≤ 0.001, Figure 6) T cells, measured at the time of iPAH diagnosis, were found in patients who died within 5 years of the diagnosis, which allows us to consider both of the above lymphocyte subpopulations as a negative prognostic/predictive factor in iPAH.

The percentages of CD4+CTLA-4+ T cells correlated strongly positively with the severity of heart failure according to the New York Heart Association (NYHA) Functional Classification (Spearman's rank correlation r = 0.7077, p < 0.001, Figure 7).
Discussion
In this study, we analyzed lymphocyte subpopulations (CD4+, CD8+ and CD19+) and the surface antigen CTLA-4 in patients with different types of PAH: CHD-PAH, CTD-PAH, CTEPH and iPAH. Accordingly, CD4+, CD8+ and CD19+ levels have mostly been studied in patients with iPAH. There are few data on patients with CHD-PAH, CTD-PAH and CTEPH, so our study also focused on other types of PAH. CHD-PAH occurs in 5-10% of congenital heart disease (CHD) patients, mostly women [11]. CTD-PAH is most common in patients with systemic scleroderma, and its development contributes to poor disease prognosis and an increased risk of death [12]. CTEPH is a relatively rare type of PAH, possibly due to the great difficulty in diagnosis [13].
We focused on CTLA-4, which is a receptor on the surface of lymphocytes. This was because CTLA-4 controls T cell responses, and manipulation of CTLA-4 has become a cornerstone in the development of therapies for autoimmune diseases and cancer [14]. In general, CTLA-4 is a widely studied antigen for the treatment of malignancies; however, the close association of CTLA-4 blockade with the development of immune toxicity is problematic. The use of anti-CTLA4 blocking antibody has the effect of increasing Th17 cells in patients with metastatic melanoma, which enhances immune toxicity [15].
In our study, we observed a twofold increase in CD4+CTLA-4+ in patients with iPAH, but a decrease in patients with CTD-PAH and CTEPH. In CD4+ studies without CTLA-4, an increase in CD4+ T cells was reported in patients with PAH [16,17]. CD4+ T lymphocytes aggravate PAH progression, increase inflammation and exert autoimmune effects through the secretion of the cytokines IL-2, IL-4, IL-6, IL-13, IL-21, TNF-α and IFN-γ by CD4+ T cells [18]. It was reported that CTLA-4 expression levels were elevated on activated Th cells in iPAH [19]. Moreover, an increased percentage of cTfh-17 cells in the CD4+ population was observed in patients with iPAH [20]. A study by Maston et al. [21] in mouse models showed that CD4+ cells have a role in the development of hypoxia-induced PAH. Additionally, in normoxic and CH mice, Th17 cells were present, along with increased levels of pro-inflammatory IL-6. This suggests that T cells have a role in PAH induction [21]. In CD4+ T cells cultured in the presence of monocyte-derived DCs (MoDCs) from patients with PAH, reduced expression of IL-4 (Th2 response), higher levels of IL-17 (Th17 response) and increased activation and proliferation of CD4+ T cells were observed, as compared with CD4+ T cells cultured with MoDCs from control patients [22]. In the current literature, an increase in Treg levels in iPAH patients has been reported [23,24].
In our study, we observed a twofold increase in CD4+CTLA-4+ in patients with iPAH, but a decrease in patients with CTD-PAH and CTEPH. In CD4+ studies without CTLA-4, an increase in CD4+ T cells was reported in patients with PAH [16,17]. CD4+ T lymphocytes aggravate PAH progression, increase inflammation and exert autoimmune effects through the secretion of cytokines IL-2, IL-4, IL-6, IL-13, IL-21, TNF-α and IFN-γ by CD4+ T cells [18]. It was reported that CTLA-4 expression levels were elevated on activated Th cells in iPAH [19]. Moreover, an increased percentage of cTfh-17 cells in the CD4+ population was observed in patients with iPAH [20]. A study by Maston et al. [21] in mouse models showed that CD4+ cells have a role in the development of hypoxia induced by PAH. Additionally, in normoxic and CH mice, Th17 was present in the cells, along with increased levels of pro-inflammatory IL-6. This suggests that T cells have a role in PAH induction [21]. In CD4+ T cells cultured in the presence of monocyte-derived DCs (MoDCs) from patients with PAH, reduced expression of IL-4 (Th 2 response) and higher levels of IL-17 (Th17 response) and increased activation and proliferation of CD4+ T cells were observed, as compared with CD4+ T cells cultured with MoDCs from control patients [22]. In current literature, an increase in Treg levels in iPAH patients has been reported [23,24]. Sada et al. [25] concluded in a study of Treg cells in iPAH patients that CTLA-4 expression levels in the immunosuppressive CD 4 CD 45 RA+-FoxP3 high aTregs (aTregs) and CD 4 CD 45 RA+-FoxP3 low non-Tregs (non-Tregs) subgroups were higher than those in control patients; however, the level of aTregs subgroup in iPAH patients did not change when compared with healthy patients, and the level of non-Tregs subgroup was higher than in healthy patients [25]. In addition, Tm levels were increased in iPAH patients [24]. Our data show differences in CD4+CTLA-4+ levels in patients with iPAH versus CTD-PAH and CTEPH. This information sheds new light on previous studies. Because decreased CD4+CTLA-4+ levels in patients with CTD-PAH and CTEPH may correlate with the development of immune toxicity and, therefore, a severe disease course, we suggest further studies of the CD4+CTLA+ group as divided into CD4+ Th, Treg and Tm cells. Such work may provide the information needed to understand the mechanisms involved in CD4+ in patients with PAH.
In this study, we observed an increase in CD8+CTLA4+ T lymphocytes. We thus conclude that CD8+ T lymphocytes aggravate PAH progression, increase inflammation and exert autoimmune effects, albeit through strong cytolytic activity [26]. However, in a study by Hautefort et al. [22], no changes in CD8+ counts were found in patients with PAH. Still, an increase in CD8+ levels in patients with iPAH was reported [16,27], and Ulrich et al. [23] reported a decrease in CD8+ levels. In contrast, the percentage of CD8+ T cells was much higher than that of other T cells, and it seems that the inflammatory infiltrate in PAH consisted mainly of CD8+ cells [17,28]. A role for Tc in autoimmunity in PAH has been suggested based on information gleaned from tumor studies [23].
An absence of changes in CD19+ B lymphocyte levels in patients with PAH has been reported [20,22]. In our study, however, we observed an increase in CD19+CTLA-4+ cells in patients with iPAH, and a decrease in patients with CHD-PAH and CTD-PAH. The elevated levels of CTLA-4 found on B lymphocytes are an interesting observation because, under homeostatic conditions, CTLA-4 is not detectable on B lymphocytes [29]. Since CTLA-4 can appear on the surface of B lymphocytes as a result of activation by T lymphocytes [30], the elevated levels of CD19+CTLA-4+ may be a response to enhanced levels of CD4+CTLA-4+ and CD8+CTLA-4+ [31].
Limitations of the Study
The main limitation of this study is the small study group. Enrollment was difficult because PAH is a rare disease and we only selected newly diagnosed PAH patients using strict inclusion criteria, such as no infection in the three months prior to the study, no immunomodulatory treatment, and no allergy. Only 25 iPAH patients fulfilled these criteria. A larger study group might therefore reveal additional statistically significant correlations or differences between PAH patients and healthy controls.
Material and Methods
The study was conducted on 70 patients with PAH (50 women and 20 men). The diagnosis of PAH was based on the ESC/ERS Guidelines [32]. The mean age of the patients was 57.74 ± 17.17 years (median: 60 years, minimum: 23 years, maximum: 81 years). Patients were classified by type of pulmonary arterial hypertension into chronic thromboembolic pulmonary hypertension (CTEPH) (10 patients, 7 women), PAH associated with congenital heart disease (CHD-PAH) (26 patients, 19 women), PAH associated with systemic connective tissue disease (CTD-PAH) (9 patients, 9 women), and idiopathic pulmonary arterial hypertension (iPAH) (25 patients, 15 women). Heritable PAH patients were not included in this study. In patients with PAH, the WHO functional class of heart failure was established. The basic clinical and laboratory parameters characterizing patients with the selected types of PAH and persons from the control group are described in Table 2. The basic hemodynamic parameters assessed during cardiac catheterization and echocardiography in patients with CHD-PAH, CTD-PAH, CTEPH and iPAH are delineated in Table 3. The study was conducted in subjects who showed no signs of infection or allergy and had not received immunosuppressive treatment or a blood transfusion in the 3 months prior to the study.
The control group consisted of 20 subjects (12 women and 8 men) aged 58.1 ± 11.1 years (median: 56 years; minimum: 39 years; maximum: 77 years). Only subjects with no history of cardiovascular disease, no history of treatment with agents affecting the immune system, no history of infection, no history of autoimmune disease, no history of allergy and no history of blood transfusion were selected as volunteers.
The protocol of the study received a positive opinion of the Bioethics Committee at the Medical University of Lublin (number KE-0254/309/2016). The study material was peripheral blood, collected from patients with pulmonary arterial hypertension and from the control group. Accordingly, 10 mL of blood was collected into tubes containing EDTA via an aspiration-vacuum system (Sarstedt, Germany). The collected blood was immediately processed to obtain plasma, to evaluate the lymphocyte immunophenotype, and to isolate peripheral blood mononuclear cells (PBMCs).
Cytometric Analysis
Cytometric analysis was performed using CellQuest software (Becton Dickinson, Franklin Lakes, NJ, USA). The employment of a FACSCalibur flow cytometer (Becton Dickinson, USA) equipped with an argon laser (wavelength 488 nm) allowed for the reading of the following parameters: FSC, SSC, FL-1 (green fluorescence intensity), FL-2 (orange fluorescence intensity) and FL-3 (red fluorescence intensity). Herein, fluorescence intensity is dependent on antigen binding by monoclonal antibodies labeled with the appropriate fluorochromes.
Lymphocyte subpopulation and surface antigen analysis was performed with 20,000 cells counted from the lymphocyte gate (R1 region). The correct position of the gate was confirmed by using antibodies directed to CD45 and CD14 antigens. The result of the cytometric analysis was presented as the percentage of cells positively stained with the respective monoclonal antibodies.
To assess the presence of peripheral blood lymphocyte surface antigens, 20 µL of the appropriate monoclonal antibodies was dispensed into tubes. Subsequently, 50 µL of whole blood was added to each tube and the monoclonal antibodies were incubated with the whole blood for 20 min at room temperature. Table 4 lists the monoclonal antibodies used for labeling and the fluorochromes to which they were conjugated.
Table 4. List of antibodies used to assess lymphocyte immunophenotype.
Statistical Analysis
Descriptive characteristics of continuous variables were presented as arithmetic mean, standard deviation (SD), minimum value, maximum value and median. Intergroup comparisons were performed using analysis of variance (ANOVA) with Duncan's or Games-Howell post-hoc tests, depending on verification of the assumptions of the analysis of variance, or the Kruskal-Wallis test with Dunn's post-hoc test. Comparison of mean values of independent variables, depending on whether the criteria of normality of distributions and equality of variances were met, was performed using Student's t-test for independent samples.
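A minimal sketch of this decision logic in Python (SciPy), using illustrative data rather than the study's measurements; Duncan's, Games-Howell and Dunn's post-hoc tests are not part of SciPy (packages such as scikit-posthocs provide them), so only the omnibus tests are shown:

```python
import numpy as np
from scipy import stats

def compare_groups(groups, alpha=0.05):
    """Choose the omnibus test based on normality and variance checks."""
    normal = all(stats.shapiro(g)[1] > alpha for g in groups)
    equal_var = stats.levene(*groups)[1] > alpha
    if normal:
        # Parametric route: one-way ANOVA; Duncan's (equal variances) or
        # Games-Howell (unequal variances) post-hoc tests would follow.
        stat, p = stats.f_oneway(*groups)
        return "ANOVA", stat, p, equal_var
    # Non-parametric route: Kruskal-Wallis, followed by Dunn's post-hoc test.
    stat, p = stats.kruskal(*groups)
    return "Kruskal-Wallis", stat, p, equal_var

rng = np.random.default_rng(0)
example = [rng.normal(m, 1.0, size=25) for m in (0.0, 0.3, 0.8)]  # placeholder groups
print(compare_groups(example))
```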
Conclusions
Patients in the iPAH group had the highest percentage of CD4+CTLA-4+ T lymphocytes among all PAH groups, significantly higher than in the control group and in patients with CTEPH, CTD-PAH and CHD-PAH. In iPAH patients, the percentage of CD4+CTLA-4+ T cells correlated strongly and positively with the severity of heart failure according to the New York Heart Association (NYHA) Functional Classification. Moreover, the percentage of CD19+CTLA-4+ B cells correlated strongly and positively with the concentration of NT-proBNP. We have shown that statistically significantly higher percentages of CD4+CTLA-4+ and CD8+CTLA-4+ T cells, measured at the time of iPAH diagnosis, were found in patients who died within 5 years of the diagnosis, which allows us to consider both of the above lymphocyte subpopulations as negative prognostic/predictive factors in iPAH. CTLA-4 may be a promising biomarker for the noninvasive detection of iPAH, but its role in planning the treatment strategy of PAH remains unclear. Further studies on T and B lymphocyte subsets are needed in different types of PAH to ascertain the relationships that exist between them and the disease.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement: Due to privacy and ethical concerns, the data that support the findings of this study are available on request from the First Author, (M.T.). | 2022-12-17T16:12:06.279Z | 2022-12-01T00:00:00.000 | {
"year": 2022,
"sha1": "31e0ab1138aac10f1e3d6bffd41fb43f234ab2b9",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/23/24/15910/pdf?version=1671017444",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ec2251415e853508f64c2fc57c4dc43c838d8abe",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
204938170 | pes2o/s2orc | v3-fos-license | Full-density multi-scale account of structure and dynamics of macaque visual cortex
We present a multi-scale spiking network model of all vision-related areas of macaque cortex that represents each area by a full-scale microcircuit with area-specific architecture. The layer- and population-resolved network connectivity integrates axonal tracing data from the CoCoMac database with recent quantitative tracing data, and is systematically refined using dynamical constraints. Simulations reveal a stable asynchronous irregular ground state with heterogeneous activity across areas, layers, and populations. Elicited by large-scale interactions, the model reproduces longer intrinsic time scales in higher compared to early visual areas. Activity propagates down the visual hierarchy, similar to experimental results associated with visual imagery. Cortico-cortical interaction patterns agree well with fMRI resting-state functional connectivity. The model bridges the gap between local and large-scale accounts of cortex, and clarifies how the detailed connectivity of cortex shapes its dynamics on multiple scales.
Introduction
Cortical activity has distinct but interdependent features on local and global scales, molded by connectivity on each scale. Globally, resting-state activity has characteristic patterns of correlations (Vincent et al., 2007;Shen et al., 2012) and propagation (Mitra et al., 2014) between areas. Locally, neurons spike with time scales that tend to increase from sensory to prefrontal areas (Murray et al., 2014) in a manner influenced by both short-range and long-range connectivity (Chaudhuri et al., 2015). We present a full-density multi-scale spiking network model in which these features arise naturally from its detailed structure.
Models of cortex have hitherto used two basic approaches. The first models each neuron explicitly in networks ranging from local microcircuits to small numbers of connected areas (Hill & Tononi, 2005;Haeusler et al., 2009).
The second represents the large-scale dynamics of cortex by simplifying the ensemble dynamics of areas or populations to few differential equations, such as Wilson-Cowan or Kuramoto oscillators (Deco et al., 2009;Cabral et al., 2011).
These models can for instance reproduce resting-state oscillations at ∼ 0.1 Hz. Chaudhuri et al. (2015) developed a mean-field multi-area model with a hierarchy of intrinsic time scales in the population firing rates, relying on a gradient of excitation across areas.
Cortical processing is not restricted to one or few areas, but results from complex interactions between many areas involving feedforward and feedback processes (Lamme et al., 1998;Rao & Ballard, 1999). At the same time, the high degree of connectivity within areas (Angelucci et al., 2002a;Markov et al., 2011) hints at the importance of local processing. Capturing both aspects requires multi-scale models that combine the detailed features of local microcircuits with realistic inter-area connectivity. Another advantage of multi-scale modeling is that it enables testing the equivalence between population models and models at cellular resolution instead of assuming it a priori.
Two main obstacles of multi-scale simulations are now gradually being overcome. First, such simulations require large resources on high-performance clusters or supercomputers and simulation technology that uses these resources efficiently. Recently, important technological progress has been achieved for the NEST simulator (Kunkel et al., 2014). Second, gaps in anatomical knowledge have prevented the consistent definition of multi-area models. Recent developments in the CoCoMac database (Bakker et al., 2012) and quantitative axonal tracing (Markov et al., 2014a,b) have systematized connectivity data for macaque cortex. However, it remains necessary to use statistical regularities such as relationships between architectural differentiation and connectivity (Barbas, 1986;Barbas & Rempel-Clower, 1997) to fully specify large cortical network models. Because of these difficulties, few large-scale spiking network models have been simulated to date, and existing ones heavily downscale the number of synapses per neuron (Izhikevich & Edelman, 2008;Preissl et al., 2012), generally affecting network dynamics.
We here use realistic numbers of synapses per neuron, building on a recent model of a 1 mm² cortical microcircuit with ∼ 10⁵ neurons (Potjans & Diesmann, 2014). This is the smallest network size where the majority of inputs per neuron (∼ 10,000) is self-consistently represented at realistic connectivity (∼ 10%). Nonetheless, a substantial fraction of synapses originates outside the microcircuit and is replaced by stochastic input. Our model reduces random input by including all vision-related areas.
The model combines simple single-neuron dynamics with complex connectivity and thereby allows us to study the influence of the connectivity itself on the network dynamics. The connectivity map customizes that of the microcircuit model to each area based on its architecture and adds inter-areal connections. By a mean-field method (Schuecker et al., 2015), we refine the connectivity to fulfill the basic dynamical constraint of nonzero and non-saturated activity.
The ground state of cortex features asynchronous irregular spiking with low pairwise correlations (Ecker et al., 2010) and low spike rates (∼ 0.1 − 30 spikes/s) with inhibitory cells spiking faster than excitatory ones (Swadlow, 1988). Our model reproduces each of these phenomena, bridging the gap between local and global brain models, and relating the complex structure of cortex to its spiking dynamics.
Results
The model comprises 32 areas of macaque cortex involved in visual processing in the parcellation of Felleman & Van Essen (1991), henceforth referred to as FV91 (Table S1). Each area contains an excitatory and an inhibitory population in each of the layers 2/3, 4, 5 and 6 (L2/3, L4, L5, L6), except area TH, which lacks L4. The model, summarized in Table 1, represents each area by a 1 mm² patch.
Area-specific laminar compositions
Neuronal volume densities provided in a different parcellation scheme are mapped to the FV91 scheme and partly estimated using the average density of each layer across areas of the same architectural type ( Figure 1A). Architectural types (Table 4 of Hilgetag et al., 2015) reflect the distinctiveness of the lamination as well as L4 thickness, with agranular cortices having the lowest and V1 the highest value. Neuron density increases with architectural type.
When referring to architectural types, we also use the term 'structural hierarchy'. We call areas like V1 and V2 at the bottom of the structural (or processing) hierarchy 'early', and those near the top 'higher' areas.
We find total cortical thicknesses of 14 areas to decrease with logarithmized overall neuron densities, enabling us to estimate the total thicknesses of the other 18 areas ( Figure 1B). Quantitative data from the literature combined with our own estimates from published micrographs (Table S5) determine laminar thicknesses ( Figure 1C). L4 thickness relative to total cortical thickness increases with the logarithm of overall neuron density, which predicts relative L4 thickness for areas with missing data. Since the relative thicknesses of the other layers show no notable change with architectural type, we fill in missing values using the mean of the known data for these quantities and then normalize the sum of the relative thicknesses to 1. Layer thicknesses then follow from relative thickness times total thickness (see Table S6).
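The fill-in procedure amounts to simple linear regression on logarithmized neuron densities; a sketch with placeholder numbers rather than the model's actual data set:

```python
import numpy as np

# Known areas: total thickness (mm) vs. logarithmized neuron density (hypothetical values).
log_rho = np.log10([4.8e4, 7.2e4, 1.1e5, 1.6e5, 2.1e5])   # neurons/mm^3
thickness = np.array([2.4, 2.2, 1.9, 1.8, 1.6])           # mm, measured areas

slope, intercept = np.polyfit(log_rho, thickness, deg=1)   # linear least squares

# Predict the missing total thickness of an area from its known density.
rho_missing = 9.0e4
t_pred = slope * np.log10(rho_missing) + intercept
print(f"predicted total thickness: {t_pred:.2f} mm")
```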
Finally, for lack of more specific data, the proportions of excitatory and inhibitory neurons in each layer are taken from cat V1 (Binzegger et al., 2004). Multiplying these with the laminar thicknesses and neuron densities yields the population sizes (see Experimental procedures).
Each neuron receives synapses of four different origins ( Figure 1D). In the following, we describe how the counts for these synapse types are computed (details in Experimental procedures).
Scalable scheme of local connectivity
We assume constant synaptic volume density across areas (Harrison et al., 2002). Experimental values for the average indegree in monkey visual cortex vary between 2,300 (O'Kusky & Colonnier, 1982) and 5,600 (Cragg, 1967) synapses per neuron. We take the average (3,950) as representative for V1, resulting in a synaptic density of 8.3 · 10⁸ synapses/mm³. The microcircuit model of Potjans & Diesmann (2014) serves as a prototype for all areas. The indegrees are a defining characteristic of this local circuit, as they govern the mean synaptic currents. We thus preserve their relative values when customizing the microcircuit to area-specific neuron densities and laminar thicknesses. The connectivity between populations is spatially uniform. The connection probability averages an underlying Gaussian connection profile over a disk with the surface area of the simulated area, separating simulated local synapses (type I) formed within the disk from non-simulated local synapses (type II) from outside the disk (Figure 1D, E). In retrograde tracing experiments, Markov et al. (2011) found the fraction of labeled neurons intrinsic to each injected area (FLN_i) to be approximately constant, with a mean of 0.79. We translate this to numbers of synapses by assuming that the proportion of type I synapses is 0.79 for realistic area size. For the 1 mm² model areas, we obtain an average proportion of type I synapses of 0.504.
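As a back-of-the-envelope check, the quoted synaptic density follows from the indegree and a neuron volume density; the density value below is an illustrative assumption chosen to be consistent with the stated figure, not a value from the paper:

```python
indegree_v1 = (2300 + 5600) / 2   # = 3950 synapses per neuron (mean of reported values)
rho_v1 = 2.1e5                    # neurons/mm^3, assumed for illustration
synapse_density = indegree_v1 * rho_v1
print(f"{synapse_density:.1e} synapses/mm^3")  # ~8.3e8, matching the text
```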
Layer-specific heterogeneous cortico-cortical connectivity
We treat all cortico-cortical connections as originating and terminating in the 1 mm 2 patches, ignoring their spatial divergence and convergence. Two areas are connected if the connection is in CoCoMac ( Figure 1F) or reported by Markov et al. (2014a). For the latter we assume that the average number of synapses per labeled neuron is constant across projecting areas ( Figure 1G). To estimate missing values, we exploit the exponential decay of connectivity with distance (Ercsey-Ravasz et al., 2013). We first map the data from its native parcellation scheme (M132) to the FV91 scheme (see Experimental procedures) and then perform a least-squares fit ( Figure 1H). Combining the binary information on the existence of connections with the connection densities gives the area-level connectivity matrix ( Figure 1I).
Next, we distribute synapses between the populations of each pair of areas (Figure 1K). The pattern of source layers is based on CoCoMac, if laminar data are available. Fractions of supragranular labeled neurons (SLN) from retrograde tracing experiments yield proportions of projecting neurons in supra- and infragranular layers (Markov et al., 2014b). To predict missing values, we exploit a sigmoidal relation between the logarithmized ratios of cell densities of the participating areas and the SLN of their connection (as suggested by Beul et al. 2015; Figure 1J). Following Markov et al. (2014b), we use a generalized linear model for the fit and assume a beta-binomial distribution of source neurons. Since Markov et al. (2014b) do not distinguish infragranular layers further into L5 and L6, we use the more detailed laminar patterns from CoCoMac for this purpose, if available. We exclude L4 from the source patterns, in line with anatomical observations (Felleman & Van Essen, 1991), and approximate cortico-cortical connections as purely excitatory (Salin & Bullier, 1995;Tomioka & Rockland, 2007).
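A hedged sketch of such a fit in Python: the paper performs a beta-binomial GLM in R (aod::betabin); here a plain binomial GLM with probit link from statsmodels stands in as a simplified approximation, with placeholder counts:

```python
import numpy as np
import statsmodels.api as sm

log_density_ratio = np.array([-0.6, -0.2, 0.1, 0.4, 0.8])  # hypothetical area pairs
n_supra = np.array([12, 30, 55, 78, 95])   # labeled neurons in supragranular layers
n_infra = np.array([88, 70, 45, 22, 5])    # labeled neurons in infragranular layers

X = sm.add_constant(log_density_ratio)
endog = np.column_stack([n_supra, n_infra])  # (successes, failures) per area pair
# Note: the link class is named `probit` (lowercase) in older statsmodels versions.
glm = sm.GLM(endog, X, family=sm.families.Binomial(link=sm.families.links.Probit()))
res = glm.fit()
a0, a1 = res.params
print(f"SLN_pred = Phi({a0:.2f} + {a1:.2f} * log density ratio)")
```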
We base termination patterns on anterograde tracing studies collected in CoCoMac, if available, or on a relationship between source and target patterns (see Experimental procedures). Since neurons can receive synapses in different layers on their dendritic branches, we use laminar profiles of reconstructed cell morphologies (Binzegger et al., 2004) to relate synapse to cell-body locations. Despite the use of a point neuron model, we thus take into account the layer specificity of synapses on the single-cell level. In contrast to laminar synapse distributions, the resulting laminar distributions of target cell bodies are not highly distinct between feedforward and feedback projections.
Brain embedding
Inputs from outside the scope of our model, i.e., white-matter inputs from non-cortical or non-visual cortical areas and gray-matter inputs from outside the 1 mm 2 patch, are represented by Poisson spike trains. Corresponding numbers of synapses are not available for all areas, and laminar patterns of external inputs differ between target areas (Felleman & Van Essen, 1991;Markov et al., 2014b). Therefore, we determine the total number of external synapses onto an area as the total number of synapses minus those of type I and III, and distribute them with equal indegree for all populations.
Refinement of connectivity by dynamical constraints
Parameter scans based on mean-field theory (Schuecker et al., 2015) and simulations reveal a bistable activity landscape with two coexisting stable fixed points. The first has reasonable firing rates except for populations 5E and 6E, which are nearly silent (Figure 2A), while the second has excessive rates ( Figure 2B) in almost all populations.
Depending on the parameter configuration, either the low-activity fixed point has a sufficiently large basin of attraction for the simulated activity to remain near it, or fluctuations drive the network to the high-activity fixed point. To counter this shortcoming, we define an additional parameter κ which increases the external drive onto 5E by a factor κ = K_ext,5E / K_ext compared to the external drive of the other cell types. Since the rates in population 6E are even lower, we increase the external drive to 6E by a slightly larger factor than that to 5E. When applied directly to the model, even a small increase in κ already drives the network into the undesired high-activity state (Figure 2B). Using the stabilization procedure described in Schuecker et al. (2015), we derive targeted modifications of the connectivity within the margins of uncertainty of the anatomical data, with an average relative change in total indegrees (summed over source populations) of 11.3% (Figure S1B). This allows us to increase κ while retaining the global stability of the low-activity state. In the following, we choose κ = 1.125, which gives K_ext,6E / K_ext = 1.417 and the external inputs listed in Table S11, and g = −11, ν_ext = 10 spikes/s, yielding reasonable firing rates in populations 5E and 6E (Figure 2C). In total, the 4.13 million neurons of the model are interconnected via 2.42 · 10¹⁰ synapses.
The stabilization renders the intrinsic connectivity of the areas more heterogeneous. Cortico-cortical connection densities similarly undergo small changes, but with a notable reduction in the mutual connectivity between areas 46 and FEF. For more details on the connectivity changes, see Schuecker et al. (2015).
Community structure of anatomy relates to functional organization
We test if the stabilized network retains known organizing principles by analyzing the community structure in the weighted and directed graph of area-level connectivity. The map equation method (Rosvall et al., 2010) reveals 6 clusters (Figure 3). We test the significance of the corresponding modularity Q = 0.32 by comparing with 1000 surrogate networks conserving the total outdegree of each area by shuffling its targets. This yields Q = −0.02 ± 0.03, indicating the significance of our clustering. The community structure reflects anatomical and functional properties of the areas. Two large clusters comprise ventral and dorsal stream areas, respectively. Ventral area VOT is grouped with early visual area VP. Early sensory areas V1 and V2 form a separate cluster, as do parahippocampal areas TH and TF. The two frontal areas FEF and 46 form the last cluster. Nonetheless, the clusters are heavily interconnected (Figure 3). The basic separation into ventral and dorsal clusters matches that found in the connectivity matrix, but there are also important differences. For instance, our clustering groups areas STPa, STPp, and 7a with the dorsal instead of the ventral stream, better matching the scheme described by Nassi & Callaway (2009).
[Figure 1 caption: Construction principles of the model. (A) Laminar neuron densities for the architectural types in the model; type 2, here corresponding only to area TH, lacks L4; in the model, L1 contains synapses but no neurons; data provided by H. Barbas and C. Hilgetag (personal communication), linearly scaled up to account for undersampling of cells by NeuN staining relative to Nissl staining as determined by repeat measurements of 11 areas. (B) Total thickness vs. logarithmized overall neuron density with linear least-squares fit (r = −0.7, p = 0.005). (C) Relative laminar thickness (see Table S5) vs. logarithmized neuron density.]
[Figure 2 caption fragment: panel (C) shows g = −11, ν_ext = 10 spikes/s, κ = 1.125 with the modified connectivity matrix; the color bar holds for all three panels; areas are ordered by architectural type along the horizontal axis from V1 (type 8) to TH (type 2), with populations stacked vertically; the two missing populations 4E and 4I of area TH are marked in black and firing rates < 10⁻² Hz in gray; bottom row: histogram of population-averaged firing rates for excitatory (red) and inhibitory (blue) populations, with the horizontal axis split into linear- (left) and log-scaled (right) ranges.]
Area-and population-specific activity in the resting state
The model with cortico-cortical synaptic weights equal to local weights displays a reasonable ground state of activity but no substantial inter-area interactions (Figure S2). To control these interactions, we scale the cortico-cortical synaptic weights w_cc onto excitatory neurons by a factor λ = J^E_cc / J and provide balance by increasing the weights J^I_cc onto inhibitory neurons by twice this factor, J^I_cc = λ_I λJ = 2λJ. In the following, we choose λ = 1.9. Simulations yield irregular activity with plausible firing rates (Figure 4A-C). Irregularly occurring population bursts of different lengths up to several seconds arise from the asynchronous baseline activity (Figure 4G) and propagate across the network.
The firing rates differ across areas and layers and are generally low in L2/3 and L6 and higher in L4 and L5, partly due to the cortico-cortical interactions (Figure 4D). The overall average rate is 14.6 spikes/s. Inhibitory populations are generally more active than excitatory ones across layers and areas despite the identical intrinsic properties of the two cell types. However, the strong participation of L5E neurons in the cortico-cortical interaction bursts causes these to fire more rapidly than L5I neurons. Pairwise correlations are low throughout the network (Figure 4E). Excitatory neurons are more synchronized than inhibitory cells in the same layer, except for L6. Spiking irregularity is close to that of a Poisson process across areas and populations, with excitatory neurons consistently firing more irregularly than inhibitory cells (Figure 4F). Higher areas exhibit bursty spiking, as illustrated by the raster plot for area FEF (Figure 4C).
[Figure 3 caption: Community structure of the model. Clusters in the connectivity graph, indicated by node color: early visual areas (green), dorsal stream areas (red), areas VP and VOT (light blue), ventral stream (dark blue), parahippocampal areas (brown), and frontal areas (purple). Black, connections within clusters; gray, connections between clusters. Line thickness encodes logarithmized outdegrees; only edges with relative outdegree > 10⁻³ are shown.]
[Figure 5 caption fragment: average time scale per architectural type indicated by triangles and the overall trend by a black curve; area MDP (architectural type 5) has a time scale of 2 ms because it is uncoupled from the other areas due to its lack of incoming connections.]
Intrinsic time scales increase with structural hierarchy
We tested whether the model accounts for the hierarchical trend in intrinsic time scales observed in macaque cortex (Murray et al., 2014). Indeed, the autocorrelation width in the model increases from early visual to higher areas. In early visual areas including V1, the autocorrelation decays with τ < 2.5 ms, indicating near-Poissonian spiking (Figure 5A).
In higher areas, autocorrelations are broader, with decay times ∼ 10² ms. The long time scales reflect bursty spike patterns of single-neuron activity (Figure 4), caused by the low neuron density in higher areas and the correspondingly high indegrees. Consistent with Murray et al. (2014), we find the time scale of area LIP to exceed that of MT, albeit by a small amount.
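The intrinsic time scale can be estimated as in the sketch below: bin a spike train, compute its autocorrelation, and fit an exponential decay. This is an illustrative implementation, not the paper's exact analysis code:

```python
import numpy as np
from scipy.optimize import curve_fit

def intrinsic_timescale(spike_times, t_max, bin_ms=1.0, lag_ms=100.0):
    """Fit exp(-t/tau) to the autocorrelation of a binned spike train; tau in ms."""
    bins = np.arange(0.0, t_max + bin_ms, bin_ms)
    counts = np.histogram(spike_times, bins=bins)[0].astype(float)
    counts -= counts.mean()                        # remove the mean rate
    ac = np.correlate(counts, counts, mode="full")[counts.size - 1:]
    n_lags = int(lag_ms / bin_ms)
    lags = np.arange(n_lags) * bin_ms
    ac = ac[:n_lags] / ac[0]                       # normalize so AC(0) = 1
    (tau,), _ = curve_fit(lambda t, tau: np.exp(-t / tau), lags, ac, p0=[10.0])
    return tau
```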
Structural and hierarchical directionality of spontaneous activity
To investigate inter-area propagation, we determine the temporal order of spiking ( Figure 6A) based on the correlation between areas. We detect the location of the extremum of the correlation function for each pair of areas ( Figure 6B) and collect the corresponding time lags in a matrix ( Figure 6C). In analogy to structural hierarchies based on pairwise connection patterns (Reid et al., 2009), we look for a temporal hierarchy that best reflects the order of activations for all pairs of areas (see Experimental procedures). The result ( Figure 6D) places parietal and temporal areas at the beginning and early visual as well as frontal areas at the end. The first and second halves of the time series yield qualitatively identical results ( Figure S3). Figure 6E shows the consistency of the hierarchy with the pairwise lags. To quantify the goodness of the hierarchy, we counted the pairs of areas for which it indicates a wrong ordering. The number of such violations is 190 out of 496, well below the 230 ± 12 (SD) violations obtained for 100 surrogate matrices, created by shuffling the entries of the original matrix while preserving its antisymmetric character.
This indicates that the simulated temporal hierarchy reflects nonrandom patterns. The propagation is mostly in the feedback direction not only in terms of the structural hierarchy, but also spatially: activity starts in parietal regions, and spreads to the temporal and occipital lobes ( Figure 6F). However, activity troughs in frontal areas follow peaks in occipital activity and thus appear last.
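One plausible way to extract such a hierarchy, sketched below, is to find area levels h_i that best reproduce the antisymmetric lag matrix in the least-squares sense; this is an illustrative method, not necessarily the exact procedure of the Experimental procedures:

```python
import numpy as np

def temporal_hierarchy(lags):
    """Least-squares levels h with h[j] - h[i] ~ lags[i, j] (antisymmetric input)."""
    n = lags.shape[0]
    rows, cols = np.triu_indices(n, k=1)
    A = np.zeros((rows.size, n))
    A[np.arange(rows.size), cols] = 1.0    # +h_j
    A[np.arange(rows.size), rows] = -1.0   # -h_i
    b = lags[rows, cols]
    h, *_ = np.linalg.lstsq(A, b, rcond=None)
    h -= h.min()
    return h / h.max()                      # normalize levels to [0, 1]

def count_violations(lags, h):
    """Count area pairs whose lag sign disagrees with the hierarchy ordering."""
    rows, cols = np.triu_indices(lags.shape[0], k=1)
    return int(np.sum(np.sign(lags[rows, cols]) != np.sign(h[cols] - h[rows])))
```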
Emerging interactions mimic experimental functional connectivity
We compute the area-level functional connectivity (FC) based on the synaptic input current to each area, which has been shown to be more comparable to the BOLD fMRI signal than the spiking output (Logothetis et al., 2001). The FC matrix exhibits a rich structure, similar to experimental resting-state fMRI (Figure 7A, B; see Experimental procedures for details). In the simulation, frontal areas 46 and FEF are more weakly coupled to the rest of the network, but the anticorrelation with V1 is well captured by the model (Figure S4). Moreover, area MDP sends connections to, but does not receive connections from, other areas according to CoCoMac, limiting its functional coupling to the network.
Louvain clustering (Blondel et al., 2008), an algorithm optimizing the modularity of the weighted, undirected FC graph (Newman, 2004), yields two modules for both the simulated and the experimental data. The modules from the simulation differ from those of the structural connectivity and reflect the temporal hierarchy shown in Figure 6C. Cluster 1S merges early visual with ventral and two dorsal regions with average level in the temporal hierarchy of h = 0.47 ± 0.13 (SD). Cluster 2S contains mostly temporally earlier areas (h = 0.33 ± 0.25 (SD)) merging parahippocampal with dorsal but also frontal areas. The experimental module 2E comprises only dorsal areas, while 1E consists of all other areas including also eight dorsal areas.
The structural connectivity of our model shows higher correlation with the experimental FC (r_Pearson = 0.34) than the binary connectivity matrices from both a previous (Shen et al., 2015) and the most recent release of CoCoMac (r_Pearson = 0.20), further validating our weighted connectivity matrix. For increasing weight factor λ, the correlation between simulation and experiment improves (Figure 7D). For λ = 1, areas interact weakly, resulting in low correlation between simulation and experiment (Figure S2). For intermediate cortico-cortical connection strengths, the correlation of simulation vs. experiment exceeds that between the structural connectivity and experimental FC (Figure 7C), indicating the enhanced explanatory power of the dynamical model. From λ = 2 on, the network is prone to switch to the high-activity state (Figure S5). Thus, the highest correlation (r_Pearson = 0.47 for λ = 1.9) occurs just below the onset of a state in which the model visits both the low-activity and high-activity attractors.
[Figure 7 caption fragment: areas are ordered according to a clustering with the Louvain algorithm (Blondel et al., 2008) applied to the simulated data (top row) and to the experimental data (bottom row), respectively (see Experimental procedures). (C) Alluvial diagram showing the differences in the clusters for the structural connectivity (left), the simulated FC (center), and the experimentally measured FC (right). (D) Pearson correlation coefficient of simulated vs. experimentally measured FC for varying λ with λ_I = 2 (triangles) and λ_I = 1 (dot, cf. Figure S2); dashed line, Pearson correlation coefficient of structural connectivity vs. experimentally measured FC.]
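The comparison underlying panel D reduces to correlating off-diagonal FC entries across λ values; a sketch in which the array names are placeholders:

```python
import numpy as np

def fc_similarity(fc_sim, fc_exp):
    """Pearson correlation between the off-diagonal elements of two FC matrices."""
    mask = ~np.eye(fc_sim.shape[0], dtype=bool)
    return np.corrcoef(fc_sim[mask], fc_exp[mask])[0, 1]

# Example sweep over coupling strengths, assuming fc_by_lambda maps
# lambda -> simulated FC matrix and fc_exp holds the experimental FC:
# best_lambda = max(fc_by_lambda, key=lambda l: fc_similarity(fc_by_lambda[l], fc_exp))
```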
Discussion
In this work, we present a full-density spiking multi-scale network model of all vision-related areas of macaque cortex.
An updated connectivity map at the level of areas, layers, and neural populations defines its structure. Simulations of the network on a supercomputer reveal good agreement with multi-scale dynamical properties of cortex and supply testable hypotheses. Consistent with experimental results, the local structure of areas supports higher firing rates in inhibitory than in excitatory populations, and a laminar pattern with low firing rates in layers 2/3 and 6 and higher rates in layers 4 and 5. When cortico-cortical interactions are substantial, the network shows dynamic characteristics reflecting both local and global structure. Individual cells spike irregularly with increasing intrinsic time scales along the visual hierarchy and activity propagates in the feedback direction. Functional connectivity in the model agrees well with that from resting-state fMRI and yields better predictions than the structural connectivity alone. These features are direct consequences of the multi-scale structure of the network.
The structure of the model integrates a wide range of anatomical data, complemented with statistical predictions.
The cortico-cortical connectivity is based on axonal tracing data collected in a new release of CoCoMac (Bakker et al., 2012) and recent quantitative and layer-specific retrograde tracing (Markov et al., 2014b,a). We fill in missing data using relationships between laminar source and target patterns (Felleman & Van Essen, 1991;Markov et al., 2014b), and statistical dependencies of cortico-cortical connectivity on distance (Ercsey-Ravasz et al., 2013) and architectural differentiation (Hilgetag et al., 2015), an approach for which Barbas (1986) and Barbas & Rempel-Clower (1997) laid the groundwork. The use of axonal tracing results avoids the pitfalls of diffusion MRI data, which strongly depend on tractography parameters and are unreliable for long-range connections (Thomas et al., 2014).
Direct comparison of tracing and tractography data moreover reveals that tractography is particularly unreliable at fine spatial scales, and tends to underestimate cortical connectivity (Calabrese et al., 2015b).
Our model customizes the microcircuit of Potjans & Diesmann (2014) based on the specific architecture of each area, taking into account neuronal densities and laminar thicknesses. A stabilization procedure (Schuecker et al., 2015) further diversifies the internal circuitry of areas. Neuronal densities in the model decrease up the structural hierarchy, in line with an observed caudal-to-rostral gradient (Charvet et al., 2015). Combined with a constant synaptic volume density (O'Kusky & Colonnier, 1982;Cragg, 1967) this yields higher indegrees up the hierarchy.
This trend matches an increase in dendritic spines per pyramidal neuron (Elston & Rosa, 2000;Elston, 2000;Elston et al., 2011), also used in a recent multi-area population rate model (Chaudhuri et al., 2015). The local connectivity can be further refined using additional area-specific data.
We find total cortical thickness to decrease with logarithmized total neuron density. Similarly, total thicknesses from MR measurements decrease with architectural type (Wagstyl et al., 2015), which is known to correlate strongly with cell density. In our data set, total and layer 4 thickness are also negatively correlated with architectural type, but these trends are less significant than those with logarithmized neuron density. Laminar and total cortical thicknesses are determined from micrographs, which has the drawback that this covers only a small fraction of the surface of each cortical area. For absolute but not relative thicknesses, another caveat is potential shrinkage and obliqueness of sections. It has also been found that relative laminar thicknesses depend on the sulcal or gyral location of areas, which is not offset by a change in neuron densities (Hilgetag & Barbas, 2006). However, regressing our relative thickness data against cortical depth of the areas registered to F99 revealed no significant trends of this type (Figure S6). Laminar thickness data are surprisingly incomplete, considering that this is a basic anatomical feature of cortex. In future, more systematic estimates from anatomical studies or MRI may become available. Total thicknesses have already recently been measured across cortex (Calabrese et al., 2015a;Wagstyl et al., 2015), and could complement the data set used here covering 14 of the 32 areas. However, when computing numbers of neurons, using histological data may be preferable, because shrinkage effects on neuronal densities and laminar thicknesses partially cancel out.
In the model, we statistically assign synapses to target neurons based on anatomical reconstructions (Binzegger et al., 2004). On the target side, this yields similar laminar cell-body distributions for feedforward and feedback projections despite distinct laminar synapse distributions, mirroring findings in early visual cortex of mouse (De Pasquale & Sherman, 2011). Prominent experimental results on directional differences in communication patterns are based on LFP, ECoG and MEG recordings (van Kerkoerle et al., 2014;Bastos et al., 2015;Michalareas et al., 2016), which mostly reflect synaptic inputs. In future, these findings may be integrated into the stabilization procedure to better capture such differential interactions. While this is expected to enhance the distinction between average connection patterns for feedforward and feedback projections, known anatomical patterns suggest that a substantial fraction of individual pairs of areas deviate from a simple rule (Felleman & Van Essen, 1991;Krumnack et al., 2010;Bakker et al., 2012). The cortico-cortical connectivity may be further refined by incorporating the dual counterstream organization of feedforward and feedback connections (Markov et al., 2014b), or by taking into account different numbers of inter-area synapses per neuron in feedforward and feedback directions (Rockland, 2004).
In the resulting connectivity, we find multiple clusters reflecting the anatomical and functional partition of visual cortex into early visual areas, ventral and dorsal streams, parahippocampal and frontal areas, showing that the model construction yields a meaningful network structure. Moreover, the graded structural connectivity of the model agrees better with the experimentally measured resting-state activity than the binary connectivity from CoCoMac.
The network exhibits an asynchronous, irregular ground state across the network with population bursts due to inter-area interactions. Population firing rates differ across layers and inhibitory rates are generally higher than excitatory ones, in line with experimental findings (Swadlow, 1988;Fujisawa et al., 2008;Sakata & Harris, 2009). This can be attributed to the connectivity, because excitatory and inhibitory neurons are equally parametrized and excitatory neurons receive equal or stronger external stimulation compared to inhibitory ones. Laminar activity patterns vary across areas due to their customized structure and cortico-cortical connectivity.
Intrinsic single-cell time scales in the model are short in early visual areas and long in higher areas, on the same order of magnitude as found experimentally (Murray et al., 2014). The long time scales in higher areas are related to bursty firing associated with the high indegrees in these areas, but only occur in the presence of cortico-cortical interactions. Thus, the model predicts that the pattern of intrinsic time scales has a multi-scale origin. Systematic differences in synaptic composition across cortical regions and layers (Zilles et al., 2004;Hawrylycz et al., 2012) may also contribute to the experimentally observed pattern of time scales.
Inter-area interactions in the model are mainly mediated by population bursts of different lengths. The degree of synchrony accompanying inter-area interactions in the brain is as yet unclear. Obtaining substantial cortico-cortical interactions with low synchrony may be possible with finely structured connectivity and reduced noise input.
When neurons are to a large extent driven by a noisy external input, a smaller percentage of their activity is determined by intrinsic inputs, which can decrease their effective coupling (Aertsen & Preißl, 1990). One way of reducing the external drive while preserving the mean network activity may be for the drive to be attuned to the intrinsic connectivity (Marre et al., 2009). Stronger intrinsic coupling while maintaining stability may be achieved for instance by introducing specific network structures such as synfire chains (Diesmann et al., 1999) or other feedforward structures, subnetworks, or small-world connectivity (Jahnke et al., 2014); population-specific patterns of short-term plasticity (Sussillo et al., 2007); or fine-tuned inhibition between neuronal groups (Hennequin et al., 2014).
The synchronous population events propagate stably across multiple areas, predominantly in the feedback direction. The systematic activation of parietal before occipital areas in the model is reminiscent of EEG findings on information flow during visual imagery (Dentico et al., 2014) and the top-down propagation of slow waves during sleep (Massimini et al., 2004;Nir et al., 2011;Sheroziya & Timofeev, 2014). Our method for determining the order of activations is similar to one recently applied to fMRI recordings (Mitra et al., 2014). It could be extended to distinguish between excitatory and inhibitory interactions like those we observe between V1 and frontal areas ( Figure S4). In the network, cortico-cortical projections target both excitatory and inhibitory populations, with the majority of synapses terminating on excitatory cells. Stronger cortico-cortical synapses to enhance inter-area interactions require increased balancing of cortico-cortical inputs to preserve network stability. This is similar to the "handshake" mechanism in the microcircuit model of Potjans & Diesmann (2014) where interlaminar projections provide network stability by their inhibitory net effect.
The pattern of simulated interactions between areas resembles fMRI resting-state activity. The agreement between simulation and experiment peaks at intermediate coupling strength, where synchronized clusters also emerged most clearly in earlier models (Zhou et al., 2006;Deco & Jirsa, 2012). Furthermore, optimal agreement occurs just below a transition to a state where the network switches between attractors, supporting evidence that the brain operates in a slightly subcritical regime (Deco & Jirsa, 2012;Priesemann et al., 2014).
Time series of spiking activity reveal broad-band transmission between areas on time scales up to several seconds.
The low-frequency part of these interactions is comparable to fMRI data, which describes coherent fluctuations on the order of seconds. The long time scales in the model activity may be caused by eigenmodes of the effective connectivity that are close to instability or non-orthogonal (Hennequin et al., 2012). A potential future avenue for research would be to distinguish between such network effects and other sources of long time scales such as NMDA and GABA B transmission, neuromodulation, or adaptation effects.
For tractability, the model represents each area as a 1 mm² patch of cortex. True area sizes vary from ∼ 3 million cells in TH to ∼ 300 million cells in V1, for a total of around 8 · 10⁸ neurons in one hemisphere of macaque visual cortex, a model size that with recent advances in simulation technology (Kunkel et al., 2014) already fits on the most powerful supercomputers available today. Approaching this size would reduce the negative effects of downscaling. Overall, our model elucidates multi-scale relationships between cortical structure and dynamics, and can serve as a platform for the integration of new experimental data, the creation of hypotheses, and the development of functional models of cortex.
[Table 1 fragment (model summary):
Connectivity: source and target neurons drawn randomly with replacement (allowing autapses and multapses) with area- and population-specific connection probabilities.
Weights: fixed, drawn from a normal distribution with mean J and standard deviation δJ = 0.1J; 4E to 2/3E increased by factor 2 (cf. Potjans & Diesmann, 2014); weights of inhibitory connections increased by factor g; excitatory weights < 0 and inhibitory weights > 0 are redrawn; cortico-cortical weights onto excitatory and inhibitory populations increased by factors λ and λ_I λ, respectively.
Delays: fixed, drawn from a Gaussian distribution with mean d and standard deviation δd = 0.5d; delays of inhibitory connections a factor 2 smaller; delays rounded to the nearest multiple of the simulation step size h = 0.1 ms; inter-areal delays drawn from a Gaussian distribution with mean d = s/v_t, with distance s and transmission speed v_t = 3.5 m/s (Girard et al., 2001), and standard deviation δd = d/2, with distances determined as described in the Supplemental Experimental Procedures; delays < 0.1 ms before rounding are redrawn.
D: Neuron and synapse model: leaky integrate-and-fire (LIF) neuron with exponential synaptic current inputs; subthreshold dynamics (Table S3).
F: Measurements: spiking activity.]
In the following, we detail how we derive the structure of the model (summarized in Table 1), i.e., the population sizes, the local and cortico-cortical connectivity, and the external drive.
Numbers of neurons
We estimate the number of neurons N(A, i) in population i of area A in three steps: 1. We translate neuronal volume densities to the FV91 scheme from the most representative area in the original scheme (Table S4). For areas not covered by the data set, we take the average laminar densities for areas of the same architectural type. Table 4 of Hilgetag et al. (2015) lists the architectural types, which we translate to the FV91 scheme according to Table S4. To the previously unclassified areas MIP and MDP we manually assign type 5 like their neighboring area PO, which is similarly involved in visual reaching (Galletti et al., 2003) and was placed at the same hierarchical level by Felleman & Van Essen (1991).
2. We determine total and laminar thicknesses as detailed in Results.
3. The fraction γ(v) of excitatory neurons in layer v is taken to be identical across areas. For the laminar dependency, values from cat V1 (Binzegger et al., 2004) are used with 78% excitatory neurons in layer 2/3, 80% in L4, 82% in L5, and 83% in L6.
The resulting number of neurons in population i of area A is
N(A, i) = ρ(A, v_i) · D(A, v_i) · S(A) · γ(v_i) if i ∈ E, and N(A, i) = ρ(A, v_i) · D(A, v_i) · S(A) · (1 − γ(v_i)) if i ∈ I,
where v_i denotes the layer of population i, ρ(A, v_i) the neuron volume density of that layer, S(A) the surface area of area A (cf. Table S7), D(A, v_i) the thickness of layer v_i, and E, I the pools of excitatory and inhibitory populations, respectively. Table S8 gives the population sizes corresponding to the modeled 1 mm² area size.
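A direct transcription of this computation, with placeholder parameter values and the L4 excitatory fraction γ = 0.80 quoted above from Binzegger et al. (2004):

```python
def population_size(rho, thickness_mm, surface_mm2, gamma, excitatory=True):
    """N(A, i) = rho * D(A, v_i) * S(A) * gamma (excitatory) or * (1 - gamma)."""
    n_layer = rho * thickness_mm * surface_mm2
    return n_layer * (gamma if excitatory else 1.0 - gamma)

# Example: layer 4 of a hypothetical area, 1 mm^2 patch.
rho_L4 = 1.2e5   # neurons/mm^3 (assumed)
D_L4 = 0.3       # mm (assumed)
N_4E = population_size(rho_L4, D_L4, 1.0, gamma=0.80, excitatory=True)
N_4I = population_size(rho_L4, D_L4, 1.0, gamma=0.80, excitatory=False)
print(round(N_4E), round(N_4I))   # 28800 and 7200
```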
Local connectivity
The connection probabilities of the microcircuit model (Potjans & Diesmann, 2014) form the basis for the local circuit of each area. The connectivity between any pair of populations is spatially uniform. However, we take the underlying probability C for a given neuron pair to establish one or more contacts to decay with distance according to a Gaussian with standard deviation σ = 297 µm (Potjans & Diesmann, 2014). We approximate each brain area as a flat disk with (area-specific) radius R and assign polar coordinates r and θ to each neuron.
The radius determines the cut-off of the Gaussian and hence the precise connectivities. The average connection probability is obtained by integrating over all possible positions of the two neurons,
⟨C⟩ = C_0 / (πR²)² ∫₀^R ∫₀^{2π} ∫₀^R ∫₀^{2π} exp(−(r₁² + r₂² − 2 r₁ r₂ cos(θ₁ − θ₂)) / (2σ²)) r₁ r₂ dθ₁ dr₁ dθ₂ dr₂,
with C_0 the connection probability at zero distance. This can be reduced to a simpler form (Sheng, 1985). Averaged across population pairs, C_0 is 0.143 (computed from Eq. 8 and Table S1 in Potjans & Diesmann, 2014).
Note that Potjans & Diesmann (2014) only vary the position of one neuron, keeping the other neuron fixed in the center of the disk (Eq. 9 in that paper). Henceforth, we denote connection probabilities computed with the latter approach with the subscript PD14 and use primes for all variables referring to a network with the population sizes of the microcircuit model.
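The disk average above can be checked numerically by Monte Carlo sampling of neuron positions, as in this sketch (parameter values taken from the text; the sample size is arbitrary):

```python
import numpy as np

def mean_connection_probability(C0, R, sigma, n=200_000, seed=0):
    """Monte Carlo estimate of <C> for two neurons placed uniformly on a disk."""
    rng = np.random.default_rng(seed)
    # Uniform points on a disk of radius R: r = R * sqrt(u), theta uniform.
    r1, r2 = R * np.sqrt(rng.random((2, n)))
    th1, th2 = 2 * np.pi * rng.random((2, n))
    d2 = r1**2 + r2**2 - 2 * r1 * r2 * np.cos(th1 - th2)   # squared distance
    return C0 * np.exp(-d2 / (2 * sigma**2)).mean()

R0 = 1 / np.sqrt(np.pi)   # radius of a 1 mm^2 disk, in mm
print(mean_connection_probability(C0=0.143, R=R0, sigma=0.297))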
The parameters of the microcircuit model are reported for a 1 mm² patch of cortex, corresponding to R = 1/√π mm, which we call R_0. For each source population j and target population i, we first translate the connection probabilities of the 1 mm² model to the area-dependent radius R via an area-specific conversion factor c_A(R), which is larger for areas with smaller neuron densities because of the assumption of constant synaptic volume density. On the target side, we use the coordinates of the injection sites registered to the F99 atlas, available via the Scalable Brain Atlas (Bakker et al., 2015), to identify the equivalent area in the FV91 parcellation (cf. Table S9). There are data for 11 visual areas in the FV91 scheme, with repeat injections in six areas, for which we take the arithmetic mean. To map data on the source side from M132 to FV91, we count the number of overlapping triangles on the F99 surface between any given pair of regions and distribute the FLN proportionally to the amount of overlap, using the F99 region overlap tool at the CoCoMac site (http://cocomac.g-node.org). To estimate values for the areas not included in the data set, we exploit the exponential decay of connectivities with distance (Ercsey-Ravasz et al., 2013). As a next step, we determine the distribution of connections across source and target layers. On the source side, the laminar projection pattern can be expressed as the fraction of supragranular labeled neurons (SLN) in retrograde tracing experiments (Markov et al., 2014b). To determine the SLN entering into the model, we use the exact coordinates of the injections to determine the corresponding target area A in the FV91 parcellation, and for each pair of areas we take the mean SLN across injections. To map the data from M132 to FV91, we weight the SLN by the overlap c_{B,β} between area β in the former and area B in the latter scheme, and by the FLN to take into account the overall strength of the connection. We estimate missing values using a sigmoidal fit of SLN vs. the logarithmized ratio of overall cell densities of the two areas (Figure 1J). A relationship between laminar patterns and log ratios of neuron densities was suggested by Beul et al. (2015). Following Markov et al. (2014b), we use a generalized linear model and assume the numbers of labeled neurons in the source areas to sample from a beta-binomial distribution (e.g., Weisstein, 2005). This distribution arises as a combination of a binomial distribution with probability p of supragranular labeling in a given area, and a beta distribution of p across areas with dispersion parameter φ. With the probit link function g (e.g., McCulloch et al., 2008), the measured SLN relates to the log ratio of neuron densities for each pair of areas as
g(SLN) = a_0 + a_1 log(ρ_s / ρ_t),
where ρ_s and ρ_t denote the overall cell densities of the two areas of a connection, the log ratios and SLN are vectors over area pairs, and {a_0, a_1} are scalar fit parameters. To fit SLN vs. the log ratios of cell densities, we map the FV91 areas to the Markov et al. (2014b) scheme with the overlap tool of CoCoMac (see above) and compute the cell density of each area in the M132 scheme as a weighted average over the relevant FV91 areas. For areas with identical names in both schemes, we simply take the neuron density from the FV91 scheme. Figure 1J shows the result of the SLN fit in R (R Core Team, 2015) with the betabin function of the aod package (Lesnoff & Lancelot, 2012). In contrast to Markov et al. (2014b), who exclude certain areas when fitting SLN vs.
hierarchical distances in view of ambiguous hierarchical relations, we take all data points into account to obtain a simple and uniform rule.
As a further step, we combine SLN with CoCoMac data. The data sets complement each other: SLN provides quantitative data on laminar patterns of incoming projections for about one quarter of the connected areas. CoCoMac has values for all six layers, but limited to a qualitative strength ranging from 0 (absent) to 3 (strong), which we take to represent numbers of synapses in orders of magnitude (see Supplemental Experimental Procedures). Whether or not to include a layer in the source pattern P_s is based on CoCoMac (Felleman & Van Essen, 1991;Barnes & Pandya, 1992;Suzuki & Amaral, 1994b;Morel & Bullier, 1990;Perkel et al., 1986;Seltzer & Pandya, 1994) if the corresponding data are available (45% coverage); otherwise, we include L2/3, L5 and L6 and exclude L4 (Felleman & Van Essen, 1991). We model cortico-cortical connections as purely excitatory, a good approximation to experimental findings (Salin & Bullier, 1995;Tomioka & Rockland, 2007). If a layer is included in the source pattern, we assign a fraction of the total outgoing synapses to it according to the SLN. Since the SLN do not further distinguish between the infragranular layers 5 and 6, we use the rough connection densities from CoCoMac for this purpose when available; otherwise we distribute synapses in proportion to the numbers of neurons. On the target side, we determine the pattern of target layers P_t from anterograde tracer studies in CoCoMac (Jones et al., 1978;Rockland & Pandya, 1979;Morel & Bullier, 1990;Webster et al., 1991;Felleman & Van Essen, 1991;Barnes & Pandya, 1992;Distler et al., 1993;Suzuki & Amaral, 1994b;Webster et al., 1994), and we distribute synapses among the layers in the termination pattern in proportion to their thickness.
Since we use a point neuron model, we have to account for the possibly different laminar positions of cell bodies and synapses. The data of Binzegger et al. (2004) deliver three quantities that allow us to relate synapse to cell-body locations: first, the probability P(s_cc | c_B, s ∈ v) for a synapse in layer v on a cell of type c_B (e.g., a pyramidal cell with soma in L5) to be of cortico-cortical origin; second, the relative occurrence P(c_B) of the cell type c_B; and third, the total number of synapses N_syn(v, c_B) in layer v onto the given cell type. We map these data to our model by computing the conditional probability P(i | s_cc ∈ v) for the target neuron to belong to population i if a cortico-cortical synapse s_cc is in layer v. This probability equals the sum of probabilities that a synapse is established on the different Binzegger et al. subpopulations making up our populations,
P(i | s_cc ∈ v) = Σ_{c_B ∈ i} [ P(s_cc | c_B, s ∈ v) · N_syn(v, c_B) · P(c_B) ] / Σ_{c_B'} [ P(s_cc | c_B', s ∈ v) · N_syn(v, c_B') · P(c_B') ].
The numerator gives the joint probability that a cortico-cortical synapse is formed in layer v on cell type c_B, and the denominator is the probability of a cortico-cortical synapse in layer v, computed by summing over cell types. Here, N_syn,CC(v, c_B) = P(s_cc | c_B, s ∈ v) · N_syn(v, c_B) represents the number of cortico-cortical synapses in layer v on cell type c_B, which can be directly determined from the data. Combining these equations, we obtain the number of cortico-cortical (type III) synapses from excitatory population j of area B to population i of area A (cf. Figure 1K) via a piecewise rule: where CoCoMac laminar data are available, synapses are distributed according to the source pattern P_s and the SLN; otherwise, they are distributed according to the default patterns described above.
Z_i is an additional factor which takes into account that cortico-cortical feedback connections preferentially target excitatory rather than inhibitory neurons (Johnson & Burkhalter, 1996;Anderson et al., 2011). Figure S1 shows the resulting connection probabilities between all population pairs in the model.
External, random input
Since quantitative area-specific data on non-visual and subcortical inputs are highly incomplete, we use a simple scheme to determine numbers of external inputs: For each area, we compute the total number of external synapses as the difference between the total number of synapses and those of type I and III and distribute these such that all neurons in the given area have the same indegree for Poisson sources. In area TH, we compensate for the missing granular layer 4 by increasing the external drive onto populations 2/3E and 5E by 20 %. With the modified connectivity matrix yielded by the analytical procedure described in Schuecker et al. (2015), we set κ = 1.125 to increase the external indegree onto population 5E by 12.5 % and onto 6E by 42 % to elevate the firing rates in these populations. Table S11 lists the resulting external indegrees.
Network simulations
We performed simulations on the JUQUEEN supercomputer (Jülich Supercomputing Centre, 2015) with NEST version 2.8.0 (Eppler et al., 2015), with optimizations for use on the supercomputer that will be included in a future release. All simulations use a time step of 0.1 ms and exact integration for the subthreshold dynamics of the LIF neuron model (reviewed in Plesser & Diesmann, 2009). Simulations were run for 100.5 s (λ = 1.9), 50.5 s (λ ∈ [1.8, 2.0, 2.1]), and 10.5 s (λ ∈ [1.0, 1.5, 1.7, 2.5]) biological time, discarding the first 500 ms. Spike times were recorded from all neurons, except for the simulations shown in Figure 2A,B, where we recorded from 1000 neurons per population.
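For orientation, a minimal NEST 2.x script with the stated 0.1 ms resolution might look as follows. The population size, input rate, weight, and delay are placeholder values, not the model's parameters; the full model comprises many interconnected populations:

```python
import nest  # NEST 2.x

nest.ResetKernel()
nest.SetKernelStatus({"resolution": 0.1})  # 0.1 ms time step, as in the text

# One placeholder LIF population with exponential synapses.
neurons = nest.Create("iaf_psc_exp", 1000)
noise = nest.Create("poisson_generator", params={"rate": 8000.0})
spikes = nest.Create("spike_detector")

nest.Connect(noise, neurons, syn_spec={"weight": 87.8, "delay": 1.5})
nest.Connect(neurons, spikes)

nest.Simulate(10500.0)  # 10.5 s; the first 500 ms are discarded in analysis
```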
Analysis methods
We investigate the structural properties of the model with the map equation method (Rosvall et al., 2010). In this clustering algorithm, an agent performs random walks between graph nodes with probability proportional to the outdegree of the present node and a probability (p = 0.15) of jumping to a random network node. The algorithm detects clusters in the graph by minimizing the length of a binary description of the network using a Huffman code.
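A sketch of the underlying random walk with teleportation (the clustering itself, i.e., the Huffman code-length minimization, is omitted); the matrix conventions are assumptions of this sketch:

```python
import numpy as np

def visit_rates(W, p_jump=0.15, n_iter=500):
    """Stationary visit rates of a random walk on a weighted, directed
    graph with teleportation probability p_jump, as used by the map
    equation method; W[i, j] is the weight of edge i -> j."""
    W = np.asarray(W, dtype=float)
    n = W.shape[0]
    row_sums = W.sum(axis=1, keepdims=True)
    # follow out-edges in proportion to their weight; rows without
    # out-edges teleport uniformly
    T = np.divide(W, row_sums, out=np.full_like(W, 1.0 / n),
                  where=row_sums > 0)
    p = np.full(n, 1.0 / n)
    for _ in range(n_iter):
        p = (1.0 - p_jump) * p @ T + p_jump / n
    return p
```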
To assess the quality of the clustering, we use a modularity measure which extends a measure for unweighted, directed networks (Leicht & Newman, 2008) to weighted networks, analogous to Newman (2004).

Instantaneous firing rates are determined as spike histograms with bin width 1 ms averaged over the entire population or area. In Figure 4G, Figure S2G, and to determine the temporal hierarchy, we convolve the histograms with Gaussian kernels with σ = 2 ms. Spike-train irregularity is quantified for each population by the revised local variation (LvR).

The temporal hierarchy is based on the cross-covariance function between area-averaged firing rates. We use a wavelet-smoothing algorithm (signal.find_peaks_cwt of the Python scipy library (Jones et al., 2001), with peak width Δ = 20 ms) to detect extrema for τ ∈ [−100, 100] ms and take the location of the extremum with the largest absolute value as the time lag.
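A possible implementation of the lag extraction, assuming 1 ms bins and the smoothing and peak-width parameters quoted above; the sign convention of the returned lag is a choice of this sketch:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import find_peaks_cwt

def peak_lag(rate_a, rate_b, dt=1.0, max_lag=100.0, width=20.0):
    """Lag between two area-averaged rate histograms: smooth with a
    sigma = 2 ms Gaussian, cross-correlate, detect extrema of |cc| with
    a wavelet peak finder, and return the lag of the largest-magnitude
    extremum within +/- 100 ms."""
    a = gaussian_filter1d(rate_a - rate_a.mean(), sigma=2.0 / dt)
    b = gaussian_filter1d(rate_b - rate_b.mean(), sigma=2.0 / dt)
    cc = np.correlate(a, b, mode="full")
    lags = np.arange(-len(b) + 1, len(a)) * dt
    keep = np.abs(lags) <= max_lag
    cc, lags = cc[keep], lags[keep]
    idx = find_peaks_cwt(np.abs(cc), widths=np.array([width / dt]))
    if len(idx) == 0:  # fall back to the global extremum
        idx = np.array([int(np.argmax(np.abs(cc)))])
    best = idx[np.argmax(np.abs(cc[idx]))]
    return lags[best]
```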
Functional connectivity (FC) is defined as the zero-time-lag cross-correlation coefficient of the area-averaged synaptic inputs. The synaptic input to target population i containing N_i neurons is reconstructed from the normalized post-synaptic current PSC_j(t) = exp(−t/τ_s), the population firing rate ν_j of each source population j, the indegree K_ij, and the synaptic weight J_ij of the connection from j to i.
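Schematically, the FC computation could look as follows; normalization constants (e.g., the division by N_i) are omitted and the array layout is an assumption of this sketch:

```python
import numpy as np

def synaptic_input(nu, K, J, tau_s=0.5, dt=0.1):
    """Summed synaptic input reconstructed from population rates:
    sum_j K_ij * J_ij * (PSC * nu_j)(t) with PSC(t) = exp(-t / tau_s).
    nu: (n_pops, n_timesteps) rates; K, J: (n_targets, n_pops)."""
    t = np.arange(0.0, 10.0 * tau_s, dt)
    psc = np.exp(-t / tau_s)
    drive = (K * J) @ nu  # (n_targets, n_timesteps)
    return np.array([np.convolve(d, psc)[: nu.shape[1]] for d in drive])

def functional_connectivity(signals):
    """Zero-time-lag correlation matrix of the area-averaged signals;
    `signals` has shape (n_areas, n_timesteps)."""
    return np.corrcoef(signals)
```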
The clustering of the FC matrices was performed using the function modularity_louvain_und_sign of the Brain Connectivity Toolbox (BCT; http://www.brain-connectivity-toolbox.net) with the Q* option, which weights positive connections more strongly than negative ones, as introduced by Rubinov & Sporns (2011).
Macaque resting-state fMRI
Data were acquired from six male macaque monkeys (4 Macaca mulatta and 2 Macaca fascicularis). All experimental protocols were approved by the Animal Use Subcommittee of the University of Western Ontario Council on Animal Care and were in accordance with the guidelines of the Canadian Council on Animal Care. Data acquisition, image preprocessing, and a subset of subjects (5 of 6) were previously described (Babapoor-Farrokhran et al., 2013). Briefly, ten 5-min resting-state fMRI scans (TR: 2 s; voxel size: 1 mm isotropic) were acquired from each subject under light anaesthesia (1.5% isoflurane). Additional processing for the current study included the regression of nuisance variables using the AFNI software package (afni.nimh.nih.gov/afni), which included six motion parameters as well as the global white matter and CSF signals. The global mean signal was not regressed.
The FV91 parcellation was drawn on the F99 macaque standard cortical surface template (Van Essen et al., 2001) and transformed to volumetric space with a 2 mm extrusion using the Caret software package (http://www.nitrc.org/projects/caret). The parcellation was applied to the fMRI data and functional connectivity was computed as the Pearson correlation coefficients between probabilistically weighted ROI timeseries for each scan.
Correlation coefficients were Fisher z-transformed and correlation matrices were averaged within animals and then across animals before transforming back to Pearson coefficients. The thickness data are the same as in Figure 1. Cortical depth data were obtained from F99 surface statistics available through the Caret software (Van Essen, 2012). Values for each area are averaged across the cortical surface and both hemispheres. The data were obtained using the F99 sulcal depth tool on http://cocomac.g-node.org and can be directly accessed via these two links: http://cocomac.g-node.org/cocomac2/services/f99_sulcal_depth.php?atlas=FV91&shape=Depth-Right&mode=avg&output=tsv&run=1 and http://cocomac.g-node.org/cocomac2/services/f99_sulcal_depth.php?atlas=FV91&shape=Depth-Left&mode=avg&output=tsv&run=1

Figure caption: Area-averaged firing rates in V1 for four different settings of λ. The simulation for λ = 2.5 was run for 10 s biological time only. From λ = 2 on, the network spontaneously enters a high-activity state. For λ = 2.5, the network is in this state from the outset.
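The averaging procedure described above (before the figure caption) amounts to the following small helper; the clipping is added only to keep arctanh finite on the unit diagonal:

```python
import numpy as np

def average_fc(fc_matrices):
    """Average a stack of Pearson correlation matrices via the Fisher
    z-transform: z = arctanh(r), mean over scans/animals, tanh back."""
    r = np.clip(np.asarray(fc_matrices), -0.999999, 0.999999)
    return np.tanh(np.arctanh(r).mean(axis=0))
```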
Supplemental Experimental Procedures
Cortical areas in the model

Assignment of the areas in Table 4 of Hilgetag et al. (2015) to the modeled areas in the parcellation scheme of Felleman & Van Essen (1991). Entries marked with a star are used to translate the overall neuron density and cortical thickness, which are not available in the finer of the two parcellations used by Hilgetag et al. (2015).

Derivation of the conversion factor c_A(R) for the local connectivity

The indegrees of the microcircuit model (Potjans & Diesmann, 2014), K_ij(R), are adapted to the area-specific laminar compositions of the multi-area model with an area-specific factor c_A(R), where i, j denote single populations in the 1 mm² patch of the cortical area. The total number of synapses local to the patch (type I) is the sum over the projections between all populations of the area. We thus obtain c_A(R) by determining N_syn,I. To this end, we use retrograde tracing data from Markov et al. (2011) consisting of fractions of labeled neurons (FLN) per area as a result of injections into one area at a time. The fraction intrinsic to the injected area, FLN_i, is approximately equal for all 9 areas where this fraction was determined, with a mean of 0.79. For areas modeled with reduced size, this fraction is smaller because, in that case, synapses of both type I and II contribute to the value of 0.79 (Figure 1E). We approximate the increasing contribution of type I synapses with the modeled area size as the increase in indegrees averaged over population pairs,

N_syn,I(R) / N_syn,tot(R) = ⟨K_ij(R)⟩ / ⟨K_ij(R_full)⟩ · N_syn,I(R_full) / N_syn,tot(R_full),

where in the last step we use (4). Using N_syn,I(R_full)/N_syn,tot(R_full) = FLN_i, we obtain

N_syn,I(R) = FLN_i · ⟨K_ij(R)⟩ / ⟨K_ij(R_full)⟩ · N_syn,tot(R),

where N_syn,tot(R) = ρ_syn π R² D, with D the total thickness of the given area. The conversion factor c_A(R) can thus be obtained from N_syn,I(R). We substitute this into (4) for the modeled areas where R = R_0 and obtain the population-specific indegrees for type I synapses, K_ij,I := K_ij(R = R_0).
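The final estimate can be summarized schematically; the argument names and the exact way the factors combine are placeholders reflecting the description above rather than the original derivation:

```python
import numpy as np

def n_syn_type_I(R, D, rho_syn, FLN_i, indegree_ratio):
    """Sketch of the estimate above: total synapse count of a disc of
    radius R and thickness D, rho_syn * pi * R**2 * D, multiplied by the
    intrinsic fraction FLN_i and the mean indegree ratio
    <K_ij(R)> / <K_ij(R_full)> passed in as `indegree_ratio`."""
    n_syn_tot = rho_syn * np.pi * R**2 * D
    return n_syn_tot * FLN_i * indegree_ratio
```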
Processing of CoCoMac data
We use a new release of CoCoMac, in which mappings from brain regions in other nomenclatures were scrutinized to ensure a consistent transfer of connections into the FV91 name space. The CoCoMac database provides information on laminar patterns on the source side from retrograde tracing studies as well as on the target side from anterograde tracing studies. The data were extracted using the following link, which specifies all search options: http://cocomac.g-node.org/cocomac2/services/connectivity_matrix.php?dbdate=20141022&AP=AxonalProjections_FV91&constraint=&origins=&terminals=&square=1&merge=max&laminar=both&format=json&cite=1 Furthermore, we obtained the numbers of confirmative studies for each area-level connection with the following link: http://cocomac.g-node.org/cocomac2/services/connectivity_matrix.php?dbdate=20141022&AP=AxonalProjections_FV91&constraint=&origins=&terminals=&square=1&merge=count&laminar=off&format=json&cite=1
To process these data, we applied the following steps:
• A connection is assumed to exist if there is at least one confirmative study reporting it.
• A connection from layer 2/3 is modeled if CoCoMac indicates a connection from either or both of layers 2 and 3.
• In the database, some layers carry an 'X' indicating a connection of unknown strength. We interpret these as '2' (corresponding to medium connection strength).
• We take connection strengths in CoCoMac to represent numbers of synapses in orders of magnitude, i.e., the relative number of synapses N^ν_syn in layer ν of area A with connection strength s(ν) is computed as N^ν_syn = 10^s(ν) / Σ_{ν′ ∈ A} 10^s(ν′) (see the sketch after this list).
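The order-of-magnitude rule of the last item corresponds to this small helper:

```python
import numpy as np

def layer_fractions(strengths):
    """Relative synapse numbers from CoCoMac ordinal strengths s(nu),
    interpreted as orders of magnitude:
    N_nu = 10**s(nu) / sum over nu' of 10**s(nu')."""
    w = 10.0 ** np.asarray(strengths, dtype=float)
    return w / w.sum()

# layer_fractions([1, 3]) -> [0.0099, 0.9901]: a 'strong' layer receives
# two orders of magnitude more synapses than a 'weak' one
```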
Mapping of synapse to cell-body locations
The detailed calculation is given in the section Experimental Procedures. The numbers are listed in Table S10.

Table S10: Conditional probabilities P(i | s_cc ∈ v) for the target neuron to belong to population i if a cortico-cortical synapse s_cc is located in layer v, computed with (8) applied to the data set of Binzegger et al. (2004). Empty cells signal zero probabilities.
Analytical mean-field theory
In Schuecker et al. (2015), an analytical mean-field theory is derived describing the stationary population-averaged firing rates of the model. In the diffusion approximation, which is valid for high indegrees and small synaptic weights, the dynamics of the membrane potential V and synaptic current I_s are described by (Fourcaud & Brunel, 2002)

τ_m dV/dt = −V + I_s(t),
τ_s dI_s/dt = −I_s + μ + σ √τ_m ξ(t),

where the input spike trains are replaced by a current fluctuating around the mean μ with variance σ², with fluctuations drawn from a random Gaussian process ξ(t) with ⟨ξ(t)⟩ = 0 and ⟨ξ(t)ξ(t′)⟩ = δ(t − t′). Going from the single-neuron level to a description of populations, we define the population-averaged firing rate ν_i due to the population-specific input μ_i, σ_i. The stationary firing rates ν_i are then given by (Fourcaud & Brunel, 2002)

1/ν_i = τ_r + τ_m √π ∫_{(V_r − μ_i)/σ_i + γ√(τ_s/τ_m)}^{(θ − μ_i)/σ_i + γ√(τ_s/τ_m)} e^{u²} (1 + erf(u)) du,

with threshold θ, reset potential V_r and refractory period τ_r, which holds up to linear order in √(τ_s/τ_m) and where γ = |ζ(1/2)|/√2, with ζ denoting the Riemann zeta function (Abramowitz & Stegun, 1974).
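A direct numerical evaluation of this rate formula, assuming consistent units (times in seconds give rates in Hz); the parameter names for threshold, reset, and refractory period are standard for this formula but are not defined in the excerpt above:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

GAMMA = 1.4603545 / np.sqrt(2.0)  # |zeta(1/2)| / sqrt(2)

def stationary_rate(mu, sigma, theta, V_r, tau_m, tau_s, tau_r):
    """Stationary rate of an LIF neuron with exponential synapses after
    Fourcaud & Brunel (2002), to linear order in sqrt(tau_s/tau_m)."""
    shift = GAMMA * np.sqrt(tau_s / tau_m)
    lower = (V_r - mu) / sigma + shift
    upper = (theta - mu) / sigma + shift
    integral, _ = quad(lambda u: np.exp(u**2) * (1.0 + erf(u)), lower, upper)
    return 1.0 / (tau_r + tau_m * np.sqrt(np.pi) * integral)
```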
Algorithm for the temporal hierarchy
To determine a temporal hierarchy for the onset of population bursts, we determine the peak locations τ_AB of the cross-correlation function for each pair of areas A, B. We then define a scalar function f(A, B) for the deviation between the distance of hierarchical levels h(A), h(B) and the peak locations. To determine the hierarchical levels, we minimize the sum of f(A, B) over all pairs of areas,

S = Σ_{A,B} f(A, B),

using the optimize.minimize function of the scipy library (Jones et al., 2001) with random initial hierarchical levels. We verified that the initial choice of hierarchical levels does not influence the final result. We obtain hierarchical levels on an arbitrary scale, which we normalize to values h(A) ∈ [0, 1] ∀A.
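A compact sketch of this fitting procedure; the squared-deviation form of f(A, B) and its sign convention are assumptions of this sketch, since the exact definition of f is not reproduced here:

```python
import numpy as np
from scipy.optimize import minimize

def fit_hierarchy(tau, seed=0):
    """Fit hierarchical levels h(A) to the pairwise peak lags tau[A, B],
    assuming f(A, B) = (h(B) - h(A) - tau[A, B])**2."""
    n = tau.shape[0]
    rng = np.random.default_rng(seed)

    def cost(h):
        return np.sum((h[None, :] - h[:, None] - tau) ** 2)

    h = minimize(cost, rng.random(n)).x  # random initial levels
    return (h - h.min()) / (h.max() - h.min())  # normalize to [0, 1]
```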
| 2016-04-15T08:05:14.000Z | 2015-11-30T00:00:00.000 | {
"year": 2015,
"sha1": "ae3babbf421ffa17853938f61cd09674a562e630",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/ploscompbiol/article/file?id=10.1371/journal.pcbi.1006359&type=printable",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "292da7cda481f24c23197d525942895e6812de1e",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
226348679 | pes2o/s2orc | v3-fos-license | Exploring Teachers' Knowledge and Students' Status about Dyscalculia at Basic Level Students in Nepal
Dyscalculia refers to a specific and lifelong difficulty in learning mathematics. Dyscalculia has been observed among students even at basic levels of mathematical study, and its effects on mathematical learning are serious. This study explores teachers' knowledge and the status of students with dyscalculia at basic level schools in Nepal. It used a descriptive survey design. The study comprised 150 basic level school teachers and 500 students from Ilam Municipality, Ilam, selected by simple random sampling. To explore the teachers' knowledge about dyscalculia, a mathematics learning difficulty test questionnaire was used. Similarly, the status of dyscalculic students was measured by a dyscalculia screening test. The teachers were found to have inadequate knowledge regarding dyscalculia. The associations between the teachers' knowledge and the demographic variables of gender, school type, and educational qualification were not found to be significant, except for teaching experience. The study also revealed that 6.8 percent of students were dyscalculic. Therefore, the concerned authorities are recommended to improve teacher knowledge regarding dyscalculia for the proper identification, guidance, and intervention of dyscalculic learners.
INTRODUCTION

Background of the Study
Mathematics is considered to be a difficult subject due to its abstract nature, and difficulty in learning mathematics is a global issue. It is considered an important and necessary subject in school education due to its everyday uses. Especially in mathematics and science, many students believe that success requires inborn ability or even brilliance, rather than persistence, good approaches, support from others, and learning over time (Hong & Lin-Siegler, 2012). Therefore, mathematics has long been given special attention in school education. However, not all expected outcomes in mathematics have been achieved to date, and negative student attitudes towards learning mathematics have not been reduced. In school level education, teachers have a very important role in motivating students and creating positive attitudes towards learning mathematics. They can assist students in overcoming their learning difficulties through intensive educational intervention. These difficulties or disabilities are reflected in learners' characteristics, including cognitive and neuropsychological profiles, low linguistic skills, a lack of prerequisite knowledge and skills for mathematics learning, and learning difficulties or disabilities (Sharma, 2020). Hence, there exists a need for specialized instruction and proper intervention that goes beyond existing classroom instruction to reduce negative attitudes toward learning mathematics and improve performance. These interventions should focus on sound practice and the best delivery of the intended outcomes. They should be effective, efficient, and elegant, and must be based on sound principles of mathematics learning, reflecting the characteristics of the difficulty and focusing on the practices that deliver the outcomes envisioned (Sharma, 2020).
In the context of Nepal, learning disability must be considered in terms of managerial practices and instructional priority in schools. Around 97 thousand children with disabilities in Nepal are studying together with other children in schools, while the number of children without school education due to disabilities is unknown (DoE, 2014). Students who have learning disabilities in mathematics are treated as normal students. As reported by the national population census report (CBS, 2014), 1.93 percent of the total population has some kind of learning disability in Nepal. Students with learning disabilities at basic level (grades 1 to 8) made up 2.13 percent of total enrollment in 2017 (MoE, 2018). In 2015, the constitution of Nepal provisioned free and compulsory primary education and free secondary education as fundamental rights, as well as the right to free education for disabled persons. Likewise, Nepal adopted the Disability Rights Act and an Inclusive Education Policy for persons with disabilities in 2017. The policy included provisions to educate all children with disabilities separately and without discrimination. Nepal's Disability Rights Act (2017) has provisions for special teacher training for those who educate children with disabilities, to enhance knowledge and skills as well as to promote access to quality education. It has also focused on developing specialized teachers for discipline and classroom management. Nevertheless, a huge mass of students is at the underperforming level, especially in mathematics (NASA, 2019), and mathematics achievement has thus suffered for some years. Hence, there is a need to improve attitudes towards students with dyscalculia or disabilities to support their better performance.
Overview of Learning Difficulty and Dyscalculia
The term 'learning difficulty' is used to describe the general learning problems of students in the academic field. Likewise, the term dyscalculia denotes learning problems in mathematics; it is a specific term used for mathematics learning disabilities. The two terms are not the same, but both are used in the field of education. Learning difficulties are treated as situational, not global: they are situated outside the child and result from specific causes, such as physical, educational, emotional, or environmental factors. Effective educational intervention for learning difficulties will improve basic academic skills such as reading, writing, and mathematics and will result in improved achievement levels. On the other hand, dyscalculia is the term used when the cause of the learning disability is situated in the child's own cognitive development and is of neurological origin. It is lifelong and global and can be improved with well-targeted support and intervention (Hornigold, 2015). It is a specific kind of learning difficulty whereby learners find difficulties in a specific area of learning, for example, reading or operating numbers or symbols; the child may have no particular problems in other areas. A child with a mathematics difficulty may perform well in reading, writing, and speaking in other subjects.
Learning Difficulty
Learning difficulties in mathematics take different forms, such as difficulty in acquiring learning procedures, in the conceptual processing of fundamental concepts, or both together. Some students may have difficulty with any topic in arithmetic, algebra, or geometry (Chinn, 2016). Some students may exhibit common mathematical difficulties in numerical and arithmetic skills such as counting and calculation (Hornigold, 2015). Environmental factors (such as low attendance, inappropriate ways of teaching, lack of practice, a poor curriculum, and a low standard of mastery of the subject matter) create much greater difficulty in learning mathematics during education (Sharma, 2020). Similarly, others have mathematics learning difficulties involving lagging in learning numbers, confusion over the digits of numbers, difficulty in problem solving, difficulty understanding mathematical language, and forgetting the basic concepts of mathematics (Courtade, Test, & Cook, 2015).
Contribution to the literature
• This study helps to create equal opportunities in education for learners who have learning disabilities due to dyscalculia, by providing special support and feedback in order to impart essential mathematical knowledge and skills.
• This study helps to distinguish between specific and general learning difficulty in mathematics, which may be caused by neurological conditions or other external factors.
• This study contributes to the literature on effective content delivery and special support for students in mathematics to address specific as well as general problems in teaching and learning mathematics.
• The study provides baseline information on teachers' knowledge about dyscalculic children and their status at primary schools in Nepal and in other international contexts.
• The study provides a basis for educational policy makers, planners, administrators, and teachers to reduce problems related to teachers and students regarding dyscalculia in a holistic approach.
Students' learning difficulties can be overcome through appropriate intensive educational intervention applied consistently over a certain period. Learning difficulties are considered to result from specific causes, such as physical, educational, emotional, or environmental factors, and can improve through effective educational intervention. Individuals who exhibit learning difficulties may not be intellectually impaired; rather, their learning problems may be the result of an inadequate design of instruction in curricular materials (Carnine, Jitendra, & Silbert, 1997). In contrast, mathematical difficulties that are hypothesized to be due to an inherent weakness in mathematical cognition refer to poor mathematics achievement not attributable to sociocultural or environmental causes such as poor instruction (Soares, Evans & Patel, 2018). At some points during mathematics learning, some common mathematics difficulties may occur, such as difficulty remembering number facts and times tables, fractions, decimals, percentages, and calculation. Mostly, such difficulties can be overcome with additional support and proper intervention; such difficulty in mathematics does not necessarily mean dyscalculia (Hornigold, 2015). Moreover, the use of appropriate methods and different effective models makes students' learning effective and favorable (Shalev, 2004).
Dyscalculia
The term 'dyscalculia' has Greek as well as Latin origins: the prefix 'dys' in Greek means 'badly', whereas 'calculia', from the Latin 'calculare', means to count (Khing, 2016). This indicates that dyscalculia means to count badly, although the condition is more complex than that. The term dyscalculia was originally defined by the Czechoslovakian researcher Kosc (1974) as a difficulty in mathematics resulting from impairment to particular parts of the brain involved in mathematical cognition, but without a general difficulty in cognitive function. The term is used to describe specific difficulties with mathematics and denotes not a lack of intelligence, but a difficulty in acquiring the essential concepts that underpin the skills needed to perform mathematical procedures (Glynis, 2013). Researchers have generally agreed that dyscalculia relates to brain-related conditions, genetic causes, environment, brain differences, and working memory (Hornigold, 2015). Mathematics is a subject combining many different areas of study, and an aspect that has not been understood can have an effect on other areas. According to Hornigold (2015), approximately 25% of students in a class are expected to struggle with mathematics difficulties at different points in their studies. The usual difficulties in mathematics are: recalling number facts, times tables, backward counting, decimals and percentages, telling the time, and calculations related to money and fractions. Most such difficulties can be overcome with additional support and intensive intervention.
Mathematics cannot be separated from the particular cognitive processes in operation whenever minds are applied to a mathematical task (Sharma, 2020). Many people have mixed feelings about mathematics. Many students regard mathematics as a boring and disengaging subject (Colgan, 2014) and thus hate mathematics and try to avoid it, which is a cause of mathematics anxiety. Mathematics is often portrayed as a difficult subject that is inaccessible, uninteresting, not for cool or engaging people, and not for girls (Boaler & Dweck, 2016). A huge number of students across a widespread range have difficulties in understanding the complex concepts of mathematics (Brown et al., 2008). Likewise, there are several learner types that have an 'extreme difficulty in mathematics' (Butterworth, 2005). Mathematics can be a very interesting, fun, and thought-provoking subject for those learners who can enjoy it (Fu Sai & Chin Kin, 2017). Mathematics can also be a frustrating subject for many children who have problems with computation and application (Chinn, 2015). Thus, children with dyscalculia do not like to learn mathematics and do not have fun with mathematical learning.
Dyscalculia is a specific learning difficulty affecting a person's mathematical learning capability. It is a neurologically based disorder of mathematical abilities (Wadlington & Wadlington, 2008). In recent times, researchers have begun to find a strong correlation between dyscalculia and neurobiology (Kucian & Von Aster, 2015; Soares & Patel, 2015). The term dyscalculia is frequently used as a synonym for learning disabilities in mathematics or arithmetic learning disorder (Devine et al., 2013; Soares & Patel, 2015). The prevalence of dyscalculia is between 3-6 percent (Kucian & von Aster, 2015), and prevalence among females is reported to be greater than among males, although there are opposing findings. Likewise, Hornigold (2015) states that around 6 percent of the population has dyscalculia, with girls and boys affected equally (Hudson & English, 2016). However, a recent report by Sharma (2020) claims that the occurrence of specific learning difficulty (dyscalculia) in the population of school-age children is about 6-8 percent, which conforms to Ardilla & Roselli (2002). This suggests that the percentage of dyscalculic learners has been increasing in recent years. Dyscalculia is also known as 'difficulty with numbers', 'being bad at mathematics' or 'number blindness'. It is definitely a difficulty with numbers, but should be considered a much deeper-rooted problem than merely being bad at mathematics (Hornigold, 2015). It is further stated that dyscalculia is a specific difficulty with numbers, not with every branch of mathematics, and can be improved with special support and intervention. Dyscalculic children have two types of problems, related to mathematical computation and to reasoning (Khing, 2016). Computation-related problems affect an individual's ability to solve mathematical calculations such as addition, subtraction, multiplication, and division problems. Such problems usually begin at basic level and continue through secondary level. Moreover, this is a lifetime trouble whose effects should not be ignored (Hornigold, 2015). Reasoning-related problems affect an individual while solving problems that require mathematical reasoning. People with dyscalculia have difficulty with the operation of numbers and the abstract concepts of time and direction. Dyscalculia is not simply a result of improper teaching strategies or logical and sensory deficiency (Rubinsten & Henik, 2009; Rubinsten & Tannock, 2010); medical circumstances, cultural characteristics (Shalev & Von Aster, 2008) and lack of motivation may also have an effect on learning (Geary, 2006). Dyscalculic students benefit from particular intervention strategies together with individualized teaching (Butterworth, 2005; Re et al., 2014), multisensory strategies (Attwood, 2010) and differentiated assessment (Little, 2009). Similarly, mathematical concepts can be taught effectively to students with mental disabilities via the use of computer software (Soykan & Ozdamli, 2017), and computer-assisted programs can help students to increase their ability in reading (Akbari, Soltani-Kouhbanani & Khosrorad, 2019).
Teachers' Knowledge about Dyscalculia
Teachers' knowledge about dyscalculia and dyscalculic students is essential for effective teaching. Although dyscalculia is not the result of improper pedagogy, proper methods of knowledge transfer are necessary to provide effective intervention for students with this disability (Paula, Paulo, & Cadime, 2016). Teachers at basic level have a vital role in identifying dyscalculic students' difficulties early and providing them support for intensive intervention. Teachers with adequate knowledge of detecting dyscalculic students, and of intervention strategies, help the students to achieve at their ability level. At the same time, appraisal and remediation of dyscalculic students are strongly associated with their personal capabilities, and their weak points must be established before conducting any remedial attempt. Timely screening of dyscalculic students can have two-way benefits. On one hand, they can be facilitated through well-tailored intervention from suitably qualified teachers (Hornigold, 2015); on the other hand, they can be taught with multi-sensory teaching, using all three channels (visual, auditory and kinesthetic) simultaneously by the same class teacher. The use of different channels and methods with proper materials can support better learning. As Hornigold (2015) stated, the more ways the information is presented, the more likely we are to remember it. Thus, the teacher's knowledge about dyscalculia helps in planning a detailed intervention program in a timely manner, which in turn helps to support and alleviate learners' specific needs successfully.
Significance of the Study
Students with dyscalculia have specific mathematical learning difficulties in solving basic mathematical operations. Students with such specific mathematics learning problems show persistent and extreme difficulty in mathematics but function well in other areas. Dyscalculia is a heterogeneous learning impairment affecting numerical and/or arithmetic functioning at behavioral, psychological, and neuronal levels (Kucian & Von Aster, 2015). It is further stated that a person suffering from this disability may struggle, despite numerous efforts, to master a wide range of basic mathematical skills such as counting, numerical operations, arithmetic, transcoding between words, digits and quantities, and spatial number representation. Dyscalculia affects the learner more in the early stages and during engagement with fundamental concepts of mathematics learning (Hornigold, 2015). As early as first grade, students may start displaying negative attitudes towards learning mathematics and gradually develop mathematics anxiety. Moreover, schools have not given special attention to classroom delivery and teaching-learning strategies for students with mathematics learning difficulty (Khing, 2016). At the same time, students' performance in mathematics gradually decreases as they move to the upper grades.
All walks of life require the use of numerical information for grasping context, informing others, and resolving situations quickly. There are, however, a large number of students who may be struggling to learn mathematics, especially arithmetic, and who struggle with even the most basic numerical calculations and operations. The low achievement of students in mathematics in different grades at school level is a serious issue, and the presence of students with dyscalculia might be one of its causes. No research in the field of mathematics learning disabilities, especially regarding dyscalculia, has been conducted in Ilam, Nepal. Thus, this study reveals information about basic level teachers' knowledge of dyscalculia and the status of dyscalculic students at basic level schools in Ilam Municipality, so that teachers could help students with dyscalculia to overcome their difficulties and enjoy, rather than suffer, the time they spend in mathematical activity.
The study can help concerned teachers, school headmasters, and educational planners and administrators to support dyscalculic students and implement the intensive educational interventions necessary to assist students with learning disabilities, interventions that are currently missing due to the lack of knowledge and understanding of mathematics learning difficulty or dyscalculia, the lack of support and other resources, and the perceived barriers that impact classroom instruction and support (Graves, 2018). Likewise, this study provides a base for the concerned authorities of the local government to make policy and to provide concerned mathematics teachers with the specialized instruction training required to meet the various needs of these specific dyscalculic students. This study can be a milestone in mathematics learning and also in the field of special education in Nepal.
Objectives of the Study
The objectives of this study are:
1. To find out information about basic level teachers' knowledge of their dyscalculic students.
2. To investigate the teachers' knowledge towards dyscalculic students at basic level in relation to gender, school type, educational qualification, and teaching experiences.
3. To identify the number of dyscalculic students studying at basic level.
Hypotheses of the Study
The null hypotheses of the study were formulated as follows:
1. The teachers of basic level have a low level of knowledge about dyscalculia.
2. The demographic variables of gender, school type, educational qualification and teaching experience of basic level school teachers have no significant effect on their level of knowledge about dyscalculia.
3. The number of dyscalculic students studying at basic level is high.
Research Design
The study adopted a quantitative survey design to investigate teachers' knowledge about dyscalculia and the status of dyscalculic students studying at basic level in Ilam Municipality, Nepal. The survey design was used in light of the nature of this study, to accomplish its objectives and test its hypotheses.
Population and Sample
In this study, a simple random sampling technique was employed to investigate teachers' knowledge of dyscalculia and the number of dyscalculic students studying in both community and institutional schools at basic level in Ilam Municipality, Province No. 1. In the course of the study, 150 basic level school teachers, both male and female, from 48 community and 15 institutional schools of Ilam Municipality were selected as the sample. Of these 150 basic level teachers, 114 were from community schools and the remaining 36 were from institutional schools. Likewise, 500 low-performing students in mathematics studying in grades V and VI in Ilam Municipality were selected in order to find the number of dyscalculic students. In the random selection process, students with high performance were omitted from the list of candidates. The list of students for random selection was made with the help of the students' mathematics test scores or grades secured in the preceding class in school. In the sample selection procedure, priority was given to selecting low-performing or weak students in mathematics, with a focus on possible dyscalculic students, since persons with dyscalculia perform poorly in all areas of mathematics, particularly in the processing of numbers and quantities, in basic arithmetic operations, and in the solving of word problems (Haberstroh & Schulte-Korne, 2019).
After taking permission from the school administrations and the students themselves, the survey instrument, a mathematics learning difficulty screening test, was used to collect the data on dyscalculic students studying at basic level. In addition, a mathematics learning difficulty questionnaire was administered to the selected teachers.
Development and Validation of Instruments
In this study, a self-developed Mathematics Learning Difficulty Test (MLDT) questionnaire was used to measure basic level school teachers' knowledge about dyscalculia among basic level students. The MLDT was formed using five factors related to the knowledge dimensions of dyscalculia: meaning and concept of dyscalculia, causes of dyscalculia, characteristics of dyscalculia, effects of dyscalculia, and intervention strategies for dyscalculia. Initially, 25 items, 5 from each factor, were constructed. All items and factors of the questionnaire were reviewed by educational experts and university mathematics teachers to refine the weightage, adequacy and relevance of the items in each factor. The questionnaire was translated into Nepali and then administered to a pilot group of 15 basic level school teachers from outside Ilam Municipality. After the pilot test, some overlapping items were omitted and other unsuitable items were rewritten for the final version. Thus, the final version of the questionnaire consists of 18 items covering all the factors. This final modified version was also reviewed by senior mathematics education researchers of Nepal, and some further modifications were made according to their suggestions. Finally, the factor 'meaning and concept of dyscalculia' consists of two items, while the remaining four factors, 'causes of dyscalculia', 'characteristics of dyscalculia', 'effects of dyscalculia', and 'intervention strategies of dyscalculia', consist of four items each (Table 1). The questionnaire consisted of two parts. The first part covered demographic variables, namely gender, training, and the teaching experience of the teacher. The second part consisted of the 18 items relating to the 5 different factors about dyscalculia. All items in the questionnaire used a 3-point Likert scale: (3) adequate, (2) moderate, (1) inadequate. The validity of the questionnaire was established through the consultation and review of experts in the related fields.
Cronbach's alpha was also calculated to determine the reliability of the MLDT and was found to be 0.82. The factor-wise alpha values are also given in Table 2. The reliability of the instrument was judged sufficient because the alpha value was well above 0.60 (Nunnally, 1967), the minimum requirement. This indicates that the instrument could be used for the survey. A higher score shows higher teacher knowledge of dyscalculia and vice versa. Table 2 shows the reliability of the factors related to the knowledge dimension of dyscalculia.
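For reference, Cronbach's alpha can be computed from raw item scores as follows; this is the standard formula, not the authors' statistical output:

```python
import numpy as np

def cronbach_alpha(scores):
    """Internal consistency of a scale; `scores` is an
    (n_respondents, n_items) matrix of item ratings."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1.0) * (1.0 - item_vars / total_var)
```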
Similarly, to find out the number of dyscalculic students studying at basic level, a well-tested instrument is required. Dyscalculic students can be screened using different types of tests, such as computer-assisted instruments, quantitative surveys, and qualitative survey instruments. Among them, computer-assisted tests can be used as a self-assessment tool by the students themselves, and they are easier and more effective for timing responses as well as visualizing pictures and symbols. However, adopting a computer-assisted instrument to screen basic level students for dyscalculia throughout the country is beyond current capabilities due to a lack of essential resources, technology, and operating skills. Hence, a student dyscalculia screening test questionnaire was developed by the investigator to measure the number of dyscalculic students at basic level. This Dyscalculia Screening Test (DST) questionnaire was based on the five factors related to the knowledge dimensions of dyscalculia (Table 3).
The test items of the DST were constructed considering the five factors given in Table 3 (among them items 11, 12, 13, 14, 15, 16, 17 & 20). The test items in the DST were designed to assess dot enumeration, number comparison, computational skills, mathematics facts and operations, quantitative reasoning, problem solving, and visual-spatial and symbolic abstraction. In the initial stage, the questionnaire was constructed with 32 items. Of these, 23 items were of multiple-choice type and 9 were of a close-ended type related to drawing and writing. For the establishment of validity and reliability, the questionnaire was piloted with a group of 24 students studying in grades five and six in Ilam Municipality. The difficulty level and discrimination index were also maintained using item analysis of the multiple-choice items (see the sketch after this paragraph). The DST is also a kind of speed test, so the average time to complete the questionnaire was measured while piloting the test, and the test administration time was fixed at 30 minutes. This conforms to Butterworth (2005): when screening dyscalculic students aged 10-14 years, the administration time is 15-30 minutes. The questionnaire was also reviewed by senior high school mathematics teachers and university mathematics professors, and finally 6 weak and overlapping items were rejected from the questionnaire. Thus, the final questionnaire consists of 26 items, of which 17 are multiple-choice and 9 are related to drawing and writing, across the 5 factors given in Table 4. The DST carries 32 marks in total. Similarly, Cronbach's alpha was calculated, and the five factor-wise internal consistencies were found to be positive (Table 4). Thus, the questionnaire underwent a process of standardization, and content validity was established in consultation with subject experts.
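The item analysis mentioned above can be sketched as follows; the upper/lower 27% grouping is one common convention, since the study does not specify its exact method:

```python
import numpy as np

def item_analysis(responses):
    """Classical item analysis for 0/1 multiple-choice data of shape
    (n_students, n_items): difficulty = proportion correct per item;
    discrimination = difference in item means between the top and
    bottom 27% of students ranked by total score."""
    responses = np.asarray(responses, dtype=float)
    order = np.argsort(responses.sum(axis=1))
    n = max(1, int(round(0.27 * len(order))))
    difficulty = responses.mean(axis=0)
    discrimination = (responses[order[-n:]].mean(axis=0)
                      - responses[order[:n]].mean(axis=0))
    return difficulty, discrimination
```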
Data Analysis Procedures
In this section, the data obtained from the quantitative survey are analyzed using descriptive as well as inferential statistics. The descriptive statistics include percentages, means, and standard deviations; the inferential statistics include the Chi-square test. The collected data were analyzed using SPSS Version 22. Frequency and percentage distributions were used to determine the teachers' level of knowledge and the students' dyscalculia screening test scores. The Chi-square test was used to examine the association between basic level school teachers' knowledge and demographic variables such as gender, school type, educational qualification, and teaching experience.
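A minimal example of the Chi-square test of association used here, with illustrative counts only (not the study's data):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: knowledge level (rows: poor, average/good)
# by teaching experience (columns: <= 5 years, > 5 years).
table = np.array([[45, 25],
                  [35, 45]])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")
# p < 0.05 -> reject the null hypothesis of no association
```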
Teachers' Knowledge Scores about Dyscalculic Students
The teachers' knowledge scores about dyscalculic students in the given five knowledge domains are shown in Table 5. This table describes the maximum score of each domain, the level of knowledge, and the mean and standard deviation for the overall and domain-wise aspects of knowledge. The maximum score of the overall knowledge domain was 54, with a mean score of 19.82 and SD of 5.38. The majority of the teachers' domain-wise ratings, 406 (54.13%), were found to be at the level of average knowledge. A majority of teachers, 101 (67.33%), 90 (60%), 89 (59.33%) and 78 (52%), had an average level of knowledge about the meaning and concept of dyscalculia, the characteristics of dyscalculia, the effects of dyscalculia, and intervention strategies for dyscalculia, respectively. However, the majority of teachers, 92 (61.33%), were found to have a poor level of knowledge in the domain of causes of dyscalculia. In no domain did the basic level teachers as a group reach a good level of knowledge, and only 129 (17.19%) of the ratings were found at a good level in the overall knowledge domain. Thus, the null hypothesis that the teachers of basic level have a low level of knowledge about dyscalculia is accepted. The scenario of teachers' knowledge about dyscalculic children at basic level is very poor with regard to the knowledge domains and levels of knowledge. The overall knowledge in 215 (28.66%) of the basic level teachers' ratings was found at the poor level of knowledge about dyscalculia. The overall share of ratings at the average level of knowledge, 406 (54.13%), confirmed that the basic level teachers had average knowledge about dyscalculia, in line with the results of Kamala & Ramganesh (2013). The present study shows that a majority of teachers do not have a good level of knowledge regarding dyscalculia. These findings are consistent with the earlier research by Ghimere (2017), which explained that a majority, 79 (52.67%), of primary school teachers had moderately adequate knowledge and 71 (47.33%) had inadequate knowledge regarding learning difficulty or dyscalculia. A study reported that 5.5 percent of primary school students in Malaysia suffered from dyscalculia (Wong et al., 2014), as determined by a computer-assisted screener, whereas Emerson, Babtie and Butterworth (2010) and Thompson (2017) found 5 percent dyscalculic children. Similarly, as stated by Fu Sai and Chin Kin (2017), teachers in Malaysia have a low level of awareness of dyscalculia: 57.5% of the teachers did not know what dyscalculia actually is and had limited knowledge of its characteristics, with the topic of dyscalculia rarely being discussed in their teaching field. Similar findings were reported by Dias et al. (2013), in that the participating educators had very little specific knowledge of dyscalculia. The findings of Shari and Vranda (2016) and Karasakal (2018) also affirmed that teachers lacked awareness about dyscalculia. Consequently, support for mathematics teachers regarding their gaps in required subject matter knowledge, and support with the important resources needed to provide effective instruction to students with mathematical learning disabilities, is essential (Graves, 2018). Table 6 presents the outcome of the Chi-square analysis of the association between the knowledge of basic level teachers and their demographic variables. By gender, the Chi-square test, χ² = 1.83 and p = 0.176 at the 0.05 level of significance (p > 0.05), did not reveal a statistically significant difference in knowledge about dyscalculic students.
Similarly, the Chi-square tests on school type, χ² = 0.37 and p = 0.541 (p > 0.05), and on educational qualification, χ² = 0.11 and p = 0.734 (p > 0.05), show that the differences among teachers by school type and educational qualification, respectively, were not statistically significant at the 0.05 level. Thus, the null hypothesis that the demographic variables of gender, school type, and educational qualification of basic level school teachers have no significant effect on their level of knowledge about dyscalculia is accepted; there is no difference in the teachers' level of knowledge about dyscalculia due to these variables. However, for teaching experience above 5 years, the Chi-square test, χ² = 5.99 and p = 0.014 (p < 0.05), was found significant. Thus, for this variable the null hypothesis is rejected, and a significant difference in the level of knowledge about dyscalculia is determined. Hence, teaching experience has a significant effect on teachers' knowledge about dyscalculia. The results showed that deeper knowledge about dyscalculia is found among more experienced teachers.
Teachers' Knowledge with Demographic Variables (Gender, School Type, Educational Qualification, and Teaching Experiences)
The findings on the knowledge of basic level teachers about dyscalculia in relation to the demographic variables of gender, school type, and educational qualification are consistent with the findings of the previous research of Lingeswaran (2013) and Ghimere (2017) on the association between primary school teachers' knowledge about learning disabilities and their demographic variables of gender, educational qualification, school type, and teaching experience, as judged by their p-values. The findings also conform to (2016) and Wong et al. (2016) in that the effect of dyscalculia was found equally in either gender, while the findings of the study were contrary to the results of Alahmadi and El Keshky (2018).
Students' Dyscalculia Screening Test Scores
To analyze the status of dyscalculic students, the DST scores of basic level students were arranged as a continuous series with intervals of 8 score units. The total scores were divided into four intervals based on 25-percentile ranges. As Chinn (2015) stated, low achievement in mathematics is usually taken to be an achievement level below the 25th percentile. Thus, scores below the 25th percentile constitute the low achievement level, those between the 25th and 75th percentiles constitute the average or moderate achievement level, and scores above the 75th percentile are considered the high achievement level. In this study, a score below the 25th percentile is used as the benchmark for categorizing dyscalculic students. Table 7 shows the distribution of the grade V and VI students' scores on the dyscalculia screening test conducted by the investigator.
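One plausible reading of these cut-offs, as percentiles of the score distribution, can be sketched as follows:

```python
import numpy as np

def categorize(scores):
    """Label DST scores by the 25th/75th percentile cut-offs described
    above; treating the cut-offs as percentiles of the observed score
    distribution is an interpretation, not the study's exact procedure."""
    scores = np.asarray(scores, dtype=float)
    q25, q75 = np.percentile(scores, [25, 75])
    return np.where(scores < q25, "low (screened as dyscalculic)",
                    np.where(scores > q75, "high", "average"))
```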
The students' dyscalculia screening test scores in Table 7 show that 25.6% of students were at the high score level as defined by the investigator. Likewise, 67.6% of students were at the average score level and 6.8% of students were at the low score level on the dyscalculia screening test. These scores indicate that the majority of students are at the average performance level. Similarly, nearly one fourth of the students are at the high level, and the smallest group of students (6.8%) is at the low level on the dyscalculia screening test. These students with a low level of performance are categorized as dyscalculic students. Thus, the null hypothesis that the number of dyscalculic students studying at basic level is high is rejected; it is not high with regard to international researchers' assertions.
The result of this study is consistent with the result (6.67%) of Adhikari (2014). It is also similar to most international findings about dyscalculic students: the prevalence range of dyscalculia lies between 3-6 percent (Kucian and Von Aster, 2015), and Hornigold (2015) and Hudson and English (2016) state that around 6% of the population has dyscalculia, with boys and girls affected equally. Sharma (2020) states that the occurrence of specific learning difficulty (dyscalculia) in the population of school-age children is about 6-8 percent, which also conforms to Ardilla and Roselli (2002). However, a recent study of primary level students conducted in India revealed that 9% of students had dyscalculia (Jeya & Albina, 2019). Considering these findings, it can be concluded that the proportion of dyscalculic learners lies in the range of 3-9 percent. Thus, the result on the students' status regarding dyscalculia (6.8%) lies within the range discussed above.
CONCLUSIONS
Mathematics is a cumulative subject consisting of many different branches; if one aspect has not been understood properly, this can affect other areas. Mathematics is considered a difficult subject due to its intrinsic qualities, its seemingly abstract nature, and learners' weak mathematical backgrounds and attitudes towards mathematics. Such difficulties can be overcome with proper extra support and effective intervention, and this type of difficulty in mathematics does not necessarily mean dyscalculia. Dyscalculia is a specific learning difficulty affecting a person's mathematical ability throughout life. It is much more deeply rooted than simple mathematical weakness, and affected learners show persistent and extreme difficulty in mathematics. The dyscalculic learner can achieve success through individualized and intensive learning strategies that enable individuals to achieve at their ability level. In the context of Nepal, there is some research relating to learning disabilities; however, it does not focus on dyscalculia. Thus, there is an opportunity for the teachers of Nepal to play an important role in detecting and assisting dyscalculic students and providing properly differentiated learning strategies that overcome their learning difficulties and help the students to enjoy learning mathematics rather than suffer.
The study of teachers' knowledge about dyscalculic students across five knowledge domains, or factors, revealed that the majority of the teachers were at an average level of knowledge. Unfortunately, a very low number of teachers were at a good level of knowledge in the overall knowledge domain. This shows that most students suffering from dyscalculia are not getting help from their teachers, due to the teachers' lack of knowledge about dyscalculic students. The study also found that there is no association between the demographic variables of gender, school type, and educational qualification and basic level teachers' knowledge about dyscalculia. However, teaching experience was found to be a good predictor of teachers' knowledge of dyscalculia, indicating that more experienced teachers have more knowledge about dyscalculia. This also suggests that either most teachers have never attended in-service or pre-service training courses, or the topic of dyscalculia has not been introduced in the training courses. In the same way, teachers may have a poor level of knowledge about dyscalculia because it has not been incorporated in academic courses. The study also concludes that the number of dyscalculic students studying at basic level is in alignment with international assertions. However, it is necessary to address the problems related to dyscalculia, and more attention should be given to providing essential knowledge to the basic level teachers of Nepal, to create proper mathematics learning environments and to enable teachers to help students with dyscalculia to overcome their learning difficulties and make their learning enjoyable, in support of inclusive principles of mathematics education.
In a nutshell, it can be concluded that the state of teachers' knowledge about dyscalculia is alarming. Thus, the concerned authorities need to invest in teacher training on learning difficulties and learning disabilities in order to boost teachers' knowledge and efficacy in identifying possible signals of dyscalculia. Furthermore, topics like dyscalculia, learning disabilities, and other recent knowledge should also be incorporated into the content of academic and training courses.
"year": 2020,
"sha1": "1c54117bb8b37532cb30e389432469a46dd76451",
"oa_license": "CCBY",
"oa_url": "https://www.ejmste.com/download/exploring-teachers-knowledge-and-students-status-about-dyscalculia-at-basic-level-students-in-nepal-8940.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "e733d533da87b903854c1b71f35c5c2a0eed8fd3",
"s2fieldsofstudy": [
"Education",
"Mathematics"
],
"extfieldsofstudy": []
} |
16445910 | pes2o/s2orc | v3-fos-license | N-Glycosylation engineering of plants for the biosynthesis of glycoproteins with bisected and branched complex N-glycans
Glycoengineering is increasingly being recognized as a powerful tool to generate recombinant glycoproteins with a customized N-glycosylation pattern. Here, we demonstrate the modulation of the plant glycosylation pathway toward the formation of human-type bisected and branched complex N-glycans. Glycoengineered Nicotiana benthamiana lacking plant-specific N-glycosylation (i.e. β1,2-xylose and core α1,3-fucose) was used to transiently express human erythropoietin (hEPO) and human transferrin (hTF) together with modified versions of human β1,4-mannosyl-β1,4-N-acetylglucosaminyltransferase (GnTIII), α1,3-mannosyl-β1,4-N-acetylglucosaminyltransferase (GnTIV) and α1,6-mannosyl-β1,6-N-acetylglucosaminyltransferase (GnTV). hEPO was expressed as a fusion to the IgG-Fc domain (EPO-Fc) and purified via protein A affinity chromatography. Recombinant hTF was isolated from the intercellular fluid of infiltrated plant leaves. Mass spectrometry-based N-glycan analysis of hEPO and hTF revealed the quantitative formation of bisected (GnGnbi) and tri- as well as tetraantennary complex N-glycans (Gn[GnGn], [GnGn]Gn and [GnGn][GnGn]). Co-expression of GnTIII together with GnTIV and GnTV resulted in the efficient generation of bisected tetraantennary complex N-glycans. Our results show the generation of recombinant proteins with human-type N-glycosylation at great uniformity. The strategy described here provides a robust and straightforward method for producing mammalian-type N-linked glycans of defined structures on recombinant glycoproteins, which can advance glycoprotein research and accelerate the development of protein-based therapeutics.
Introduction
The attachment of a bisecting GlcNAc residue and the formation of tri- and tetraantennary complex N-glycans by N-acetylglucosaminyltransferase III, IV and V are common extensions of oligosaccharides on mammalian glycoproteins. The branched structures are associated with various biological functions including cancer metastasis (reviewed by Zhao et al. 2008) and regulation of T-cell activation (Demetriou et al. 2001). In particular, branching increases the number of polylactosamine (Galβ1,4-GlcNAc-) structures on N-glycans, which are the ligands for galectins, resulting in the formation of specific lattices with glycoproteins (Lau and Dennis 2008). In addition, galactosylated tri- and tetraantennary structures can be further elongated by terminal α2,6- or α2,3-linked sialic acid. The impact of these branched sialylated N-glycans on protein function has been impressively shown for one of the most prominent biopharmaceutical products, recombinant human erythropoietin (hEPO). hEPO is a glycoprotein hormone with three potential N-glycosylation sites. Structural analysis of recombinant hEPO produced in Chinese hamster ovary (CHO) cells exhibited a number of different sialylated structures; tetraantennary structures represent the major glycoforms (Hokke et al. 1995; Pabst et al. 2007). From a biological point of view, a high content of tetraantennary sialylated oligosaccharide chains is important since there is a positive correlation between the in vivo activity of recombinant EPO and the ratio of tetra- to diantennary oligosaccharides (Takeuchi et al. 1989; Yuen et al. 2003).
Glycoengineering of target proteins and host cells has proven to be a powerful tool for the generation of therapeutically relevant proteins with proper glycosylation (recently reviewed by Rich and Withers 2009). Some of these tailored glycoproteins, including hEPO, exhibit enhanced in vivo activities (Umaña et al. 1999; Egrie et al. 2003; Jeong et al. 2009). Engineering of the N-glycosylation pathway of various (putative) expression hosts led to remarkable success and resulted in the reconstruction of entire biosynthetic pathways (Aumiller et al. 2003; Hamilton et al. 2006; Castilho et al. 2010). However, comparatively few attempts have so far been made to increase the branching of complex N-glycans.
Overexpression of the mammalian β1,4-N-acetylglucosaminyltransferase III (GnTIII) in different expression hosts (i.e. insect cells, CHO cells, tobacco plants) resulted in the generation of various glycoforms carrying a bisecting GlcNAc accompanied by an overall heterogeneous N-glycosylation profile (Umaña et al. 1999; Rouwendal et al. 2007; Okada et al. 2010). The unwanted microheterogeneity of N-glycosylation is a serious limitation of these approaches and indicates the importance of precise subcellular targeting of recombinantly expressed glycosyltransferases for efficient N-glycan processing. For example, targeting of GnTIII to early instead of medial/late Golgi compartments resulted in an increase in incompletely processed bisected hybrid-type structures in CHO cells and in plants (Ferrara et al. 2006; Frey et al. 2009; Karg et al. 2010). Comparable studies that demonstrate the successful overexpression of N-acetylglucosaminyltransferase IV (GnTIV) or V (GnTV) in hosts suitable for the production of recombinant glycoproteins have not been described so far.
Plants are increasingly being recognized as an alternative expression platform for the production of complex therapeutically relevant proteins. Plants are able to carry out posttranslational modifications like N-glycosylation, and the recent development of plant viral-based expression systems allows the efficient expression of recombinant proteins and a very rapid manufacturing process (Marillonnet et al. 2005; Sainsbury and Lomonossoff 2008; Bendandi et al. 2010). Glycoengineering of whole plants has led to the production of therapeutic glycoproteins with a rather uniform human-like N-glycosylation pattern (Schähs et al. 2007; Strasser et al. 2008, 2009; Castilho et al. 2010). Moreover, monoclonal antibodies expressed in such plants with specifically altered N-glycosylation exhibited enhanced activities (Cox et al. 2006; Schuster et al. 2007; Strasser et al. 2009; Forthal et al. 2010). A crucial achievement in using plants as an expression platform was the generation of mutants that lack plant-specific N-glycosylation, i.e. β1,2-xylosylation and core α1,3-fucosylation. Such plants synthesize human-like N-glycans with two terminal β1,2-linked GlcNAc residues (GnGn structures: GlcNAc2Man3GlcNAc2) at great uniformity (Koprivova et al. 2004; Strasser et al. 2004, 2008; Cox et al. 2006). In all higher eukaryotes, these oligosaccharides are the common core structures for further processing in the Golgi apparatus. Indeed, such GnGn N-glycans served as acceptor substrates for the generation of human-type structures that are normally absent in plants, i.e. terminal β1,4-galactosylation and core α1,6-fucosylation (Strasser et al. 2009; Forthal et al. 2010). In mammals, GnGn is also the preferred acceptor substrate for the formation of branched or bisected N-glycans (Figure 1; Gleeson and Schachter 1983). These structures are not naturally present in plants due to the lack of the respective glycosyltransferases.
In this study, we aimed to modulate plant N-glycosylation toward the generation of bisected, tri- and tetraantennary complex N-glycans. To this end, we overexpressed human GnTIII, GnTIV and GnTV and modified versions thereof in the glycoengineered Nicotiana benthamiana line lacking plant-specific sugar residues (ΔXT/FT; Strasser et al. 2008). Co-expression of the three glycosyltransferases with two model glycoproteins, hEPO and hTF, resulted in the efficient attachment of bisecting GlcNAc residues and the formation of tri- and tetraantennary N-glycans.
Generation of recombinant hEPO and human transferrin
In a recent study, we demonstrated the efficient downregulation of plant-specific glycosylation, i.e. β1,2-xylosylation and core α1,3-fucosylation, in N. benthamiana, a plant species widely used for recombinant protein expression. This was achieved by an RNAi approach, which suppresses the expression of the two respective glycosyltransferases, β1,2-xylosyltransferase (XT) and core α1,3-fucosyltransferase (FT) (ΔXT/FT line; Strasser et al. 2008). In this study, these ΔXT/FT plants, which synthesize mainly human-type GlcNAc2Man3GlcNAc2 (GnGn) structures, were used as expression host. For N-glycan modeling, two reporter glycoproteins were chosen: (i) hEPO, with three N-glycosylation sites that are decorated with substantial fractions of branched sialylated N-glycans, and (ii) human transferrin (hTF), a serum protein with two N-glycans that are highly sialylated but in their native form devoid of any branching (Yamashita et al. 1993; Pabst et al. 2007). The cDNAs encoding the reporter glycoproteins were transiently expressed in N. benthamiana using a potent viral-based expression system (magnICON; Marillonnet et al. 2005). hTF was expressed with a C-terminal strep-tag and hEPO was C-terminally fused to an IgG-Fc domain (EPO-Fc). Previous studies have demonstrated an enhanced stability of such EPO-Fc constructs (Bitonti et al. 2004). Due to a conserved N-glycosylation site within the Fc domain, this polypeptide can serve as an additional glyco-reporter. Both hTF and EPO-Fc were cloned into a tobacco mosaic virus (TMV)-based magnICON vector (Figure 2B), and ΔXT/FT leaves were infiltrated with appropriate agrobacterium strains. Leaves were harvested 4-5 days post infiltration, and recombinant protein expression was monitored by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and immunoblotting. Cellular fractionation revealed hTF to be efficiently secreted to the intercellular fluid (IF). A clear band migrating at 80 kDa, the expected size of glycosylated hTF, was present in the IF (Figure 3A). This band, which is absent in non-infiltrated leaves, reacted with an anti-strep antibody (data not shown). Protein A-purified EPO-Fc was monitored by SDS-PAGE and revealed a band corresponding to the expected size of 55 kDa (Figure 3B). This band reacted with anti-EPO and anti-Fc antibodies (data not shown). An additional 30 kDa band was also present on SDS-PAGE (Figure 3B), which reacted only with the anti-Fc antibody (data not shown). Tryptic digestion and subsequent mass spectrometry (MS) analysis revealed that the 30 kDa protein band corresponds to free Fc, an observation already made upon expression of EPO-Fc in genetically modified chickens (Penno et al. 2010).
N-Glycosylation profile of EPO-Fc and hTF expressed in ΔXT/FT

N-Glycan composition of purified EPO-Fc was determined by liquid chromatography-electrospray ionization-MS (LC-ESI-MS) analysis. MS spectra revealed that all three N-glycosylation sites of EPO are occupied almost exclusively by GnGn structures (Figure 4B and data not shown). This is also the predominant glycoform on the N-glycosylation site of the Fc fragment (Figure 4A). In addition to GnGn, the EPO glycopeptides also displayed minor peaks corresponding to oligomannosidic N-glycans (M8 and M9) and to fucose- and galactose-containing structures (GnGnF, GnAF, GnA). These peaks either arise from a slight leakiness of the silencing of FT (GnGnF) or correspond to Lewis A epitope-containing structures, which have previously been described to be highly abundant on Physcomitrella patens-produced hEPO (Weise et al. 2007). Indeed, purified EPO-Fc reacts on immunoblots with antibodies directed against core α1,3-fucose and Lewis A epitopes (data not shown). For N-glycan analysis of hTF, the band was extracted from the gel and tryptic-digested polypeptides were subjected to LC-ESI-MS. The MS spectra revealed a predominant peak assigned as GnGn structure for both N-glycan sites (Figure 4C and data not shown).
We also investigated whether plant-derived hEPO contains an O-linked oligosaccharide attached to Ser126. MS analysis of peptides derived from the tryptic digestion of purified EPO-Fc did not show any evidence for the presence of O-linked glycan structures (data not shown). This result is consistent with recent studies showing that N. benthamiana lacks the machinery for the formation of mucin-type O-glycosylation (Daskalova et al. 2010).
Generation of binary vectors for expression of glycosyltransferases
To obtain the attachment of bisecting GlcNAc residues and the formation of tri- and tetraantennary N-glycans on the recombinant glycoproteins, the corresponding human enzymes GnTIII, GnTIV and GnTV were transiently expressed using binary vectors. These vectors allow low-to-moderate protein expression, which is usually sufficient to achieve efficient modification of the N-glycosylation pattern (Strasser et al. 2009). As sub-Golgi targeting of recombinantly expressed glycosyltransferases has profound implications for the final glycosylation pattern of the glycoproteins, GnTIII, GnTIV and GnTV were fused to different Golgi targeting signals. The native cytoplasmic tail, transmembrane domain and stem (CTS) region, responsible for their sub-Golgi targeting in mammalian cells, was replaced by different CTS domains derived from plant N-glycan processing enzymes and from the rat α2,6-sialyltransferase (ST), a well-known trans-Golgi targeting sequence in plants (Boevink et al. 1998; Wee et al. 1998; Schoberer et al. 2010; Figure 2A).
Generation of bisecting GlcNAc containing N-glycans on hEPO and hTF
In order to obtain optimal amounts of bisected oligosaccharides and at the same time avoid interference of GnTIII activity with endogenous plant N-glycan processing enzymes, as observed in previous studies, we evaluated different sub-Golgi targeting sequences. The catalytic domain of human GnTIII was fused to the CTS regions from early/medial (Golgi α-mannosidase II, GMII) and medial (Arabidopsis α1,3-fucosyltransferase, FUT11; β1,2-xylosyltransferase, XT) enzymes and from one late-acting enzyme (ST; Figure 2A). The resulting chimeric fusion proteins (GMII-GnTIII, FUT11-GnTIII, XT-GnTIII and ST-GnTIII) were expressed together with EPO-Fc. MS spectra from the recombinantly expressed glycoproteins displayed significant amounts of bisected complex N-glycans (GnGnbi) on the EPO glycopeptides when using ST-GnTIII, FUT11-GnTIII and XT-GnTIII (Figure 5B and C and Supplementary data, Figure S1). GnGnbi was also the predominant peak when ST-GnTIII was co-expressed with hTF. Targeting of GnTIII to an early stage of the biosynthetic pathway (GMII-GnTIII) was less effective in the formation of GnGnbi structures on EPO-Fc. Note that, in some cases, minor amounts of the bisected complex glycoforms present on the EPO glycopeptide, but not on hTF, were also fucosylated (GnGnbiF). In contrast, the GnGn structure of the Fc glycopeptide was only slightly processed toward GnGnbi structures (Figure 5A and Supplementary data, Figure S1).
Generation of triantennary N-glycans on hEPO and hTF
To produce triantennary complex N-glycans, human GnTIV (isozyme A), which adds a GlcNAc residue to the α1,3-mannose in β1,4-linkage (Figure 1), was co-expressed with the reporter proteins, and their N-glycans were analyzed by LC-ESI-MS. Apart from the native form, three different CTS-GnTIV fusions were generated, potentially targeting the enzyme to different sub-Golgi compartments (Figure 2A). The native GnTIV (full-GnTIV) and the ST-GnTIV chimeric fusion proteins did not significantly change the Fc glycosylation profile. However, the constructs were able to modify to some extent the N-glycans from EPO, resulting in a mixture of glycoforms assigned to di- and triantennary complex N-glycans (Supplementary data, Figure S2). On the other hand, co-expression of either XT-GnTIV or FUT11-GnTIV resulted in the formation of high amounts of structures that carry an additional GlcNAc residue on EPO and hTF glycopeptides (Figure 6 and Supplementary data, Figure S2). Even the Fc glycopeptide was found to carry significant amounts of triantennary complex N-glycans (Figure 6A). In addition, some of the minor peaks corresponding to fucosylated oligosaccharides displayed the incorporation of additional GlcNAc residues in both EPO and hTF.
To initiate branching at the α1,6-mannosyl arm of GnGn, human GnTV was transiently expressed together with EPO-Fc and hTF. The catalytic domain of GnTV was fused to the CTS region of FUT11 (FUT11-GnTV) since this targeting sequence resulted in highly efficient formation of triantennary complex N-glycans when fused to GnTIV (Figure 6). Upon FUT11-GnTV expression, the formation of structures corresponding to [GnGn]Gn on EPO and hTF glycopeptides was detected (Figure 7B and C). A small amount of fucosylated [GnGn]Gn was also identified on EPO glycopeptides (Figure 7B). In contrast, the N-glycosylation site on the Fc fragment did not show any peaks corresponding to putative branched complex N-glycans when co-expressed with FUT11-GnTV (Figure 7A).
Generation of tetraantennary N-glycans on hEPO and hTF
The expression of GnTIV and GnTV showed that plant complex N-glycans can be modified toward the formation of triantennary structures. To elongate both arms simultaneously, we co-expressed FUT11-GnTIV and FUT11-GnTV with EPO-Fc or hTF and performed LC-ESI-MS. The glycopeptides from EPO and hTF exhibited a predominant peak corresponding to a tetraantennary complex N-glycan structure ([GnGn][GnGn]) and minor amounts of triantennary structures (GnGnGni) as well as GnGn (Figure 8B and C). Consistent with the previous data for the expression of FUT11-GnTV, the Fc glycopeptide carried mainly GnGn structures, accompanied by minor fractions of oligomannosidic structures. In order to evaluate whether the three mammalian glycosyltransferases can act in a synchronized mode, we co-expressed FUT11-GnTIV and FUT11-GnTV with different GnTIII constructs (GMII-GnTIII, XT-GnTIII, FUT11-GnTIII and ST-GnTIII). Co-expression of ST-GnTIII, which targets GnTIII to a late Golgi compartment, resulted in the formation of significant amounts of complex tetraantennary glycans also carrying a bisecting GlcNAc ([GnGn][GnGn]bi) on EPO and hTF (Figure 9). This oligosaccharide structure was also identified on the EPO glycopeptides when GnTIII was fused to a targeting signal for medial-Golgi localization (XT-GnTIII and FUT11-GnTIII) but was not detected when GMII-GnTIII was co-expressed with FUT11-GnTIV and FUT11-GnTV (Supplementary data, Figure S3).
Structural identification of bisected and triantennary complex N-glycans
The transfer of a GlcNAc residue causes an approximately 203 Da mass shift of the respective peaks in MS spectra. We used the hTF samples derived from co-expression with GnTIII, GnTIV and GnTV (Figures 5C, 6C and 7C) to identify the newly generated peaks by co-elution with known standards using chromatography on porous graphitic carbon with detection by ESI-MS (Pabst et al. 2007; Stadlmann et al. 2008). The peaks derived from the different GlcNAc modification reactions co-eluted with the respective standard peaks (Figure 10), confirming the successful generation of bisected and triantennary complex N-glycans in glycoengineered ΔXT/FT plants.
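The 203 Da shift and the ion masses quoted in the Figure 10 legend can be checked from standard monoisotopic residue masses. The following is a minimal sketch, not part of the authors' pipeline, assuming free, reduced (alditol) N-glycans and singly or doubly protonated ions; small rounding differences from the quoted values are expected.

```python
# Check the GlcNAc mass shift and the reduced-glycan ion masses quoted in
# the Figure 10 legend from standard monoisotopic residue masses (Da).

HEX, HEXNAC = 162.0528, 203.0794   # hexose / N-acetylhexosamine residues
WATER, PROTON = 18.0106, 1.00728
REDUCTION = 2.0157                 # reduction to the alditol adds 2 H

def reduced_glycan_mass(n_hex: int, n_hexnac: int) -> float:
    """Neutral mass of a free, reduced (alditol) N-glycan."""
    return n_hex * HEX + n_hexnac * HEXNAC + WATER + REDUCTION

mm  = reduced_glycan_mass(3, 2)   # Man3GlcNAc2 core ("MM"), 0 extra GlcNAc
two = reduced_glycan_mass(3, 4)   # +2 GlcNAc (e.g. GnGn)
tri = reduced_glycan_mass(3, 5)   # +3 GlcNAc (e.g. GnGnbi or [GnGn]Gn)

print(f"GlcNAc shift:   {HEXNAC:.1f} Da")           # ~203.1
print(f"[MM + H]+:      {mm + PROTON:.1f}")         # ~913.4
print(f"[+2Gn + H]+:    {two + PROTON:.1f}")        # ~1319.5 (legend: 1319.4)
print(f"[+3Gn + 2H]2+:  {(tri + 2*PROTON)/2:.1f}")  # ~761.8
```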
Discussion
In this study, we show the efficient formation of GnGn, tri- and tetraantennary N-glycans and/or the incorporation of a bisecting GlcNAc on two glycoproteins, hEPO and hTF. This was achieved by the overexpression of the respective mammalian enzymes in the expression host ΔXT/FT, a glycosylation mutant with a targeted down-regulation of XT and FT expression (Strasser et al. 2008). The expression of EPO-Fc and hTF in ΔXT/FT without additional mammalian glycosyltransferases resulted in the formation of virtually exclusively GnGn structures on all glycosylation sites, particularly on hTF, which confirms the versatile utility of this N-glycosylation mutant as a potential expression host for recombinant glycoproteins. The GnGn glycoform constitutes the pivotal intermediate for the formation of complex-type N-glycans in all higher eukaryotes. Upon the expression of mammalian β1,4-galactosyltransferase (GalT) and core α1,6-fucosyltransferase in ΔXT/FT, we could show the efficient generation of β1,4-galactosylated and core α1,6-fucosylated N-glycans on recombinantly expressed IgG (Strasser et al. 2008; Forthal et al. 2010). Here, we extended our efforts in humanizing the plant N-glycosylation pathway toward the generation of bisected and branched complex N-glycans, which are usually not synthesized in plants.
Previous studies have shown that the expression of GnTIII in wild-type plants resulted in the attachment of a bisecting GlcNAc (Rouwendal et al. 2007; Frey et al. 2009; Karg et al. 2010). However, as the native human GnTIII is very likely targeted to a medial-Golgi compartment overlapping mainly with FT, but also with N-acetylglucosaminyltransferase II (GnTII) and XT, different incompletely processed glycoforms were generated (Rouwendal et al. 2007; Frey et al. 2009). This illustrates that the presence of a bisecting GlcNAc blocks further processing of N-glycans in plants just as in mammalian cells (Schachter 1986). Consistent with this finding, we found high levels of GlcNAc2Man3GlcNAc2 peaks in addition to significant amounts of bisected complex N-glycans upon the expression of GnTIII targeted to an early sub-Golgi compartment (GMII-GnTIII). This result indicates that either GnTIII is not very active when fused to the GMII-CTS region or the bisecting GlcNAc is transferred to GlcNAc1Man3GlcNAc2, which blocks further processing by GnTII.
The importance of proper sub-Golgi targeting of glycosyltransferases for appropriate N-glycan modification in plants has also been emphasized in other studies. Interference with the endogenous plant N-glycan processing pathway resulted in the generation of aberrant structures. For example, when native human GalT was expressed in plants, galactosylated and incompletely processed N-glycans were generated (Palacpac et al. 1999; Bakker et al. 2001), and a CTS-GalT fusion that directed the enzyme to an early stage of the pathway led to an increase in incompletely processed N-glycans (Bakker et al. 2006). However, targeting GalT to a late stage of the pathway using the rat ST-CTS resulted in the generation of fully processed β1,4-galactosylated diantennary N-glycans (Strasser et al. 2009). Here, we demonstrate that GnTIV generates triantennary structures particularly efficiently upon targeting the enzyme to medial-Golgi compartments using FUT11- and XT-CTS sequences. Such oligosaccharides were synthesized at reduced levels when the full-length version of the human enzyme was used, indicating improper subcellular targeting of the native human GnTIV in ΔXT/FT plants.
Apart from the generation of tri- and tetraantennary complex N-glycans on EPO, we could also generate these structures on a glycoprotein (hTF) that normally does not contain branched oligosaccharides (Pabst et al. 2007), showing that different glycoproteins can be furnished with novel non-native N-glycan structures. However, the oligosaccharide of the Fc domain in the EPO-Fc fusion was not very efficiently branched or modified with a bisecting GlcNAc residue. In addition, we show the efficient generation of bisected tetraantennary oligosaccharides, which are not commonly found on mammalian glycoproteins. This is achieved by the sequential transfer of GlcNAc residues, with GnTIII acting at the final stage in order to prevent blocking of GnTIV and GnTV once the bisecting GlcNAc is added. Our results may serve as an example that fine-tuning of the intracellular targeting of glycosyltransferases facilitates the generation of naturally rare structures.
In this study, we describe a robust and straightforward method for producing glycoproteins with a tailor-made mammalian-type N-glycosylation pattern in plants at great homogeneity. Notably, due to the diverse endogenous N-glycosylation repertoire, such a homogeneous N-glycosylation pattern can hardly be achieved by any mammalian cell-based expression system. Glycoengineered plant-made glycoproteins allow the impact of different glycoforms on glycoprotein function to be analyzed in more detail and may advance the development of glycoprotein-based therapeutics. We have recently demonstrated the efficient in planta formation of human-type α2,6-sialylated N-glycans on recombinant proteins (Castilho et al. 2010). Together with the results described here, it appears possible to use glycoengineered plants in the near future for the generation of glycoproteins with branched and sialylated N-glycans, i.e. the structures needed for optimal efficacy of important therapeutic products such as EPO.
Material and methods
Binary vectors for expression of mammalian glycosyltransferases

GnTIII: For differential targeting of the human β1,4-mannosyl-β1,4-N-acetylglucosaminyltransferase (GnTIII), the catalytic domain and part of the putative stem region (comprising amino acids 35-531) were fused to the CTS regions of different enzymes. For late Golgi targeting, the catalytic domain was fused to the CTS region of the rat ST. First, the catalytic domain was polymerase chain reaction (PCR) amplified from cDNA of HepG2 (human hepatocellular liver carcinoma) cells with the primer pair GnTIII F1/R1 (Supplementary data, Table S1). The PCR product was digested with XbaI/BamHI and cloned into the pPT2M binary vector (Strasser et al. 2005). Then, the α2,6-sialyltransferase CTS region (comprising amino acids 1-52) was PCR amplified from plasmid pGA482rST (Wee et al. 1998) with the primer pair ST 1F/R1. The PCR product was digested with XbaI/XhoI and cloned into the plasmid containing the GnTIII catalytic domain. The resulting plant expression vector was named ST-GnTIII (Figure 2). The other GnTIII expression vectors were generated as described in Supplementary data, Methods.
GnTV: To amplify the fragment encoding part of the stem region and the catalytic domain of human α1,6-mannosyl-β1,6-N-acetylglucosaminyltransferase (GnTV), total RNA was isolated from baculovirus-infected Spodoptera frugiperda Sf21 cells heterologously expressing a secreted human GnTV form (kindly provided by Lukas Mach) using the SV Total RNA Isolation System (Promega, Madison, WI). Reverse transcriptase-PCR was performed with the clone-specific primers pVTBacHis 1/2, and the cDNA was subcloned into the pCR4 Blunt-TOPO vector (Invitrogen). To assemble the FUT11-GnTV fusion construct, the pFUT11 plasmid was used as a template to amplify the FUT11-CTS region with the primers FUT11 F1/FUT11-GnTV R1. In parallel, the GnTV fragment (comprising amino acids 31-741) was amplified from the pCR4 Blunt-TOPO clone using the primers FUT11-GnTV F1/GnTV R2. The two overlapping amplification products were mixed and used as template in a third PCR using the primers FUT11 F1/GnTV R2. The assembled product was digested with XbaI/XhoI and ligated into pPT2M. All binary vectors were transformed into Agrobacterium tumefaciens strain UIA 143.
MagnICON-based constructs for overexpression of glycoproteins

cDNA of the glycoproteins was cloned into the magnICON TMV-based module vector (TMV3': pICH21595, Bayer BioScience NV Research, Ghent, Belgium) containing two BsaI sites designed for directional cloning of the target gene (Marillonnet et al. 2004, 2005; Giritch et al. 2006; Figure 2). The TMV5'α (pICH20999) module includes the signal peptide (SP) from the barley α-amylase sequence to target proteins to the secretory pathway. A binary vector (pICH14011) expressing the recombinase was used to allow in planta assembly of the two virus modules.
The EPO-Fc fusion was obtained by overlap extension PCR as follows: the cDNA encoding human IgG-Fc (amino acids 20-243) was amplified from clone pCEP4-Fc (f-star GmbH, Vienna, Austria) with primers Fc-EPO/Fc R1. EPO cDNA lacking the sequence that encodes the SP (amino acids 28-194) was amplified from the Ultimate ORF clone (IOH44362, Invitrogen, Carlsbad, CA) with primers EPO F5/EPO-Fc. The overlapping PCR products were used as a template for a third PCR to assemble the EPO-Fc fusion with primers EPO F5/Fc R1. The PCR product was digested with BsaI and ligated into the similarly digested TMV3' vector.
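The three-step assembly just described, two primary PCRs with complementary tails followed by a fusion PCR, can be illustrated in silico. The sketch below is purely illustrative: the sequences are hypothetical placeholders rather than the actual EPO or Fc amplicons, and the function simply mimics how the polymerase extends across the annealed overlap.

```python
# Illustrative sketch of overlap-extension assembly: two amplicons sharing
# a complementary junction are fused the way the polymerase extends across
# the annealed overlap in the third PCR. Sequences are hypothetical.

def overlap_extend(frag_a: str, frag_b: str, min_overlap: int = 12) -> str:
    """Fuse frag_a and frag_b at the longest shared terminal overlap."""
    for k in range(min(len(frag_a), len(frag_b)), min_overlap - 1, -1):
        if frag_a.endswith(frag_b[:k]):  # 3' end of A anneals to 5' end of B
            return frag_a + frag_b[k:]
    raise ValueError("no overlap of sufficient length found")

epo_amplicon = "ATGGCACCTCCTCGCCTGATCTGCGACAGCAGAGTG"  # placeholder sequence
fc_amplicon  = "GACAGCAGAGTGCTGGAAAGATACCTGCTGGAAGCC"  # placeholder sequence

fusion = overlap_extend(epo_amplicon, fc_amplicon)
print(fusion)  # the two fragments joined once across the 12-nt overlap
```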
The cDNA encoding hTF without the SP (amino acids 21-697) was amplified from clone IRATp970E0766D (ImaGenes, Berlin, Germany) with the primer pair hTF F5/R5, containing the sequence for the Strep-tag at the C-terminus, and cloned into the BsaI-digested TMV3' vector. All viral-based vectors were transformed into the A. tumefaciens strain GV3101 pmp90.
Plant material and transient protein expression
Nicotiana benthamiana ΔXT/FT plants (Strasser et al. 2008) were grown in a growth chamber at 22°C with a 16 h light/8 h dark photoperiod. Five- to six-week-old plants were used for agroinfiltration experiments (Strasser et al. 2008; Castilho et al. 2010). To express the reporter proteins (EPO-Fc and hTF), the TMV3' vector containing the respective cDNA was co-infiltrated with the corresponding 5' vector containing the SP and the binary vector containing the recombinase (Marillonnet et al. 2005). Binary vectors containing the cDNA of the mammalian glycosyltransferases were co-infiltrated with the viral-based vectors (OD600 of 0.15-0.2 for all agrobacteria).

EPO-Fc purification

Agroinfiltrated leaves (200-300 mg) were homogenized in liquid nitrogen and resuspended in 600 µL pre-cooled extraction buffer (100 mM Tris-HCl, pH 6.8, 40 mM ascorbic acid, 500 mM NaCl, 1 mM EDTA), incubated on ice for 10 min and subsequently cleared by centrifugation (9000 × g for 20 min at 4°C). The supernatant was incubated for 1.5 h at 4°C with 15-20 µL rProtein A Sepharose™ Fast Flow (GE Healthcare, Uppsala, Sweden) previously washed with 1× phosphate-buffered saline (PBS). After a brief spin-down, the supernatant was discarded and the sepharose was washed three times with 1× PBS using Micro Bio-Spin chromatography columns (Bio-Rad, Hercules, CA). To elute the EPO-Fc fusion protein from the column, 20 µL of 2× Laemmli buffer (125 mM Tris-HCl, pH 6.8, 20% glycerol, 4% SDS, 10% mercaptoethanol, 0.1% bromophenol blue) was applied, incubated for 5 min at 95°C and centrifuged for 1 min (9000 × g). The samples were then directly used for SDS-PAGE.
Isolation of IF
Two to three infiltrated leaves were immersed in buffer solution (100 mM Tris-HCl, pH 7.5, 10 mM MgCl2, 2 mM EDTA) and subjected to vacuum (2 × 5 min). IF was collected by low-speed centrifugation (900 × g for 15 min). The IF was mixed with 2× Laemmli buffer and incubated for 5 min at 95°C prior to SDS-PAGE.
Analysis of glycopeptides

N-Glycan analysis of the reporter proteins was carried out by LC-ESI-MS of tryptic glycopeptides as described previously (Stadlmann et al. 2008; Strasser et al. 2008). Briefly, the SDS-PAGE bands corresponding to the EPO-Fc fusion protein (55 kDa) and hTF (80 kDa) were excised from the gel, S-alkylated, digested with trypsin and subsequently analyzed by LC-ESI-MS. During this procedure, four glycopeptides are generated for the EPO-Fc fusion protein: two glycopeptides for the Fc part, due to incomplete digestion (glycopeptide 1, E293EQYNSTYR301; glycopeptide 2, T289KPREEQYNSTYR301), and another two for EPO, since two of the three N-glycosylation sites (Asn24 and Asn38) are found on the same glycopeptide (glycopeptide 1, E21AENITTGCAEHCSLNENITVPDTK45; glycopeptide 2, G77QALLVNSSQPWEPLQLHVDK97). For hTF, the two glycosylation sites at Asn432 and Asn630 are discriminated by two glycopeptides (glycopeptide 1, C421GLVPVLAENYNKSDNCEDTPEAGYFAVAVVKK453; glycopeptide 2, Q603QQHLFGSNVTDCSGNFCLFR623).
Structural identification of N-glycans
The identification of the GlcNAc linkage to β1,4-, α1,3- and α1,6-mannosyl residues, as found on bisected and triantennary N-glycans, was carried out by chromatography on porous graphitic carbon with detection by ESI-MS (Pabst et al. 2007).
The elution order of free, reduced N-glycans was compared with that of specific standards prepared using glycosidase digests of asialo-EPO, agalacto-EPO and human IgG N-glycans (Pabst et al. 2007; Stadlmann et al. 2008).
Fig. 3. Coomassie blue-stained SDS-PAGE of plant-derived hTF present in the IF of ΔXT/FT leaves infiltrated with the hTF magnICON constructs; the position of hTF is marked by an arrow; (−) negative control: IF collected from leaves infiltrated with magnICON provectors without additional sequences; M, protein marker (A). Protein A-purified EPO-Fc (indicated by an arrow); the bands at position 30 kDa represent free Fc; M, protein marker (B).
Fig. 4. Mass spectra of tryptic glycopeptides of EPO-Fc and hTF expressed in the N. benthamiana ΔXT/FT line. N-Glycosylation profile of the Fc glycopeptide 2 (T289KPREEQYNSTYR301) (A) and the EPO glycopeptide 2 (G77QALLVNSSQPWEPLQLHVDK97) in the EPO-Fc fusion protein (B). N-Glycosylation profile of the hTF glycopeptide 2 (Q603QQHLFGSNVTDCSGNFCLFR623) (C). "i" refers to the presence of putative isoforms of the same mass that cannot be distinguished by MS. Peak labels were made according to the ProGlycAn system (www.proglycan.com).
Fig. 5. N-Glycosylation profiles of EPO-Fc and hTF co-expressed with ST-GnTIII in ΔXT/FT mutants. Fc glycopeptide 2 (T289KPREEQYNSTYR301) (A); EPO glycopeptide 2 (G77QALLVNSSQPWEPLQLHVDK97) in the EPO-Fc fusion protein (B); and hTF glycopeptide 2 (Q603QQHLFGSNVTDCSGNFCLFR623) (C). "i" refers to the presence of putative isoforms of the same mass that cannot be distinguished by MS. The peak assigned as GnGn could also contain a bisected structure lacking one of the two β1,2-linked GlcNAc residues (e.g. MGnbi).
Fig. 10. Isomer assignment of bisected and triantennary complex N-glycans. N-Glycans of hTF generated upon co-expression of GnTIII (A), GnTIV (B) and GnTV (C), respectively, were enzymatically released, reduced and subjected to LC-ESI-MS with a carbon column. The elution position of different N-glycan standards is indicated by arrows. LC-MS data are shown as selected ion chromatograms for glycans with 0, 2 or 3 GlcNAc residues on the non-reducing side, for masses of 913.4, 1319.4 and the doubly charged ion at 761.8, respectively.
EBV-induced T-cell responses in EBV-specific and nonspecific cancers
Epstein-Barr virus (EBV) is a ubiquitous human tumor virus associated with various malignancies, including B-lymphoma, NK and T-lymphoma, and epithelial carcinoma. It infects B lymphocytes and epithelial cells within the oropharynx and establishes persistent infection in memory B cells. With a balanced virus-host interaction, most individuals carry EBV asymptomatically because of the lifelong surveillance by T cell immunity against EBV. A stable anti-EBV T cell repertoire is maintained in memory at high frequency in the blood throughout persistent EBV infection. Patients with impaired T cell immunity are more likely to develop life-threatening lymphoproliferative disorders, highlighting the critical role of T cells in achieving the EBV-host balance. Recent studies reveal that the EBV protein, LMP1, triggers robust T-cell responses against multiple tumor-associated antigens (TAAs) in B cells. Additionally, EBV-specific T cells have been identified in EBV-unrelated cancers, raising questions about their role in antitumor immunity. Herein, we summarize T-cell responses in EBV-related cancers, considering latency patterns, host immune status, and factors like human leukocyte antigen (HLA) susceptibility, which may affect immune outcomes. We discuss EBV-induced TAA-specific T cell responses and explore the potential roles of EBV-specific T cell subsets in tumor microenvironments. We also describe T-cell immunotherapy strategies that harness EBV antigens, ranging from EBV-specific T cells to T cell receptor-engineered T cells. Lastly, we discuss the involvement of γδ T-cells in EBV infection and associated diseases, aiming to elucidate the comprehensive interplay between EBV and T-cell immunity.
Introduction
Epstein-Barr virus (EBV), also known as human herpesvirus 4 (HHV-4), is a highly prevalent γ-herpesvirus that infects an overwhelming 90% of the adult population worldwide (1). Since its discovery in 1964 in a Burkitt lymphoma cell line, extensive research has been conducted to investigate its association with cancer (2). In 2020, EBV-associated cancers accounted for an estimated 239,700 to 357,900 new cases and caused 137,900 to 208,700 deaths globally (3). EBV is considered the primary etiological agent associated with variable fractions of multiple epithelial and lymphoid cancers, including nasopharyngeal carcinoma (NPC), gastric carcinoma (GC), Hodgkin lymphoma (HL), Burkitt lymphoma (BL), diffuse large B-cell lymphoma (DLBCL) and extranodal NK/T-cell lymphoma, nasal type (ENKTL-NT). In addition, EBV reactivation can lead to uncontrolled B-cell proliferation in immunocompromised individuals, including post-transplant lymphoproliferative disease (PTLD) in hematopoietic stem cell transplant (HSCT) or solid organ transplant (SOT) recipients and B-cell lymphoma in AIDS patients (4-6).
Despite its ubiquity, most people remain asymptomatic throughout their lifetime, owing to the potent host immune system, especially its cellular immunity, which keeps the virus at bay. However, when cellular immunity is compromised or dysregulated, the virus can replicate unchecked, leading to EBV-associated B-cell malignancies (7). These malignancies express EBV antigens that T cells can specifically target (8). Over the last two decades, the encouraging outcomes of adoptive cell therapy using EBV-specific T cells in treating PTLD have sparked significant research interest. Many clinical trials have been launched to explore their potential application in treating other EBV-related malignancies (9). Recent studies find that EBV latent membrane protein 1 (LMP1), upon ectopic expression in EBV-unrelated cancers, can upregulate TAAs and induce a robust TAA-specific CD4+ CTL response (10), indicating that beyond its oncogenic implications, EBV also has potential for therapeutic applications in cancer treatment. This review aims to advance our understanding of the roles of T-cell immunity across both EBV-related and EBV-unrelated cancers and provide insights to devise more effective immune-based cancer prevention and treatment strategies.
Biology of EBV and EBV-associated cancers
The transmission of EBV occurs through oral means and involves the infection of epithelial cells of the oropharynx, followed by replication and spread to B cells, which are the major sites of EBV infection in humans. While EBV predominantly targets B lymphocytes and epithelial cells, it can sporadically infect other human cell types, including T cells and natural killer cells, albeit infrequently (11-13). The EBV life cycle is complex and comprises latent and lytic infection. Only nine proteins contributing to B-cell transformation and tumorigenesis are expressed during latent infection. These include six EBV nuclear antigens (EBNA-1, -2, -3A, -3B, -3C, and -LP) and three latent membrane proteins (LMP-1, -2A, and -2B). The latent cycle can be subdivided into four patterns, namely latency III, II, I, and 0, characterized by progressively restricted viral gene expression patterns that evade immune surveillance. Ultimately, EBV establishes persistent residence in memory B cells, characterized by the absence of viral antigen expression (latency 0), thereby evading T-cell recognition and acting as a viral reservoir. The latent-lytic switch is a particularly significant event in the EBV life cycle, but its mechanism remains elusive. EBV can transition to the lytic cycle periodically, resulting in viral replication, shedding, and subsequent transmission (8, 11, 12, 14).
During lytic infection, EBV expresses more than 80 lytic proteins that facilitate the generation of new viral particles (8). The viral lytic cycle is divided into three temporal and functional stages: immediate early (IE), early (E), and late (L). IE gene products are transcription factors in charge of turning on the cascade of lytic gene expression. Among these proteins, the immediate early proteins BZLF-1 and BRLF-1 act as triggers of the EBV lytic cycle (15). E genes encode enzymes with DNA replication functions, and L genes mostly encode viral structural proteins.
Several lytic genes are expressed to some extent during latent states. For instance, BHRF1, commonly associated with the viral lytic cycle, remains constitutively expressed as a latent protein in vitro within growth-transformed cells and might contribute to virus-associated lymphomagenesis in Wp-restricted BL (16). Additionally, BALF1, expressed with early kinetics during the lytic cycle, is found in latently infected epithelial and B cells (15). While dispensable for lytic replication and B-cell transformation, BALF1 might facilitate efficient transformation, potentially in vivo (15).
Under specific circumstances (17), such as immunosuppression like HIV or immunosuppressive therapy (18), concurrent infections such as CMV, HPV, or coronavirus (19, 20), disruptions in cellular equilibrium like hypoxia (21), or psychological stressors like familial and socio-economic instability (22), EBV can switch from latency to lytic infection, termed viral reactivation, contributing to the dissemination of the virus and its potential to cause various diseases and complications.
In EBV-associated cancers, latent EBV proteins are crucial for tumor pathogenesis, and their expression can classify tumors into distinct categories (Figure 1). In type III latency cancers, cells infected with EBV express the full array of latent proteins, including six EBV nuclear antigens (EBNA1, 2, 3A, 3B, 3C, LP), two latent membrane proteins (LMP1, 2), BamHI-A rightward frame 1 (BARF1), several small noncoding RNAs, various micro-RNAs, and EBV-encoded small RNAs. All EBNA3 family proteins are highly immunogenic and can be effectively targeted and cleared by T cells in immunocompetent individuals (8, 23, 24). Consequently, type III latency malignancies are primarily seen in individuals with innate or acquired immunodeficiency, such as PTLD in HSCT or SOT recipients and B-cell lymphoma in AIDS patients. Type III latency can also be seen in EBV-transformed B-cell lymphoblastoid cell lines (LCLs) cultured in vitro.
Type II latency tumors mainly include NPC, GC, some cases of HL, and ENKTL. These tumors express EBNA1, LMP1, LMP2, and BARF1 and have intermediate immunogenicity.
Type I latency, marked by sole EBNA1 expression, is seen in BL and exhibits constrained immunogenicity.
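The latency programs outlined above lend themselves to a compact lookup. The sketch below restates the classification as a Python mapping for illustration only; the gene lists and disease examples are drawn from the preceding paragraphs rather than from an exhaustive reference.

```python
# Illustrative restatement of the EBV latency programs described above;
# gene lists and disease examples are taken from the text.

LATENCY_PROGRAMS = {
    "III": {"genes": ["EBNA1", "EBNA2", "EBNA3A", "EBNA3B", "EBNA3C",
                      "EBNA-LP", "LMP1", "LMP2", "BARF1",
                      "noncoding RNAs/miRNAs/EBERs"],
            "examples": ["PTLD", "AIDS-associated B-cell lymphoma", "LCLs"],
            "immunogenicity": "high"},
    "II":  {"genes": ["EBNA1", "LMP1", "LMP2", "BARF1"],
            "examples": ["NPC", "GC", "some HL", "ENKTL"],
            "immunogenicity": "intermediate"},
    "I":   {"genes": ["EBNA1"],
            "examples": ["BL"],
            "immunogenicity": "limited"},
    "0":   {"genes": [],
            "examples": ["memory B cells (viral reservoir)"],
            "immunogenicity": "none (no viral antigen expression)"},
}

def t_cell_targets(latency: str) -> list:
    """Latent antigens available for T-cell recognition in a program."""
    return LATENCY_PROGRAMS[latency]["genes"]

print(t_cell_targets("II"))  # ['EBNA1', 'LMP1', 'LMP2', 'BARF1']
```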
Apart from latent antigens, some lytic cycle transcripts are also found in certain tumors, encoding molecules known to contribute to tumor growth (25). Among these transcripts, BZLF1 and BRLF1 are the IE transcription factors that master-regulate EBV reactivation and lytic gene expression. Notably, the expression of some immediate early genes, such as BZLF1, in the absence of other lytic genes, particularly those encoding late structural proteins, thereby precluding the formation of infectious viral particles, is termed the abortive lytic cycle. Specifically, it is known as the pre-latent abortive lytic cycle when it occurs just after infection. The abortive lytic cycle has been well documented in pre-latent cells (26-30) and established tumors (31-34). Furthermore, evidence derived from mouse models (35, 36) supports the notion that the abortive lytic cycle facilitates cell-to-cell viral dissemination and contributes to virus-induced tumorigenesis.
EBV-specific T cell immunity in EBV-related cancers

EBV-positive lymphoma in the immune-deficient host
In the context of immunocompromised SOT or HSCT recipients, PTLD predominantly arises, characterized by the presence of six EBNA and two LMP antigens, denoting type III latency. The EBNA3 antigens within PTLD demonstrate notable immunogenicity, forming a foundation for potential adoptive cell therapies targeting these specific antigens.
Front-line therapies for PTLD post-HSCT or -SOT commonly involve reducing immunosuppression, often coupled with rituximab and occasionally augmented by chemotherapy. However, cellular therapy remains the primary option in cases of inadequate response or relapse. The rich diversity of EBV antigens expressed in these tumors facilitates the efficacy of adoptive therapy using virus-specific cytotoxic T cells (CTLs). Clinical trials across global centers have successfully employed CTL preparations, sourced either autologously or from third-party donors, for PTLD treatment or prevention, with a strong record of safety and efficacy. These antigen-specific T cells are primed via in vitro exposure to LCLs. The potent immunogenicity of the EBNA3 family proteins makes them the principal targets of CD8 T-cell immunity (8, 23, 24). CD4+ T cells, though less frequent, also contribute to tumor control (10, 37, 38).
CD4+ T-cell effectors are crucial in limiting early-stage EBV-induced B-cell proliferation, and some directly target EBV-transformed LCLs (37). Notably, EBV-specific T cell products enriched with CD4+ T cells correlate with improved clinical outcomes (38). Furthermore, the expansion of T cells with LCLs generates CD4+ T cells specific to nonviral cellular antigens (39, 40), known as TAAs (10), which are upregulated by LMP1 in EBV-infected cells.
EBV-positive tumors in the immunocompetent host
Unlike PTLD, which expresses the full array of EBV latent antigens (latency III), most EBV-associated cancers exhibit limited expression of EBV latent antigens in relatively immunocompetent hosts (Figure 1). Immunodominant proteins such as EBNA2, -3A, -3B, -3C, and -LP are absent, redirecting immune attention toward the remaining target antigens, such as EBNA1 in BL; EBNA1, LMP1, and LMP2 in HL; and primarily EBNA1 and LMP2 in NPC, GC, ENKTL, and DLBCL. Efficient recognition of these EBV antigens by T cells is crucial for targeting and eliminating infected cells.
Traditionally considered immunologically inert, EBNA1 contains a glycine-alanine repeat (GAr) region that shields it from proteasomal breakdown and MHC class I presentation (41). However, CD8+ T cells targeting specific EBNA1 epitopes have also been reported (42, 43). These T cells can recognize naturally expressed native EBNA1 protein within EBV-transformed LCLs, inhibiting LCL proliferation (44), suggesting that the GAr domain within EBNA1 does not confer complete protection from MHC class I presentation. In vitro models suggest that HL, NPC, and T/NKL cells retain MHC class I antigen processing capabilities and can be recognized by CD8+ T cells specific to LMP2 (45-49).
In contrast, BL is deficient in MHC class I processing (50) but exhibits MHC class II expression (51), allowing recognition by EBNA1-specific CD4+ T cells ex vivo and in murine models (52, 53). Besides MHC molecules, HLA polymorphism, which influences antigen presentation and immune recognition, is strongly associated with disease risk (54-57). For example, the HLA-A01 allele increases the risk of EBV-positive HL, whereas HLA-A02 has a protective effect (58). Although EBV-specific T cells are restricted by various HLA alleles, the emergence of EBV-positive tumors cannot be solely attributed to antigen-specific blindness in the T-cell repertoire. T-cell population deficiencies and attenuated T-cell responses are plausible contributors (59, 60). This is particularly evident in endemic BL, where Plasmodium falciparum and EBV act as co-factors in cancer development (61). Malaria stimulates the proliferation of latently infected B cells through viral reactivation (53). Meanwhile, T-cell control of EBV-infected B cells is lost during P. falciparum malaria (59, 60), possibly contributing to an increased incidence of BL. Furthermore, EBV-positive cancers employ diverse strategies to evade immune surveillance. The tumor microenvironment (TME) within EBV-associated malignancies, including HL, NPC, and the majority of EBV-positive gastric cancers, is characterized by an "immune hot" phenotype (58, 62, 63). These tumors display pronounced infiltration of lymphocytes whose specificities and functions remain incompletely elucidated.
EBV-positive HL exhibits distinct characteristics compared to EBV-negative HL. Notably, the gene expression signature of EBV+ cHL tissues is enriched in genes characteristic of Th1 and antiviral responses. Furthermore, in pediatric cases of EBV+ cHL, a robust T-cell infiltration is evident, exhibiting a cytotoxic/Th1 immune profile (64, 65). However, markers of suppression are also increased, including LAG-3 and IL-10 (66). Regulatory T cells (Tregs), both natural and induced, are present at higher frequencies, contributing to immunosuppression (66, 67). EBNA1 may upregulate CCL20 expression, promoting the migration and recruitment of Tregs (68). Additionally, active signaling by LMP1 and LMP2 can induce high-level expression of galectin-1 and PD-L1 (69-71).
Undifferentiated NPC is invariably EBV-positive and exhibits a suppressive TME characterized by dysfunctional lymphocyte infiltration. Regulatory CD4+ T cells are elevated in the blood and consistently detected in tumors (72). CD8+ FoxP3+ lymphocytes with suppressive functions are also present (73). Immune checkpoint molecules such as PD-L1, LAG3, galectin-9-TIM3, TIGIT, and CTLA4 are overexpressed (74-77). Recently, an epithelial-immune dual feature of NPC cells has been identified, characterized by upregulated MHC II gene expression. This dual feature correlates with CD8+ T-cell exhaustion and a suppressed TME, and is ultimately associated with poor prognosis (78).
Despite the diverse repertoire of immunomodulatory mechanisms employed by EBV-positive cancers, adoptive transfer of EBV-specific T cells has demonstrated clinical efficacy in patients with PTLD, HL, NPC, and T/NKL (85-88). The therapeutic effect of EBV-specific T cells not only destroys tumor cells and reduces tumor burden but may also induce the release of potentially antigenic debris from tumor cells, thereby stimulating an immune response against nonviral cellular antigens. This phenomenon, known as epitope spreading (85), expands the range of targeted antigens for T-cell recognition and response. However, the origin of these cellular antigens, whether from epitope spreading or as a consequence of LMP1 signaling-induced upregulation of TAAs on B cells (10), warrants further investigation.
HLA susceptibility
The human leukocyte antigen (HLA) complex, located within the major histocompatibility complex (MHC) on chromosome 6p21.3, plays a vital role in antigen presentation to the immune system. The MHC region encompasses three subregions: HLA class I, crucial for the induction of CD8+ T-cell cytotoxicity; HLA class II, involved in CD4+ helper T-cell responses; and class III, housing non-HLA genes associated with inflammation, leukocyte maturation, and the complement cascade.
HLA's diversity and polymorphism contribute to its ability to recognize and target various pathogens. Growing evidence suggests that HLA variations can influence genetic susceptibility to EBV-associated cancers. Notably, NPC is strongly associated with HLA genes in the MHC region (54-57). In genomic analyses of NPC patients, a notable frequency of aberrations in MHC class I genes (NLRC5, HLA-A, HLA-B, HLA-C, B2M) has been observed (89). An HLA class I region-specific association suggests the importance of CD8+ T-cell cytotoxicity in NPC etiology (90). HLA associations may vary across racial groups, with specific HLA alleles conferring protective or risk-increasing effects in different populations. In Southern China and Southeast Asia, where NPC is most prevalent, HLA-A11 and B13 are associated with a protective effect against NPC, whereas HLA-A02 (A0207, A0206), A33, B46, and B58 are linked to an increased risk of NPC (91).
HLA also demonstrates significant links with other EBV-associated cancers, including HL, BL (92), and PTLD (93). For example, the HLA-A01 allele increases the risk of EBV-positive HL, whereas HLA-A02 has a protective effect (58). However, the mechanisms underlying the diverse roles of HLA alleles in cancer susceptibility and immune escape remain incompletely understood.
In addition to classic HLA genes, non-classic HLA genes have been implicated in immune escape. HLA-G, known to inhibit T-cell and NK-cell function, is frequently expressed in NPC tumors and is associated with poor survival outcomes (94).
Due to its strong association with cancer etiology, HLA has potential applications in cancer screening, as demonstrated by the improved prediction efficiency for NPC screening achieved when combining HLA class I gene variants with EBV genetic variants and epidemiological risk factors (95).
To advance our understanding of the intricate role of HLA genes and their interplay with T-cell immunity in EBV-associated cancers, larger-scale and more comprehensive studies are needed.

Furthermore, through the ectopic expression of LMP1 on patient-derived tumor B cells to prime T cells, autologous cytotoxic CD4+ T cells can be expanded to target a wide range of endogenous tumor antigens, including TAAs and neoantigens. This innovative approach holds great promise for treating B-cell malignancies and augmenting immune-mediated protection against EBV-unrelated cancers by targeting shared TAAs (96).
EBV-induced T cell responses against TAAs
Several independent studies have also reported a nonviral, cellular antigen-specific component in the human CD4+ T-cell response upon EBV-transformed LCL stimulation in vitro (39, 97). However, these cellular antigens have not been identified, and their classification as TAAs remains to be established. Furthermore, clinical studies have detected T cells specific for nonviral TAAs in the peripheral blood following cytotoxic T lymphocyte (CTL) infusion, which is associated with clinical responses (85). Nevertheless, whether these T cells arise through epitope spreading or are derived from the therapeutic T cells through LCL stimulation is unclear. Therefore, further investigations are needed to identify TAAs expressed by EBV-infected or transformed B cells and to determine their recognition by T cells in individuals with EBV infection (96).
In addition to B cells, whether LMP1 or other EBV antigens can induce the upregulation of TAAs in epithelial cells has yet to be examined. Furthermore, the exact roles of MHC II molecules in cancer remain subject to debate and investigation. Accumulating evidence indicates that tumor-specific MHC II expression is linked to positive outcomes in many cancer types (98) (e.g., breast cancer (99), colon cancer (100), melanoma (101)). However, an opposing functional aspect of MHC II has also emerged. In HLA-DR+ melanoma, MHC II lessens CD8+ T-cell activity by inducing LAG3+ and FCRL6+ TILs (102) or by recruiting CD4+ T cells to the tumor (103). In the TC-1 mouse model of HPV-related carcinoma, the absence of MHC II molecules promotes CD8+ T-cell infiltration and activation, curbing tumor growth (104). Moreover, a recent study examining NPC using single-cell transcriptomics has revealed a dual epithelial-immune feature of tumor cells, characterized by the expression of immune-related genes, including MHC II-coding genes (78), which relates to poor prognosis. This distinct trait is also linked to CD8+ T-cell exhaustion and a suppressed tumor environment (78).
EBV-specific T cells in TME: bystanders or not?
Humans commonly experience viral infections such as CMV, EBV, and influenza. Once an individual has recovered, antiviral memory T cells are retained throughout the body to sense reinfection or recrudescence (105, 106) and are endowed with the capacity for rapid response, sustained vigilance, and cytotoxic prowess (107). Although such virus-specific T cells are abundant within tumors, they may not target tumor cells and are therefore regarded as "bystander T cells" (108). However, emerging evidence suggests that these virus-specific T cells can still be harnessed for cancer immunotherapy (107, 109-111).
One strategy involves antibody-mediated delivery of viral epitopes to tumors (110, 111), achieved by conjugating virus-derived epitopes with tumor-targeting antibodies. These antibodies bind to specific tumor cell antigens and release immunogenic virus epitopes when cleaved by tumor-specific proteases. The released peptide then binds to free HLA class I molecules at the tumor cell surface, marking the cell for destruction by circulating virus-specific CTLs (110, 111).
Another strategy employs viral peptides to mimic a viral reinfection event in memory T cells. Memory T cells can execute a 'sensing and alarm' function upon antigen re-exposure (112), and this form of immunotherapy is termed peptide alarm therapy (PAT) (109). Reactivating virus-specific memory T cells through intratumoral delivery of adjuvant-free virus-derived peptides triggers local immune activation. This translates into antineoplastic effects, leading to a significant reduction of tumor growth in mouse models of melanoma (107) and improved survival in a murine glioblastoma model (109). This approach can reactivate and attract T-cell infiltration into the tumor and transform the immunosuppressive tumor microenvironment into an immune-active site.
EBV-specific T cell-based therapies

EBVSTs
EBV-specific T cells (EBVSTs) derived from allogeneic or autologous donors can recognize and eliminate cancer cells expressing EBV antigens, highlighting their potential in adoptive cell therapy (Table 1).
Early-stage clinical trials demonstrated the effectiveness of adoptive T-cell therapy in treating PTLD, leveraging the restoration of cellular immunity to control EBV-associated PTLD. Initial trials using unmanipulated donor-derived lymphocytes in HSCT patients yielded favorable outcomes, with complete regression observed in all 5 patients (113). However, the alloreactive nature of these T cells also led to the development of graft-versus-host disease (GvHD). Subsequent trials focused on generating allogeneic EBVSTs through in vitro stimulation using EBV-transformed LCLs, recombinant viral vectors, or synthetic peptides (86, 114-118). These trials demonstrated efficacy in preventing and treating PTLD in HSCT recipients, with minimal alloreactivity and a reduced production pipeline. Similar strategies have been employed in the context of SOT to address PTLD (119-121); however, the response rate and persistence of EBVSTs in SOT patients have been limited, likely attributable to high levels of immunosuppression (9). To overcome this challenge, preclinical studies have attempted genetic modifications of EBVSTs to confer resistance against immunosuppressive agents (122-124).
The success of EBVSTs in PTLD has fostered interest in treating other EBV-associated malignancies, such as NPC and HL. EBVSTs targeting type II latency antigens (EBNA1, LMP1, and LMP2) have shown promising results in clinical trials (85, 87, 88, 117, 125), with increased response rates and overall survival observed in patients with NPC and HL compared with those who did not receive adoptive cell transfer. However, it should be noted that the best response rates are still observed in PTLD post-HSCT (Table 2). In addition, emerging evidence indicates that targeting immediate early and other lytic transcripts, including BARF1, could broaden specificity and enhance cytotoxicity against EBV-associated diseases. BARF1-specific T cells have demonstrated the ability to efficiently eliminate NPC cell lines in vitro (127). To improve accessibility and expedite treatment, the establishment of third-party EBVST banks is actively being explored for PTLD (38, 126). The use of banked cells from third-party donors has broadened the availability of EBVSTs, and the observed response rates indicate the potential effectiveness of this approach in a wider range of patients. Alternatively, combination with other immunomodulatory agents, such as checkpoint inhibitors (135) or vaccines (136), may be necessary to ensure clinical impact.
6.2 EBV-specific TCR-engineered T-cell therapy
TCR (T-cell receptor)-engineered T-cell therapy has emerged as a promising strategy for immune-based treatment (Table 1). TCRs specific to EBNA3A, EBNA3B, LMP1, LMP2, BRLF1, and BMLF1 have been generated from CD8+ T-cell clones (129, 137, 138). However, recognition of autologous EBV-transformed LCLs by T-cell lines transduced with these TCRs was weak, partly attributable to the limited expression of latent EBV antigens in LCLs. Nevertheless, the adoptive transfer of TCR-transgenic T cells significantly attenuated tumor growth induced by the CNE NPC line in nude mice, demonstrating their efficacy in vivo (139). The interaction between the transgenic TCR α and β chains and the endogenous TCR is another possible factor contributing to the constrained killing efficiency (140). To overcome this, chimeric TCRs have been devised. These chimeric TCRs entail the fusion of constant regions derived from mouse TCRs with variable domains derived from EBV-specific T-cell clones (141). The stability of these modified receptors was enhanced by introducing an additional disulfide bond between the TCR α and β chain constant domains (128, 142). Transgenic T cells expressing these chimeric TCRs exhibited improved cytotoxicity against co-incubated EBV-positive NPC cells, effectively suppressing tumor growth in immune-compromised mice (128). Similarly, promising outcomes were observed with an LMP1-specific TCR, as T cells transduced with the LMP1-specific TCR showed a twofold increase in the survival of immune-compromised mice challenged with LMP1-expressing tumor cells (129).
Consequently, despite the limited cytotoxicity towards autologous tumor cells, transgenic T-cell therapy remains a promising strategy in combating EBV-associated malignancies.
Beyond αβ: accumulating evidence of a role for γδ T cells
The preceding review primarily focuses on αβ T cells, but it is important to note the unique features of γδ T cells that make them appealing in various cancer settings. These features include tissue tropisms, MHC-independent antigen recognition, antitumor activity regardless of neoantigen burden (143), and a combination of T and natural killer cell properties (144-146). In humans, γδ T cells can be categorized into Vδ1+ and Vδ2+ cells, with distinct distributions in mucosal tissues and blood/lymphoid organs, respectively. They play a crucial role in antiviral immune responses against cytomegalovirus (147-150). Emerging evidence suggests that γδ T cells also play a role in primary EBV infection and EBV-associated cancers.
During primary EBV infection, there is an observed increase in the frequency of γδ T cells in the blood of patients with infectious mononucleosis (IM) (151-153). Pediatric patients have a bimodal innate response to primary EBV infection (154), influenced by a dimorphism in TCRγ-chain repertoires (155). Altered γδ T cells have also been observed in patients with EBV-associated malignancies, such as NPC, where an impaired functional capacity of γδ T cells is observed despite an unchanged frequency (156, 157). In a case involving a cord blood transplant recipient with elevated EBV viremia, the absence of detectable αβ T cells was compensated by expansions of cytotoxic Vδ1+ γδ T cells, resulting in no signs of lymphoproliferative disorder (158). Moreover, early recovery of Vδ2+ T cells has been identified as an independent protective factor against EBV reactivation in recipients of allo-HSCT (159). Interventions that induce early reconstitution of autologous γδ T cells could hold therapeutic benefits. αβ TCR graft depletion (160, 161) has demonstrated efficacy in reducing GvHD by facilitating rapid immune reconstitution of NK cells and γδ T cells (162-164). Additionally, reducing immunosuppressants has led to enhanced recovery of Vδ2+ T cells and a decreased risk of EBV-associated lymphoproliferative disorders in HSCT recipients (159). Notably, long-term persistence of donor-derived Vδ1+ T-cell clones has been detected in recipients' blood even a decade post-HSCT, with these cells exhibiting expandability in vitro and cytotoxicity against autologous EBV-LCLs (165).
While extensive research and clinical trials have explored the therapeutic potential of γδ T cells in managing solid tumors and hematopoietic malignancies (166-168), only a limited number of studies have investigated their efficacy in EBV-associated cancers using murine models (Table 1). Adoptive transfer of anti-γδ TCR antibody-expanded γδ T cells to Daudi lymphoma-bearing nude mice significantly prolonged their survival time (130). In addition, the adoptive transfer of pamidronate-expanded Vγ9Vδ2 T cells prevented and inhibited EBV-LPD in mouse models (131). Moreover, co-administration of Vδ2+ T cells and the EBNA1-targeting peptide L2P4 enhanced γδ T-cell cytotoxicity against NPC in immunodeficient mouse models (132). Additionally, exosomes derived from Vδ2+ T cells exhibited the ability to eliminate EBV-associated tumor cells (133), and when combined with radiotherapy, γδ-T-cell exosomes (γδ-T-Exos) demonstrated efficacy in treating NPC by eradicating radioresistant cells (134).
Thus, γδ T cells represent an essential component of cellular immunity in regulating primary EBV infection and hold promise in combating EBV-associated malignancies.
Conclusions
Cellular immunity is pivotal in maintaining the delicate equilibrium between the host and EBV. Despite EBV's high prevalence, affecting a significant portion of the global population, most individuals remain asymptomatic throughout their lives, highlighting the critical role of effective immune control. However, EBV-associated malignancies primarily occur in individuals with apparently intact immune function. This raises intriguing questions about the mechanisms and stages at which these tumors manage to evade the surveillance of virus-specific T cells. EBV-associated malignancies express distinct EBV latent antigens, triggering diverse T-cell responses while also employing a range of immune evasion mechanisms, resulting in a complex interplay with cellular immunity. Encouragingly, promising clinical responses have been observed from adoptive cell transfer of EBV-specific T cells targeting latent antigens. Recent investigations into early lytic EBV antigens in tumorigenesis provide additional potential targets for therapeutic interventions. Additionally, TCR-transgenic therapy offers the possibility of redirecting T cells to recognize EBV antigens, and the involvement of γδ T cells also merits consideration in EBV-associated diseases.
In cancers not associated with EBV, there usually exists an abundance of EBV-specific memory T cells, which can either be leveraged to convert the immunosuppressive tumor microenvironment into an immune-active one or be redirected to target tumor cells. In addition, EBV can activate TAA-specific T-cell responses. These findings further broaden our understanding of this oncogenic virus and its implications for the fields of cancer biology and therapy. In this regard, a pivotal research goal is to attain a comprehensive grasp of the intricate interplay between cellular immunity and the virus. By harnessing the inherent capabilities of T-cell immunity, we can advance toward more precise and effective interventions in the treatment of EBV-associated and other cancers.
FIGURE 1. EBV latency types. Latency III expresses all EBV-encoded latent proteins and is the most immunogenic. Latency II expresses EBNA1, LMP1, and LMP2, and has intermediate immunogenicity. Latency I expresses only EBNA1 and is poorly immunogenic. Latency 0 abolishes all antigen expression and is seen in memory B cells, which serve as a reservoir of the virus. There can be transitional latency states with upregulation of latency and lytic genes. EBNA, EBV nuclear antigen; LP, leader protein; LMP, latent membrane protein.
Choi et al. (10) demonstrated in a mouse model that the expression of the EBV signaling protein LMP1 in B cells induces T-cell responses against multiple TAAs. LMP1 signaling enhances the presentation of TAAs on B cells and upregulates the expression of the costimulatory ligands CD70 and OX40L, leading to the activation of potent cytotoxic CD4+ and CD8+ T-cell responses against LMP1 (EBV)-transformed B cells (Figure 2).
FIGURE 2. LMP1 signaling in B cells triggers cytotoxic T-cell responses against TAAs (Choi et al., 2021). LMP1 signaling induces substantial cellular gene expression, leading to (i) upregulation of antigen processing and presentation machinery, (ii) enhanced expression of co-stimulatory ligands (CD70, OX40L, etc.), and (iii) overexpression of cellular antigens known to function as TAAs. Collectively, these mechanisms contribute to the effective eradication of LMP1 (EBV)-transformed B cells. TAA, tumor-associated antigen.
TABLE 1. EBV-associated malignancies and their forms of viral latency.
TABLE 2. Summary of EBV-specific T cell-based therapies. | 2023-10-01T15:04:56.018Z | 2023-09-29T00:00:00.000 | {
"year": 2023,
"sha1": "ae2c3df2061f084940264cecd4ce503b99a30ff7",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fimmu.2023.1250946/pdf?isPublishedV2=False",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3e7bfa3f8fcc32fff0250466e64e94f235948e78",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
56440507 | pes2o/s2orc | v3-fos-license | Emission of Pollutants from Glycine–Nitrate Combustion Synthesis Processes
Four ceramic powders were produced using the glycine–nitrate process: lanthanum-doped barium cobaltite, ceria, magnesia, and strontium-doped lanthanum chromite (LSC). Glycine-to-nitrate ratios from 0.25 to 1 were investigated. During the combustion synthesis process, careful collection of process off-gas was followed by detailed gas analyses to determine product gas composition. All of the synthesis processes produced pure-phase ceramic powders, but also produced criteria pollutant emissions at levels significant enough (up to 4500 ppm of NOx and 9000 ppm of carbon monoxide) to warrant consideration. Equilibrium and chemical kinetic computations are used to determine the implications of the current findings.
I. Introduction
The glycine-nitrate process (GNP) of combustion synthesis was detailed by Chick et al. 1 in 1990; it has since been highly valued and heavily used for synthesizing mixed rare-earth oxides. The process itself is fairly simple: a precursor solution containing the desired cation stoichiometry of the target mixed-cation oxide is prepared by (1) dissolving metal nitrates in water, (2) adding glycine as a complexing agent and combustion fuel, and (3) heating the solution to remove the water until the precursor solution spontaneously ignites. 1 The ash resulting from the combustion reaction is the desired mixed-metal oxide, usually with ultra-fine particles and high purity. 2-7 The GNP synthesis technique has been used for the synthesis of oxides for many applications, notably exotic mixed-metal oxides for use in solid oxide fuel cells and catalysts. 2,4,8 The GNP technique has been billed as "environmentally compatible" by previous researchers. 1,6,9 This assertion stems from the reaction equations that have been proposed for the process, which have cations, nitrate, and glycine as reagents/reactants, and metal oxides, nitrogen, water, and carbon dioxide (CO2) as products. These equations assume that the lowest-energy products possible are favored and achieved in the reaction process. However, combustion reactions often take place under conditions that are highly unfavorable to the formation of equilibrium products. The duration of exposure to high-temperature environments may be short, leading to kinetic limitations of the combustion reactions. Additionally, depending upon the temperatures at which the overall combustion reaction proceeds, and the rates of temperature increase and quench, products of complete combustion (PCC) may not even be thermodynamically preferred. For example, the thermodynamic equilibrium composition of typical GNP chemistry, at the elevated temperatures at which reactions are expected to proceed, always includes carbon monoxide (CO). 10 The present work was undertaken after GNP reactions carried out in house were noted to form and emit a visible brown haze suspected to be NO2. This has also been observed by other researchers, although the species speculated to cause the brown haze was not discussed in their publications. 6
II. Background
The reaction equation that is generally written 5 to describe glycine-nitrate combustion synthesis is, for a divalent metal M:

9 M(NO3)2 + 10 NH2CH2COOH -> 9 MO + 20 CO2 + 25 H2O + 14 N2    (1)

This equation assumes that the only products of the reaction are the desired oxide and PCC. The equation can also be written with oxygen as a reagent or product in order to balance the equation.
The reaction equation may also include products of incomplete combustion (PIC). This equation is somewhat more complicated, with additional products (again for a divalent metal M, with coefficients left general):

M(NO3)2 + n NH2CH2COOH -> MO + a CO2 + b H2O + c N2 + d NO + e NO2 + f CO (+ g O2)    (2)

The additional products NO, NO2, and CO are of particular interest as they are known to be environmentally harmful PIC. They are toxic and smog-forming chemicals, and their emissions are strictly regulated. Because of their noxious nature, they are also potentially a hazard for laboratory personnel.
If the nitrogen is bound in the fuel, it is possible that nitrogen oxides will form readily and cannot be easily avoided. Because nitrogen is present both as an oxidizer and in the glycine complexing agent and carbon is present in the glycine, it is expected that nitrogen oxides will be formed from both sources as intermediate species, as will other nonequilibrium intermediate nitrogen and carbon compounds. What fraction of these intermediates remains at the completion of the reaction and after quenching of the products is the primary subject of this work.
It has been hypothesized that the glycine-nitrate combustion synthesis process proceeds in three major steps. The first step is evaporation of water, dehydrating the precursors. The second step is the decomposition of precursors to form flammable gases such as NO2 and CO. The third step is a self-sustaining rapid reaction resulting in the production of ceramic powder and gaseous combustion products. 1,11 The formation of the gaseous intermediates NO and NO2 is potentially problematic. It is well known that the reaction of NO to N2 and O2 is typically kinetically limited in combustion reactions, where quenching of the combustion reaction may not allow sufficient time at temperature for these reactions to proceed. 10
III. Experimental Procedure
To determine the product gas composition from GNP procedures, a reaction apparatus was set up to continuously sample the gas composition in each reaction studied. Product gases from five glycine-to-nitrate group ratios, for four compositions of metal-oxide precursors, were measured using GNP combustion synthesis reactions under air. The standard method of pre-drying mixtures and hot-plate heating until spontaneous combustion occurs, as generally reported in the literature, 1,9 was also used in this work.
The gaseous products from each GNP reaction were analyzed using a Horiba PG-250 exhaust gas analyzer (Irvine, CA) that had been calibrated in the highest ranges of CO and NOx (5000 and 2500 ppm, respectively). The Horiba PG-250 uses an infrared absorption technique to measure CO and CO2, and a chemiluminescence detector coupled to an ozone generator to measure NO or NOx separately. Simultaneous measurement of both NO and NOx is not possible, and because total NOx includes all species of interest, the NOx mode was used in this work. Sampling of the gaseous products of each GNP reaction was accomplished using a non-cooled stainless steel sampling probe and a Teflon sampling line through which the gas sample was drawn by a positive displacement pump at a flow rate of 0.6 L/min under ambient pressure. Measured concentrations of CO, CO2, and NOx from the exhaust gas analyzer were recorded in real time using a PC with a LabVIEW virtual instrument written for this purpose. A diagram of the setup is shown in Fig. 1.
The metal nitrate precursors chosen were magnesium nitrate for MgO; cerium ammonium nitrate for CeO2; barium, lanthanum, and cobalt nitrates for lanthanum-doped barium cobaltite (BLC); and, finally, strontium, lanthanum, and chromium nitrates for strontium-doped lanthanum chromite (LSC). The wide variety of metal nitrates and glycine-to-nitrate ratios was chosen to confirm that the characteristic emissions of the GNP reactions are shared across many different precursor formulations, and are not peculiar to any specific formulation.
A wide range of glycine-to-nitrate group ratios, from 0.3 to 1.0, was tested to ensure that the results from this work are comparable to other GNP-related work, 1,9 and to investigate the influence of the ratio on PIC formation and emission. The "stoichiometric" glycine-to-nitrate ratios for each of the compositions, calculated assuming only PCC in the product gas, are 0.37 for ceria, 0.53 for BLC, 0.55 for MgO, and 0.55 for LSC, respectively.
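These "stoichiometric" ratios can be derived with the oxidizer/fuel valence-balance bookkeeping commonly used in combustion-synthesis work. The sketch below is a minimal illustration of that arithmetic for the magnesium nitrate case, which reproduces the quoted 0.55; the element valences used (C +4, H +1, O -2, N 0, metal = cation charge) are standard in the propellant-chemistry literature, but the exact bookkeeping the authors applied to the multi-cation and ammonium-containing precursors is not given in the text, so only the MgO value is reproduced here.

```python
# Illustrative sketch: "stoichiometric" glycine-to-nitrate-group ratio via the
# oxidizer/fuel valence-balance method (assumed, not taken from the paper).
# Valences: C +4, H +1, O -2, N 0; metal cations count with their charge.

VALENCE = {"C": 4, "H": 1, "O": -2, "N": 0}

def net_valence(formula, metal_valence=0):
    """Sum of element valences for a composition dict, plus the metal term."""
    return metal_valence + sum(VALENCE[el] * n for el, n in formula.items())

glycine = {"C": 2, "H": 5, "N": 1, "O": 2}   # NH2CH2COOH, net valence +9
mg_nitrate = {"N": 2, "O": 6}                # nitrate part of Mg(NO3)2

fuel_valence = net_valence(glycine)                          # +9 (reducing)
oxidizer_valence = net_valence(mg_nitrate, metal_valence=2)  # -10 (oxidizing)

glycine_per_mole_salt = -oxidizer_valence / fuel_valence     # 10/9
nitrate_groups_per_salt = 2
ratio = glycine_per_mole_salt / nitrate_groups_per_salt
print(f"glycine : nitrate-group ratio for MgO = {ratio:.2f}")  # 0.56 (~0.55)
```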
The ceramic powders synthesized were also verified to be consistent with anticipated phases using X-ray diffraction (XRD, Siemens/Bruker D5000, Madison, WI) analysis of calcined product powders.
IV. Results
Contrary to the assumptions of Eq. (1), the levels of PICs measured for all cases were very high. As the gas sample was extracted from a loosely sealed glass beaker, the total mass of the PICs in the product gas cannot be accurately quantified, but there is no question that they are present at a significant level.
In many of the cases, levels of NOx in excess of 2500 ppm were observed (Fig. 2). In these cases, the actual concentration was even higher than the instrument's upper detection limit, leading to measurement saturation for a period of time. In all cases, high concentrations were observed around the peak for a period of about 100 s, followed by a gradual decline back to zero as the process air inside the beaker was exchanged with ambient air (Fig. 3). In cases when the peak concentration exceeded the maximum instrument capability, an estimate of the peak level of NOx emission was developed by linear extrapolation of the linear portions of the rising and falling concentration trends; a period of 10 s before saturation and a window of 20 s following saturation were utilized, with the point of intersection designated the peak concentration. Figure 4 shows that peak NOx production and emission is minimized for ceria, magnesia, and LSC production at glycine-to-nitrate ratios in the range of 0.5-0.8. On the other hand, peak NOx production and emission decreases monotonically with increasing glycine-to-nitrate ratio for BLC production. Figure 5 shows that CO production and emission is typically higher at higher glycine-to-nitrate ratios for all of the GNP reactions tested, with the exception of BLC. Note that CO emissions are significantly higher for ceria and LSC production than for magnesia and BLC.
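The extrapolation step described above is a small piece of arithmetic that can be written out explicitly. The sketch below is a hypothetical reconstruction of it (the function and variable names are ours, not from the paper), using the stated 10 s pre-saturation and 20 s post-saturation fitting windows:

```python
import numpy as np

def extrapolated_peak(t, c, sat_level, pre_window=10.0, post_window=20.0):
    """
    Estimate the true peak of a saturated concentration trace by intersecting
    straight-line fits to the rising edge (pre_window seconds before
    saturation) and the falling edge (post_window seconds after saturation).
    t, c: time (s) and concentration (ppm) arrays; sat_level: ppm at which
    the analyzer saturates. Assumes the trace actually reaches saturation.
    """
    saturated = c >= sat_level
    t_on, t_off = t[saturated][0], t[saturated][-1]

    rise = (t >= t_on - pre_window) & (t < t_on)    # 10 s before saturation
    fall = (t > t_off) & (t <= t_off + post_window)  # 20 s after saturation

    m1, b1 = np.polyfit(t[rise], c[rise], 1)  # rising-edge line
    m2, b2 = np.polyfit(t[fall], c[fall], 1)  # falling-edge line

    t_peak = (b2 - b1) / (m1 - m2)            # intersection of the two lines
    return t_peak, m1 * t_peak + b1           # (time, extrapolated ppm)
```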
As detailed in Table I, the reactions that tended to proceed more slowly (i.e., those with a higher glycine-to-nitrate ratio) tended to produce and emit lower peak NOx levels and higher peak CO levels. The exceptions are that the BLC synthesis tended to produce lower CO at high glycine-to-nitrate ratios and magnesia synthesis tended to produce higher NOx levels at higher glycine-to-nitrate ratios. Typical combustion reactions exhibit similar behavior, with increased CO production and decreased NOx production as the combustible mixture becomes more "fuel rich." The reactions that proceeded the fastest took place when the glycine-to-nitrate ratio was near 0.4, except for ceria, which only showed a decrease in reaction rate with increased glycine. The fastest reaction rate and lowest emissions were not apparently dependent on the "stoichiometric" composition, especially in the case of ceria. Most of the synthesis processes investigated exhibited relatively low peak NOx production under these conditions, although the lowest NOx values tended to appear at glycine-to-nitrate ratios slightly above 0.4 (around 0.6, corresponding to a slightly fuel-rich condition). This is counter-intuitive compared with typical combustion processes, in which equilibrium conditions tend to produce more thermal NOx near the stoichiometric condition due to higher flame temperatures. This trend implies that NOx production in the combustion reaction does not follow a thermal NOx mechanism but rather a mechanism that produces NOx that is not fully reacted in the combustion reaction. (Notes to Table I: all values are in ppm (parts per million). Reaction rates were designated "very rapid" if the reaction took less than 1 s, "rapid" if the reaction took 1-2 s, "less rapid" if the reaction took 2-4 s, "slow" if the reaction took 4-10 s, and "very slow" if it took longer than 10 s to complete. †NOx profile measured did not allow for reasonable extrapolation of true peak NOx values. ‡CO profile measured did not allow for reasonable extrapolation of true peak CO values.)
Typical XRD patterns of the ceramic powders after calcining showed that the desired pure phase compounds were achieved in this work. Figure 6 presents a representative XRD pattern for the ceria powder that was prepared by GNP synthesis experiments in this work. Note that the expected phase (JCPDS file # 34-394) was achieved with no evidence of significant impurity or intermediate phases.
While peak production and emission values for NOx and CO indicate the propensity to form and emit these PICs, they do not provide the total mass of PICs produced. If the total mass of PIC emissions is significant in comparison with the mass of PCC (i.e., metal oxides, N2, CO2), then the relative amounts of precursors required to achieve the desired oxide phase may be affected by the current PIC production discovery. To assess this, an estimate of the mass fraction of NOx and CO emissions has been made for each of the GNP reactions tested by conversion of measured volume fractions to mass fractions using formula weights (Table II). Note that even though the measured concentrations of NOx and CO are significant from an emissions perspective, they do not significantly alter the overall chemistry of the GNP reaction.
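The volume-to-mass-fraction conversion itself is one line of arithmetic: w_i = x_i·M_i / Σ x_j·M_j, where x is the mole (volume) fraction and M the molar mass. A minimal sketch follows; the gas composition below is invented for illustration (not the measured values in Table II), and NOx is weighted as NO2 by assumption:

```python
# Rough ppm(v) -> mass-fraction conversion with formula weights.
M = {"NOx": 46.0, "CO": 28.0, "CO2": 44.0, "H2O": 18.0, "N2": 28.0}  # g/mol
# Hypothetical product-gas mole fractions (illustrative only, sum to 1):
x = {"NOx": 0.003, "CO": 0.009, "CO2": 0.35, "H2O": 0.40, "N2": 0.238}

mean_molar_mass = sum(x[s] * M[s] for s in x)          # g/mol of mixture
w = {s: x[s] * M[s] / mean_molar_mass for s in x}      # mass fractions
print({s: round(w[s], 4) for s in ("NOx", "CO")})      # PIC mass fractions
```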
V. Discussion
From the results obtained in this work, it may be concluded that the GNP reactions fail to produce only PCC (N2 and CO2) as commonly reported. On the contrary, high levels of PICs (NOx and CO) are produced and remain as products of typical GNP reactions.
The formation of NO2 as a product of the decomposition of metal nitrates has been well characterized in both oxidizing and inert environments. 12,13 It was observed that the decomposition of the nitrate groups usually resulted in the formation of NO2, a process step that was anticipated to occur in the GNP process as well. However, it is apparent that large quantities of NO2, as well as CO, remain after the combustion event is complete. To investigate the reasons for the observed significant amounts of unreacted CO and NO2 in the combustion products, modeling of the chemical equilibrium and chemical kinetics related to GNP reactions was conducted in this work. Chemical equilibrium modeling was conducted using NASA's equilibrium code, 14 taking the PCC of Eq. (1) as the starting gas mixture. Based on these product gases, at the temperatures of reaction reported by Chick et al. 1 of between 1000 and 1700 K, equilibrium calculations predict that these gases would never spontaneously produce appreciable concentrations of NO or NO2. Note that the CO concentration expected at equilibrium is fairly significant at these temperatures. This implies that the NOx measured in the exhaust must be a retained intermediate species.
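This kind of equilibrium check can be reproduced approximately with an open-source package such as Cantera; the snippet below is a stand-in for the NASA code the authors used (GRI-Mech 3.0 supplies the C/H/O/N species data), and the mixture is the Eq. (1) PCC for the divalent-metal case, not a composition reported in the paper:

```python
import cantera as ct

# Equilibrate a products-of-complete-combustion mixture (gas phase only,
# mole ratios from Eq. (1)) at a representative GNP reaction temperature.
gas = ct.Solution("gri30.yaml")          # assumed stand-in mechanism
gas.TPX = 1700.0, ct.one_atm, "CO2:20, H2O:25, N2:14"
gas.equilibrate("TP")                    # constant-T, constant-P equilibrium
print(gas["CO", "NO", "NO2"].X)          # NO and NO2 stay at trace levels
```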
Alternately, if the reaction is assumed to produce mostly NO2 from the nitrate group and CO from the glycine, another reaction equation could be written (again for a divalent metal M):

9 M(NO3)2 + 10 NH2CH2COOH -> 9 MO + 10 NO2 + 9 N2 + 20 CO + 25 H2O    (4)

Using these product gases and the temperatures reported by Chick et al., 1 it was found that the equilibrium code predicts that the mixture will equilibrate to a composition identical to the equilibrium concentrations presented above, as expected.

Fig. 6. Typical X-ray diffraction pattern and the library fit for ceria powder synthesized in this work by GNP and then calcined for 2 h at 700°C.

While these levels of PICs are significant from the standpoint of emissions, equilibrium calculations alone cannot explain the observed high levels of NOx and CO measured in this work (Fig. 7).
The high levels of PICs observed may be due to the kinetics of the reactions during the entire reaction time, and to the temperature history associated with the GNP reaction. The reaction time and the temperature history result from a complex set of physical and chemical features of the process, including the effects of mixing, heat supply from the hot plate and the reaction heat, heat transfer, and quenching of the reaction.
To model the effects of chemical kinetics during the time-temperature history of a GNP reaction, a representative history that includes thermal quenching was developed and simulated in Chemkin. NO2 and CO were assumed to be the intermediate species formed by the GNP reactions in Eq. (4). These intermediate species were considered to be the initial reactants in a homogeneous set of chemical reactions subjected to constant pressure and constant quench rate conditions in Chemkin. Various quench rates were investigated. Figure 8 presents the results from chemical kinetic calculations that used a quench rate that cooled the reacting mixture from a starting temperature of 1700 K to 500 K in 0.25 s. At temperatures near the initial temperature of 1700 K, NO2 is quickly converted to NO. For the profiles and formulas modeled, all NO2 present in the initial gas mixtures decomposed to NO within the first time step, and so it is omitted from the profile in Fig. 8. The NO and CO, however, end up being unable to react further to produce the equilibrium products favored under these conditions. The NO and CO are said to be "frozen" into the gas mixture at these quench rates. For the temperatures reported for most GNP reactions, it is expected that thermal NOx formation rates are very small (insignificant). The thermodynamic driving forces suggest that only very small amounts of NOx can be produced from heating of air at 1700 K. The only NOx production mechanism that can explain the levels that we measured is the fuel-bound nitrogen oxide production mechanism. The formation of NO from NO2 can occur very quickly at the temperatures we expect in GNP reactions, and reasonable quenching rates for the process can lead to significant quantities of NO remaining in the gaseous GNP products.
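The frozen-NO behaviour can be sketched with a kinetics package by integrating the intermediate mixture of Eq. (4) under an imposed linear quench. The snippet below uses Cantera with GRI-Mech 3.0 as an assumed stand-in for the authors' Chemkin setup, and the Eq. (4) gas-phase products for the divalent-metal case as the initial state, so the numbers it produces are illustrative only:

```python
import cantera as ct

gas = ct.Solution("gri30.yaml")  # assumed stand-in for the Chemkin mechanism
# Gas-phase intermediates per Eq. (4) (the solid oxide MO is excluded):
gas.TPX = 1700.0, ct.one_atm, "NO2:10, CO:20, H2O:25, N2:9"

# energy='off' holds T fixed between steps so the quench can be imposed.
reactor = ct.IdealGasConstPressureReactor(gas, energy="off")
sim = ct.ReactorNet([reactor])

t_quench, n_steps = 0.25, 250    # 1700 K -> 500 K in 0.25 s
for i in range(1, n_steps + 1):
    t = i * t_quench / n_steps
    sim.advance(t)               # integrate chemistry at the current T
    gas.TP = 1700.0 + (500.0 - 1700.0) * t / t_quench, ct.one_atm
    reactor.syncState()          # impose the linear quench profile
    sim.reinitialize()

print(gas["NO", "NO2", "CO"].X)  # NO and CO remain "frozen" in the mixture
```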
The results of the chemical kinetic calculations presented in Fig. 8 suggest that significant NO and CO concentrations can be achieved under these conditions. These levels of NO and CO are higher than those observed during the GNP experiments, but because no measurement of temperature transients was made, it was not possible to simulate the exact temperature profile of the reaction. Note finally that the results of Fig. 8 also show that the reduction of nitric oxide to nitrogen and oxygen and the oxidation of CO to CO2, while favored by equilibrium at low temperatures, have kinetic rates that are relatively small, leading to the "frozen" concentrations as shown. This is consistent with observations made in numerous combustion studies. 10 The observations in this work are also consistent with the need for the additional calcining that is typically required to produce pure-phase compounds using the GNP. The need for additional calcining indicates that the GNP reactions do not bring the as-synthesized metal oxides to the equilibrium states typically reported in the literature. If the GNP reactions had fully proceeded to equilibrium conditions throughout the final products, no subsequent calcining treatment would be needed. It has been widely reported that the ash from the GNP combustion reaction contains unreacted carbon species. Therefore, it is not surprising that the gaseous products of combustion are also not those predicted by equilibrium models.
VI. Conclusions
If the GNP technique is to be used on an industrial scale, the potential for producing and emitting nitrogen oxides and CO demonstrated in this study must be addressed. Additionally, laboratory bench-top use of GNP processes should be carried out with care to avoid emission of and exposure to any noxious NO, NO2, and CO products.
The high levels of PICs observed during the GNP could affect the methodology used to calculate the "stoichiometric" ratio for these reactions. If PCC are not produced, the desired metal oxide composition may be significantly altered. In the current experiments, the total mass of PIC emissions was estimated to comprise between 1% and 2% of the products.
Although some trends have been observed for the amount of PICs measured in the combustion process exhaust, there do not appear to be general trends that one can use to predict the levels of PICs quantitatively. The trends appear to be highly dependent on the individual precursor sets, which may merit additional investigation.
In industrial processes that produce large amounts of NOx, the effluent stream can be reduced to N2 and O2 by use of selective catalytic reduction treatments or other established methods for the treatment of exhaust gases. It must be recognized that the reactions proposed for the glycine-nitrate combustion technique for synthesis of oxide powders have the potential to produce significantly high, even hazardous, levels of PIC that should be controlled by some means. Despite these concerns, GNP approaches remain an attractive means of producing highly uniform, complex oxide ceramic powders with precisely controlled stoichiometry. | 2018-12-18T20:30:42.991Z | 2007-12-01T00:00:00.000 | {
"year": 2007,
"sha1": "710ce825823bd21c172ed198f9cdc5cb2fdf795c",
"oa_license": "CCBY",
"oa_url": "https://escholarship.org/content/qt9t9220vq/qt9t9220vq.pdf?t=odi3p3",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "f0cf75705f3750b1e5e1fa2bf4fe05f628c882b7",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
10818898 | pes2o/s2orc | v3-fos-license | Role of the Trypanosoma brucei natural cysteine peptidase inhibitor ICP in differentiation and virulence
ICP is a chagasin-family natural tight binding inhibitor of Clan CA, family C1 cysteine peptidases (CPs). We investigated the role of ICP in Trypanosoma brucei by generating bloodstream form ICP-deficient mutants (Δicp). A threefold increase in CP activity was detected in lysates of Δicp, which was restored to the levels in wild type parasites by re-expression of the gene in the null mutant. Δicp displayed slower growth in culture and increased resistance to a trypanocidal synthetic CP inhibitor. More efficient exchange of the variant surface glycoprotein (VSG) to procyclin during differentiation from bloodstream to procyclic form was observed in Δicp, a phenotype that was reversed in the presence of synthetic CP inhibitors. Furthermore, we showed that degradation of anti-VSG IgG is abolished when parasites are pretreated with synthetic CP inhibitors, and that parasites lacking ICP degrade IgG more efficiently than wild type. In addition, Δicp reached higher parasitemia than wild type parasites in infected mice, suggesting that ICP modulates parasite infectivity. Taken together, these data suggest that CPs of T. brucei bloodstream form play a role in surface coat exchange during differentiation, in the degradation of internalized IgG and in parasite infectivity, and that their function is regulated by ICP.
Introduction
Clan CA, family C1 cysteine peptidases (CPs) are considered crucial for the growth, differentiation and survival of several pathogenic protozoa (for a review, see Sajid and McKerrow, 2002). In Trypanosoma brucei species, the pathogenic kinetoplastid protozoa responsible for human and veterinary trypanosomiasis in sub-Saharan Africa, the major CP has primary sequence and biochemical characteristics that are broadly similar to those of mammalian cathepsin L (Lonsdale-Eccles and Grab, 1987; Troeberg et al., 1999; Caffrey et al., 2001), and is encoded by a tandem array of 11 nearly identical gene copies. The enzymes in T. b. rhodesiense and T. b. brucei are termed rhodesain and brucipain (or trypanopain), respectively (Lonsdale-Eccles and Grab, 1987; Caffrey et al., 2001). It has been demonstrated that small-molecule inhibitors of CPs kill T. b. brucei in culture as well as in experimentally infected animals (Scory et al., 1999; Troeberg et al., 1999). Importantly, killing of the parasites was correlated with inhibition of brucipain, suggesting that this peptidase plays a crucial role in the biology of the parasite (Troeberg et al., 1999).
Peptidase activity can be regulated at several levels, extending from gene expression to the synthesis of inhibitory proteins. In mammals and plants, CPs are regulated by members of the cystatin family (Abrahamson et al., 2003), which are absent from kinetoplastid protozoa (Ivens et al., 2005). A search for endogenous inhibitors of the parasites' CPs resulted in the discovery of a family of inhibitors distinct from cystatins and other groups of peptidase inhibitors, which was named the chagasin family (or Inhibitors of Cysteine Peptidases, ICP) (Monteiro et al., 2001; Sanderson et al., 2003). Chagasin was initially isolated from T. cruzi and is a potent tight-binding inhibitor of Clan CA, family C1 CPs (Monteiro et al., 2001). Chagasin homologues were subsequently identified in other protozoa and in bacteria, and these genes were proven to encode functional CP inhibitors (Rigden et al., 2002; Sanderson et al., 2003; Riekenberg et al., 2005; Pandey et al., 2006). Structure determination of Leishmania mexicana ICP and chagasin revealed that they adopt a type of immunoglobulin (Ig)-like fold not previously reported in lower eukaryotes (Salmon et al., 2006; Smith et al., 2006; Figueiredo da Silva et al., 2007). It was demonstrated that, in T. cruzi, chagasin forms tight-binding complexes with the major CP of the parasite, cruzipain (Santos et al., 2005). A fourfold increase in inhibitor expression in transgenic parasites led to a marked reduction in CP activity, resulting in reduced differentiation from non-infective epimastigotes to the infective trypomastigote form, increased resistance to the deleterious effect of a synthetic CP inhibitor, and diminished infectivity of tissue culture trypomastigotes in vitro; these data suggested that chagasin controls endogenous CP activity (Santos et al., 2005). In contrast, deletion of the L. mexicana ICP resulted in reduced infectivity to mice, although the infection of macrophages in vitro was unchanged, suggesting that Leishmania ICP might target the CPs of the host (Besteiro et al., 2004). Recently, it was shown that Entamoeba histolytica expresses two ICP isotypes, which display different inhibitory properties against endogenous CPs and are localized in distinct compartments (Saric et al., 2006; Sato et al., 2006). Similar to the phenotype observed with T. cruzi, overexpression of E. histolytica ICPs in trophozoites led to a marked reduction of CP activity and of enzyme secretion, suggesting that ICPs also regulate endogenous CPs in this parasite (Sato et al., 2006).
In this study, we investigated the role of ICP in T. brucei by analysing parasites genetically manipulated to lack ICP. Our results suggest that T. brucei ICP acts as a regulator of endogenous CP activity, and thus plays a part in modulation of surface coat exchange during differentiation, intracellular proteolysis and parasite infectivity to mice.
Targeted deletion of ICP in bloodstream form (BSF) T. brucei
Targeted deletion of the diploid T. brucei ICP (TbICP) locus was achieved by homologous recombination. The two alleles were sequentially replaced after BSF transfection with linearized targeting constructs pGL1149 and pGL1151, containing selectable markers between ICP 5′ and 3′ flanking regions (FRs) (Fig. 1A). For the first allele, transfection with the pGL1151 construct yielded a population of parasites resistant to hygromycin. This population was used for the second round of transfections with the pGL1149 (blasticidin) construct, and three clones were obtained. The clones were analysed by Southern blot, one of which is presented (Fig. 1B and C). The TbICP gene was targeted into the tubulin locus of the Δicp mutants to generate lines re-expressing ICP (designated Δicp:ICP) (Fig. 1A, lower panel). A probe to the 5′ FR of TbICP hybridized with a 3.1 kb SphI/StuI DNA fragment containing the ICP gene in wild type (WT) parasites and in hygromycin-resistant and blasticidin-resistant heterozygotes, but not in Δicp (Fig. 1B). Probe hybridization to DNA fragments of 3.5 and 4.2 kb, corresponding to the replacement of ICP with the blasticidin- or the hygromycin-resistance genes, was observed in the respective heterozygotes, Δicp and Δicp:ICP (Fig. 1B). Hybridization with a probe to the coding region of TbICP revealed the presence of the gene in WT and in heterozygotes, but not in Δicp (Fig. 1C). As expected, the TbICP probe hybridized to a 0.8 kb DNA fragment in the re-expressing cell line, indicating that the ICP gene was re-integrated into the tubulin locus (Fig. 1C).
We were unable to detect ICP expression by Western blot analysis in WT parasites using a variety of different antisera raised against recombinant ICP. Thus, the presence of functional ICP in parasite lysates was assessed by measuring inhibition of CP activity. Taking advantage of the fact that ICP is thermostable (Monteiro et al., 2001), lysates were boiled in order to inactivate endogenous peptidases prior to incubation with papain. We observed that boiled lysates of T. cruzi inhibited papain activity more efficiently than those of WT BSF T. brucei (Fig. 2A). Considering that recombinant chagasin and ICP inhibit papain with similar potency (Monteiro et al., 2001; Sanderson et al., 2003), these results suggest that the levels of ICP in T. brucei BSF are lower than those of chagasin in T. cruzi. Low expression levels of ICP in T. brucei BSF could account for the lack of detection by Western blotting. As expected, lysates of WT and Δicp:ICP inhibited about 60% of papain activity, while no inhibitory activity was detected in Δicp lysates (Fig. 2B), even when tested at 10-fold higher concentrations (not shown), indicating that functional ICP is absent from Δicp. Lysates from Δicp:ICP inhibited papain slightly less efficiently than lysates of WT parasites (Fig. 2B), suggesting that the levels of ICP expression in the complemented line are not identical to those in WT parasites. Titration of boiled parasite lysates against papain revealed that ICP levels in Δicp:ICP are approximately half of those in WT. We next assessed the amounts of functional CPs in lysates of BSF by enzymatic assays using fluorogenic substrates. The CP activity present in Δicp lysates was threefold higher than in WT or Δicp:ICP (Fig. 3A). Titration of the CPs in WT lysates revealed that the CP : ICP ratio in T. brucei is approximately 7:1. No alteration in brucipain or cathepsin B-like CP protein expression could be detected by Western blot (Fig. 3B and C), indicating that the lack of ICP did not induce changes in the expression and/or turnover of these enzymes and that the increase in CP activity was due to the absence of ICP.
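The titration readout lends itself to a simple back-calculation: for a tight-binding inhibitor, residual enzyme activity falls roughly linearly with the amount of lysate added until the enzyme is fully titrated, so the x-intercept of a linear fit estimates the lysate amount delivering inhibitor equimolar to the enzyme. The sketch below is a hypothetical illustration of that arithmetic, not the authors' analysis; all data values and names are invented:

```python
import numpy as np

# Hypothetical titration: residual papain activity (fraction of uninhibited
# control) versus microlitres of boiled lysate added to 2 nM papain.
lysate_ul = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
activity = np.array([1.00, 0.78, 0.55, 0.34, 0.12])

slope, intercept = np.polyfit(lysate_ul, activity, 1)
equivalence_ul = -intercept / slope   # lysate volume that titrates the enzyme

papain_nM = 2.0
icp_per_ul = papain_nM / equivalence_ul  # assay-nM of ICP delivered per uL
print(f"equivalence point ~ {equivalence_ul:.1f} uL of lysate")
print(f"ICP delivered ~ {icp_per_ul:.2f} nM (in-assay) per uL of lysate")
```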
Deletion of TbICP induces alterations in parasite growth
Growth rate analysis of Δicp in culture indicated that it grew more slowly than WT or Δicp:ICP over a 5 day period (Fig. 4A), and this phenotype was reproducible in three independent Δicp clones. Δicp did not have apparent alterations in morphology or cell cycle progression as compared with WT, suggesting that the reduced growth rate might be due to changes in the parasite's metabolism.

Fig. 2 legend. A. T. cruzi epimastigote lysates and T. brucei WT BSF lysates were boiled and tested (8.5 mg protein ml−1) for inhibitory CP activity by pre-incubation with 3 nM papain for 20 min. The residual activity of the enzyme was measured using 15 μM Z-Phe-Arg-MCA. B. BSF lysates (50 μg protein) were boiled and tested for the inhibition of papain by pre-incubating with 2 nM papain for 20 min, followed by determination of residual peptidase activity using 5 μM Z-Phe-Arg-MCA. Asterisk shows scores statistically significant from buffer at P < 0.05.

Fig. 3 legend. A. Five micrograms of BSF lysate was tested for peptidase activity using 5 μM Z-Phe-Arg-MCA as a substrate. The activity sensitive to inhibition by 10 μM E-64, which corresponds to CP activity, is shown. The experiments were performed in quadruplicate and are represented as mean values with standard deviations (SD). The analysis of significance was performed using ANOVA, and the asterisk indicates scores that are statistically significant at P < 0.05. B and C. Western blot analysis of BSF lysates (equivalent to 5 × 10^5 parasites per lane) using antiserum to (B) brucipain or to (C) T. brucei cathepsin B. Antibodies to EF1α were used to visualize loading controls (bottom panels).
Wild type and ICP mutant cell lines were inoculated into BALB/c mice, and parasite density in the blood was examined from day 3 to day 6 (Fig. 4B). Surprisingly, Δicp parasites grew better than WT parasites in vivo, reaching a significantly higher parasitemia than the WT or Δicp:ICP lines. In addition, approximately 50% of the mice infected with Δicp died at day 7, while the mice infected with WT survived until day 10 post infection (not shown), indicating that deficiency in ICP increased the parasite's virulence in the mammalian host.
Deletion of TbICP increases the resistance to a synthetic CP inhibitor
Synthetic CP inhibitors have been shown to kill T. brucei BSF in culture (Troeberg et al., 1999), and their trypanocidal effect was associated with the inactivation of the cathepsin L-like CP of the parasite. Considering that Δicp parasites have higher CP activity, we tested whether this could have an impact on their sensitivity to synthetic CP inhibitors. We monitored the densities of parasite cultures in the presence or absence of the inhibitor at 12, 24 and 30 h. Because Δicp grows more slowly in vitro, at the end of this period the culture density of the mutant line was about half that of WT or Δicp:ICP in the absence of the drug. However, in the presence of 0.25 μM of N-Pip-F-hF-VSPh (K11777), the culture densities of the three lines were nearly identical (~2 × 10^5 ml−1), suggesting that the growth of WT and of Δicp:ICP, but not that of Δicp, was significantly affected by K11777. In order to verify whether Δicp is refractory to the toxic effects of the drug, we calculated the number of divisions that each line had undergone in 30 h (Fig. 5). We observed that the growth of WT parasites was inhibited by 50% in the presence of 0.25 μM of K11777, while it was necessary to increase the drug concentration fourfold (to 1 μM) to observe a similar effect in Δicp parasites. Δicp:ICP had drug sensitivity similar to that of WT parasites, confirming that the increased resistance displayed by Δicp was due to lack of ICP. These results show that ICP levels affect BSF sensitivity to the trypanocidal effect of synthetic CP inhibitors, and suggest that ICP modulates the availability of active CPs in the parasite.
Fig. 5. Δicp has increased resistance to a CP inhibitor. BSF parasites were inoculated at 5 × 10^4 ml−1 in culture medium in the presence of varying concentrations of K11777, and cultivated for 2 days at 37°C. The controls were cultivated in the presence of 0.5% DMSO. The culture densities were monitored at 12, 24 and 30 h, and the numbers of cell divisions by 30 h are given. The experiments were performed in triplicate, two independent times, and are reported as means and standard deviations of the six replicates. The analysis of significance was performed using two-way ANOVA and the Bonferroni post-test at a significance level of 5%. Single asterisks represent scores that are statistically significant at P < 0.05, and triple asterisks show scores statistically significant at P < 0.01. White bars, WT parasites; black bars, Δicp; grey bars, Δicp:ICP.
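The number of divisions in a fixed window follows directly from the fold-change in culture density; a minimal sketch of that arithmetic, with invented example densities rather than the measured values, is:

```python
import math

def divisions(n_start, n_end):
    """Number of population doublings between two culture densities."""
    return math.log2(n_end / n_start)

# Hypothetical example: inoculum of 5e4 cells/ml grows to 8e5 cells/ml in 30 h
print(f"{divisions(5e4, 8e5):.1f} divisions in 30 h")  # -> 4.0
```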
Deletion of TbICP leads to increased degradation of anti-VSG IgG
The surface of BSF T. brucei is covered by a dense coat of its main surface antigen, the variant surface glycoprotein (VSG), which is attached to the membrane via a glycosylphosphatidylinositol anchor (Ferguson, 1999). VSGs are encoded by a large family of genes/pseudogenes that are sequentially and uniquely expressed at a given time point, which enables trypanosomes to evade the host's immune response. VSG is constitutively removed from the parasite's surface by rapid internalization and recycling, a process that mediates the clearance of anti-VSG antibodies from the surface and might contribute to the parasite's persistence in the immune-competent host (Seyfang et al., 1990; O'Beirne et al., 1998; Gruszynski et al., 2003; Engstler et al., 2004). After internalization, VSG is recycled back to the surface, while the antibodies are degraded by intracellular peptidases (O'Beirne et al., 1998; Pal et al., 2003). To address whether ICP could regulate the peptidases responsible for IgG processing in BSF, we assessed the degradation of anti-VSG antibodies following internalization (Fig. 6). In WT parasites, an approximately 50% reduction in the amount of intact IgG was detected after 20 min of chase, and the protein was significantly degraded within 30 min (Fig. 6A, upper panel, left). In contrast, the amount of elongation factor 1 (EF1), used as an endogenous control, was unchanged during the chase, showing that the reduction in IgG was not due to non-specific protein degradation during the preparation of the lysates (Fig. 6, bottom panels). The antibodies were much more rapidly degraded by Δicp parasites, being reduced by 80% within the first 15 min of chase (Fig. 6B, upper panel, left), a phenotype that was partially rescued in Δicp:ICP parasites (Fig. 6C). No difference in the internalization of transferrin-FITC was observed between the three lines (data not shown), ruling out that the differences observed in the amount of IgG detected were due to alterations in the endocytic activity of the transgenic parasites. Furthermore, pretreatment of the cells with two membrane-permeable CP inhibitors, K11777 and E64d, prevented IgG degradation (Fig. 6A-C, right panels), demonstrating that CPs are the main peptidases contributing to IgG degradation in BSF. These results indicate that the increased IgG degradation by the Δicp parasites was due to higher CP activity in these parasites, suggesting that ICP modulates endogenous CP function in BSF parasites.
Cell surface coat exchange during differentiation from BSF to procyclic forms (PCF)
One important step in the progression of infection by T. brucei is the transformation of short stumpy BSF to PCF after uptake by tsetse flies. The differentiation is characterized by several metabolic and morphological changes, including the expression of stage-regulated cell surface proteins such as procyclin and the removal of the old VSG coat. BSF to PCF differentiation can be induced in vitro by cis-aconitate and low temperature (Ziegelbauer et al., 1990). Because previous studies have correlated CP activity with the differentiation of T. cruzi (Tomás and Kelly, 1996; Santos et al., 2005), we tested whether deletion of ICP would have an impact on T. brucei differentiation in vitro. During the differentiation of synchronous populations of T. brucei enriched in short stumpy BSF, the exchange of the VSG coat for procyclin occurs rapidly and synchronously within 4-24 h (Ziegelbauer et al., 1993; Van Deursen et al., 2001; Gruszynski et al., 2003). However, coat exchange during in vitro differentiation of exponentially growing cultures of the 427 strain was shown to occur much more slowly (24-48 h) and asynchronously (Roditi et al., 1989; Mutomba and Wang, 1998). We monitored the appearance of cell surface procyclin and VSG release by flow cytometry in the three lines during 12-48 h after the induction of differentiation by cis-aconitate and a temperature drop (Fig. 7). We observed that a higher proportion of the Δicp cell line had cell surface procyclin by 15 h of differentiation (Fig. 7A and B) than of WT parasites. The appearance of procyclin-positive cells correlated with a decrease in the number of VSG-positive cells (Fig. 7A), and loss of surface VSG was more rapid in Δicp. After 18 h, about half of the Δicp population had replaced VSG with procyclin, while only 20% of WT parasites had exchanged their surface coat. This phenotype was more evident at 24 h, when Δicp had nearly completed surface coat exchange (Fig. 7B), while only half of WT cells were positive for procyclin in the same time period. The coat exchange of parasites re-expressing ICP was similar, but not identical, to that of WT cells, indicating that re-introduction of ICP partially complemented the faster coat exchange of Δicp. Importantly, the PCF Δicp mutants also grew more slowly than WT parasites in vitro, ruling out the possibility that the higher number of procyclin-positive cells might have resulted from accelerated growth of the differentiated Δicp.
It was recently reported that the tyrosine phosphatase TbPTP1 plays a pivotal role in controlling BSF to PCF differentiation (Szoor et al., 2006). A cell-permeable inhibitor of this enzyme, BZ3, induced differentiation of a small subset of BSF parasites in populations grown in asynchronous cultures. It was suggested that the cells sensitive to BZ3-induced differentiation were committed to early events in stumpy formation before morphological differentiation occurred, being defined as stumpy* (Tasker et al., 2000). We treated BSF with BZ3 and assessed procyclin appearance at early time points (3-12 h) as a way to determine whether Δicp populations were enriched in stumpy* forms, which could account for the more efficient coat exchange observed in cis-aconitate-induced differentiation. The percentage of procyclin-positive cells was low (< 15%) in the three parasite lines at 12 h (data not shown). Furthermore, although there was a small increase in the proportion of procyclin-positive cells within the Δicp population at 3 h (4% in Δicp versus 1.5% in WT parasites), all three parasite lines had equivalent proportions of procyclin-positive cells by 6 h, indicating that the faster coat exchange displayed by Δicp parasites could not be attributed to a higher proportion of stumpy* forms prior to the triggering of differentiation with cis-aconitate. Importantly, treatment of parasites with K11777 during differentiation significantly delayed coat exchange of WT parasites (Fig. 8A and B) and of Δicp (Fig. 8C and D), confirming that CP activity contributes to the efficiency of coat exchange during T. brucei differentiation. Notably, the effect of the synthetic CP inhibitor in delaying coat exchange was detected with WT parasites only by 24 h (Fig. 8A and B), while this effect was observed with Δicp parasites within the first 12 h (Fig. 8C and D). This further suggests that the accelerated coat exchange in the mutants was associated with increased CP activity. The parasites remained intact and motile in the presence of the inhibitor during the assay, arguing against non-specific effects due to toxicity of the drug. Taken together, these results suggest that ICP plays a role in controlling the differentiation process through the modulation of endogenous CP activity.

Fig. 7 legend. The experiments were performed in triplicate on three separate occasions. The graph shows the means plus standard deviations of the nine replicates. The analysis of significance was performed using two-way ANOVA and the Bonferroni post-test at a significance level of 5%. Single asterisks represent scores that are statistically significant at P < 0.05, and double asterisks represent scores statistically significant at P < 0.01. White bars, WT parasites; black bars, Δicp; grey bars, Δicp:ICP.
Discussion
We have used targeted gene deletion as a strategy to investigate the function of the chagasin-like CP inhibitor, ICP, in T. brucei. Deletion of ICP led to increased CP activity in lysates of BSF parasites, while the expression levels of brucipain and of the cathepsin B-like peptidase were unchanged. In WT T. brucei, the CP : ICP ratio calculated by titration experiments revealed that CPs are in sevenfold excess, confirming that ICP is expressed at much lower levels than the CPs. Comparative analyses of papain inhibition by boiled lysates from different parasite species indicated that lysates of T. brucei have lower papain-inhibitory activity than those of T. cruzi. As recombinant TbICP is thermo-resistant and displays high affinity for papain (Sanderson et al., 2003), these observations suggest that ICP is expressed at low levels in T. brucei compared with chagasin expression in T. cruzi (Monteiro et al., 2001; Besteiro et al., 2004; Santos et al., 2005). Low expression levels might explain why we were unable to detect ICP in parasite lysates by Western blot. Nonetheless, lack of ICP led to a 3- to 4-fold increase in total CP activity, suggesting that, as observed in T. cruzi and in Entamoeba, changes in inhibitor expression have a significant impact on the overall CP content in parasites despite the unfavourable inhibitor-enzyme ratio (Santos et al., 2005; Sato et al., 2006). Because conversion of zymogens of the CPs to active forms is thought to occur by auto-catalysis, the rate of CP zymogen processing and/or sorting could be altered upon ICP deletion, which could explain these observations. Studies with synthetic CP inhibitors have previously suggested that cathepsin L-like peptidases are the main targets of these compounds in trypanosomes and in Leishmania. Our analyses of the effect of K11777 on the growth of BSF showed that Δicp displays fourfold greater resistance to this drug than WT parasites, a phenotype that could be explained by increased availability of free CPs. The fourfold increase in resistance correlates well with the increase in peptidase activity encountered in Δicp (threefold), arguing in favour of the hypothesis that a higher amount of the synthetic drug is required to inactivate free CPs in Δicp parasites. Furthermore, it suggests that the availability of active CPs is subject to endogenous control by ICP. Although the mechanism by which synthetic CP inhibitors cause T. brucei death is unknown, in T. cruzi they promote accumulation of CP zymogens in the Golgi, causing disruption of the intracellular traffic and abnormalities in the secretory pathway (Engel et al., 1998). The isolation of resistant T. cruzi epimastigote populations under selective pressure revealed that upregulation of exocytosis promoted the secretion of unprocessed CP precursors, sparing the cells from the deleterious effect of the inhibitor (Engel et al., 2000). Intriguingly, chagasin overexpression in T. cruzi also increased parasite resistance to the same drug (Santos et al., 2005). Although the mechanisms underlying the increase in resistance are unclear, parasites overexpressing chagasin display an increase in the secretion of CP precursors to the flagellar pocket (C.C. Santos and A.P.C.A. Lima, unpubl. data), which might play some part in the resistance.

Fig. 8 legend. Involvement of CPs in the differentiation from BSF to PCF. Differentiation of Δicp BSF to PCF in the presence of the irreversible CP inhibitor K11777 was analysed as described in Fig. 7.
These observations suggest that the balance of CPs and endogenous inhibitors can affect sensitivity to CP inhibitors in multiple ways.
In mammals, T. brucei BSF transform from slender proliferating forms into stumpy non-proliferating forms, which are competent to differentiate to PCF once ingested by the tsetse fly (Turner et al., 1995). Our results showed accelerated coat exchange under conditions of cis-aconitate-induced differentiation in Δicp, and this phenotype was reverted in the presence of synthetic CP inhibitors, suggesting that the more efficient coat exchange is mediated by CP activity. Importantly, the use of the tyrosine phosphatase inhibitor BZ3 enabled us to verify that the proportion of cells committed to differentiate prior to induction was similar in the three lines, suggesting that CPs fulfil their function at steps subsequent to those that trigger differentiation. After triggering, BSF-specific proteins must be degraded (or released) and significant changes in cell shape occur. It is plausible that lysosomal CPs are required for the massive protein degradation and remodelling that occurs during differentiation, their activity being subject to regulation by ICP.
The inhibition of VSG release during differentiation by treatment with synthetic CP inhibitors suggests that CPs play a role in surface coat remodelling, either directly or indirectly, and provides evidence that ICP negatively modulates endogenous CP activity in BSF. Two main mechanisms are thought to govern VSG release: (i) GPI hydrolysis mediated by an endogenous GPI-phospholipase C (GPI-PLC), responsible for constitutive shedding in exponentially growing BSF; and (ii) endoproteolysis mediated by a zinc metallopeptidase (MSP-B) that is upregulated during differentiation (Bangs et al., 1997; Gruszynski et al., 2003). Proteolytic release of VSG occurs via truncations upstream of the C-terminal anchor, but the cleavage sites differ between VSG variants (Bangs et al., 1997). Endoproteolysis is the main pathway responsible for VSG release during differentiation, and the concerted action of both GPI-PLC and MSP-B is thought to mediate complete shedding of the old VSG coat (Gruszynski et al., 2006). Despite the fact that inhibitors of metallopeptidases significantly block the proteolytic release of VSG during in vitro differentiation of stumpy BSF, it was observed that incubation of parasites with the membrane-permeable synthetic CP inhibitor Mu-F-hF-BzPr resulted in a small but noticeable reduction of VSG release (Gruszynski et al., 2003). In addition, a previous independent study using the same CP inhibitor during differentiation of the 427 strain showed that VSG was retained at the surface of PCF parasites, suggesting that CPs might play a role in the release of the VSG coat (Mutomba and Wang, 1998). It is possible that the relative contribution of CPs to VSG release varies among different parasite strains.
During T. brucei infections, a robust immune response is raised against VSG, which is mainly evaded by the parasite through VSG antigenic variation. In addition, in vitro studies have shown that anti-VSG antibodies bound to the parasite surface are rapidly internalized and degraded, while VSG remains intact (O'Beirne et al., 1998). Although it has not yet been demonstrated that IgG degradation plays a role in immune evasion in vivo, it was proposed that it could contribute to the prevention of antibody-dependent destruction of the parasite. We observed that treatment of BSF with membrane-permeable CP inhibitors significantly blocked anti-VSG IgG degradation, indicating that these peptidases play a central role in IgG degradation. In agreement with this, Δicp parasites were capable of degrading anti-VSG IgG with much higher efficiency than WT parasites, providing additional evidence that ICP negatively controls CP-mediated intracellular proteolysis in T. brucei. Brucipain is upregulated in stumpy forms (Pamer et al., 1989; Caffrey et al., 2001), and these forms are more resistant to antibody-mediated lysis and differentiate more efficiently than monomorphic forms. This is consistent with our findings that increased CP activity in BSF promotes enhanced IgG degradation and cell differentiation.
In BSF, it was shown that degradation of fluorescein-coupled IgG, measured by changes in fluorescence after internalization, is abolished when the internal pH of endosomal compartments is raised (Pal et al., 2003), supporting the notion that IgG degradation requires the action of peptidases found in an acidic compartment, such as the lysosome. The acidic cathepsin L-like CP of T. brucei (brucipain) is located in the parasite's lysosome (Caffrey et al., 2001), and we have previously shown that this peptidase is the main target of the synthetic inhibitor K11777 in T. gambiense (Nikolskaia et al., 2006). Considering that K11777 blocks IgG degradation in T. brucei, it is possible that brucipain, and not the cathepsin B-like peptidase, is the main CP mediating IgG degradation in BSF. Of note, it was reported that the potent brucipain inhibitors Z-Phe-Tyr(OtBu)-CHN2 and Z-Phe-Tyr-CHO inhibit lysosomal proteolysis of transferrin, further suggesting that brucipain is largely responsible for lysosomal proteolysis in BSF (Nkemgu et al., 2003). On the other hand, recent studies employing RNAi to address the roles of brucipain and of the cathepsin B-like enzyme in T. brucei suggested that the latter is responsible for transferrin degradation. In view of these findings, we postulate that ICP modulates intracellular proteolysis by inactivating brucipain and the cathepsin B-like CP in lysosomes. The relative contribution of ICP to the regulation of each individual CP remains to be investigated. Although the inhibition of T. brucei CPs by ICP has not been studied at the biochemical level, it is known that the affinity of recombinant ICP for human cathepsin B is approximately 5-fold higher than that for human cathepsin L (Sanderson et al., 2003). If a similar inhibition pattern occurs with regard to the parasite CPs, it is possible that ICP could interact with both CPs of T. brucei in vivo.
Finally, we observed that lack of ICP enhanced parasite virulence in vivo, as shown by higher parasitemia in the blood of infected mice and a more rapid onset of death in the animals. Although not directly demonstrated, it is very likely that increased levels of endogenous CPs in the null mutants are responsible for the increased virulence. This hypothesis is in agreement with several studies showing that T. brucei CP activity is required for optimal parasite survival in vitro and in vivo (Troeberg et al., 1999; Greenbaum et al., 2004; Fujii et al., 2005; Vicik et al., 2006). The precise biological processes requiring the action of CPs for parasite survival are not fully understood. Even though Δicp is potentially capable of clearing anti-parasite IgG more efficiently than WT parasites, it is unlikely that antibodies play a major role in eliminating parasites during the early stages (3-6 days) of infection. Rather, macrophages are thought to play a protective role during this phase of infection, clearing trypanosomes by phagocytosis and/or by secreting TNF-α and nitric oxide (NO), which are trypanolytic and trypanostatic (Magez et al., 1997; Gobert et al., 1998; Tabel et al., 1999). It has been reported that cruzipain modulates the activation of murine macrophages, downregulating the induction of NO synthase and promoting increased survival of T. cruzi (Stempin et al., 2002). Thus, it is tempting to speculate that T. brucei CPs also play a role in the interaction of BSF with macrophages (and/or other cells of innate immunity), ultimately contributing to increased parasite numbers during early infection. In addition, we have recently demonstrated that CPs are directly involved in the traversal of the blood-brain barrier by T. b. gambiense, revealing an unexpected role of these enzymes in brain pathology (Nikolskaia et al., 2006). By attenuating parasite virulence, ICP expression might be beneficial for the long-term survival of the parasites in their natural hosts.
Constructs for the deletion of ICP
The 5′ and 3′ FRs of the T. brucei ICP gene (Tb927.8.6450) were obtained by polymerase chain reaction (PCR) using the primers OL1609 (CGGCGGCCGCGGTGGAGATTAAAAAAAGAAAAAAGTG)/OL1610 (CGTCTAGAGCAACAAAAATCAATGACATG) and OL1611 (CGGGGCCCGGTATGTGGAAGTGGAGAAG)/OL1612 (CGGGGCCCGATATCGGCGGGATGGAGTAAACATA), respectively, with genomic DNA of T. brucei EATRO795 as the template. The PCR products were cloned in the TOPO vector for sequencing. The 5′ and 3′ FRs were cloned into the NotI/XbaI and ApaI sites, respectively, flanking the blasticidin-resistance gene, generating vector pGL1149, or into the same sites flanking the hygromycin-resistance gene, generating vector pGL1151. For re-expression of the T. brucei ICP gene, the open reading frame (ORF) was obtained by PCR using the primers NT90/NT91 (Sanderson et al., 2003) and cloned into a vector containing an αβ-tubulin intergenic region and a phleomycin-resistance gene (Helms et al., 2006).
Analysis of the transfectants
The genomic DNA from the transfectants was isolated using the DNeasy kit (Qiagen) to check for the correct integration of the constructs. A Southern blot was performed using 3 μg of gDNA digested with SphI/StuI overnight at 37°C, electrophoresed in a 0.8% agarose gel and blotted onto Hybond N+ membrane (Amersham Pharmacia). The membrane was blocked with 1 M NaCl/1% SDS/100 μg ml⁻¹ salmon sperm DNA at 65°C for 1 h, and subsequently hybridized overnight at the same temperature with the 779 bp 5′ FR of TbICP or the 348 bp TbICP as probes, labelled with the random primer kit (Amersham Pharmacia). The membrane was washed three times with 0.2× SSC/0.1% SDS for 15 min and exposed overnight.
Enzymatic assays
Parasites were washed and resuspended in 50 mM sodium acetate, 200 mM NaCl, 5 mM EDTA (pH 5.5), 1% NP-40, incubated on ice for 10 min, and centrifuged at 10 000 g for 5 min. The protein concentration of the soluble fraction was determined using the DC-Protein kit (Bio-Rad). Samples of lysate at 5 μg protein ml⁻¹ were tested for peptidase activity in 50 mM sodium acetate (pH 5.5), 200 mM NaCl, 5 mM EDTA, and 5 mM DTT using 5 μM Z-Phe-Arg-MCA as substrate. The initial rates were calculated by linear regression of the substrate hydrolysis curves. The activities sensitive to inhibition by 10 μM E64 are shown in Fig. 6. Because ICP is highly thermo-stable, the detection of TbICP inhibitory activity was performed after boiling the lysates (2 mg ml⁻¹) for 20 min in order to inactivate endogenous CPs, followed by recovery of the soluble fraction by centrifugation at 10 000 g for 10 min. The presence of inhibitory activity was checked by incubation with papain at the concentrations indicated in the legends to Fig. 2A and B, in 50 mM sodium phosphate, 100 mM NaCl, 5 mM EDTA (pH 6.5), and 2.5 mM DTT, for 20 min at room temperature. The remaining activity was measured by addition of Z-Phe-Arg-MCA at a final concentration of 5 or 15 μM. For the titration of ICP, parasite lysates were normalized for protein concentration and boiled for 20 min. The soluble fraction was recovered, and different amounts (five independent points) were incubated with 1 nM papain as described previously (Monteiro et al., 2001). The residual enzyme activity was measured by addition of Z-Phe-Arg-MCA, and the initial velocities were calculated from linear regression of the substrate hydrolysis plot. The equation of the linear regression of the V0 (Y) versus lysate concentration (X) plot was used to calculate the X-value at which Y = 0 (Monteiro et al., 2001).
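As a concrete illustration of the titration arithmetic described above, the short sketch below fits V0 (Y) against the amount of boiled lysate (X) and solves the regression line for the X-intercept (Y = 0). The data values are hypothetical and this is not the analysis script used in the study; it only reproduces the x-intercept logic of Monteiro et al. (2001).

```python
import numpy as np

# Hypothetical titration series: residual papain activity (V0, arbitrary
# fluorescence units/min) against five amounts of boiled lysate (ul).
lysate_ul = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
v0 = np.array([100.0, 82.0, 63.0, 46.0, 27.0])

# Linear regression of V0 (Y) versus lysate amount (X).
slope, intercept = np.polyfit(lysate_ul, v0, 1)

# The equivalence point is the X-value at which Y = 0, i.e. the amount of
# lysate whose ICP content fully titrates the fixed 1 nM papain input.
x_at_zero = -intercept / slope
print(f"Lysate amount titrating 1 nM papain: {x_at_zero:.2f} ul")
```

Comparing such equivalence points against the measured CP activity of the same lysates is what yields a CP:inhibitor ratio of the kind quoted in the Discussion.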
Analysis of parasite growth in vitro
Bloodstream form WT, Δicp and Δicp:ICP parasites were inoculated into HMI-9 medium supplemented with 10% (v/v) FCS and 10% (v/v) Serum Plus at a concentration of 5 × 10³ cells ml⁻¹. The parasites were cultivated for 5 days, with dilution on day 3 to 5 × 10³ ml⁻¹, and growth was estimated by daily counts of the culture using a haemocytometer chamber. The culture densities at days 4 and 5 were multiplied by the day 3 dilution factor before being plotted on the growth curve. The experiments were performed in triplicate. The analysis of significance was performed by ANOVA in GraphPad Prism 4.0, using the Bonferroni post-test comparing all pairs of columns at a significance level of 5%.
Sensitivity to N-Pip-F-hF-VSPh
Bloodstream forms were inoculated at 5 × 10⁴ ml⁻¹ in HMI-9 containing 10% (v/v) FCS and 10% (v/v) Serum Plus, supplemented with 0.5% DMSO or 0.5% DMSO plus variable concentrations of the synthetic irreversible cysteine peptidase inhibitor N-methylpiperazine-urea-Phe-homoPhe-vinylsulphone-benzene (K11777). Growth was determined by counting the cell density at 12, 24 and 30 h using a Beckman Coulter counter. The experiments were performed in triplicate on three separate occasions. The density of each culture at 30 h was used to calculate the number of divisions that had occurred. The IC50 was determined for the three parasite lines. The analysis of significance was performed by two-way ANOVA in GraphPad Prism 4.0, using the Bonferroni post-test comparing all pairs of columns (all groups to each other) at a significance level of 5%.
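For readers wishing to reproduce the growth read-out numerically, the sketch below computes the number of divisions from the 30 h density and estimates an IC50 by simple interpolation. All densities and concentrations are invented, and the interpolation is a crude stand-in for whatever dose-response fitting was actually applied in the study.

```python
import numpy as np

def divisions(n_inoculum, n_30h):
    """Number of population doublings between inoculation and 30 h."""
    return np.log2(n_30h / n_inoculum)

# Hypothetical 30 h culture densities (cells/ml) across a K11777 series.
conc_um = np.array([0.0, 0.25, 0.5, 1.0, 2.0, 4.0])  # inhibitor, uM
density = np.array([8e5, 7.5e5, 6e5, 4e5, 1.6e5, 6e4])
divs = divisions(5e4, density)

# Crude IC50: the concentration at which divisions fall to half the
# untreated control, by linear interpolation (divs decreases with conc,
# so both arrays are reversed to give np.interp an increasing x-axis).
half_max = divs[0] / 2.0
ic50 = np.interp(half_max, divs[::-1], conc_um[::-1])
print(f"Estimated IC50 ~ {ic50:.2f} uM")
```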
Anti-VSG 221 IgG degradation
Bloodstream form parasites were harvested at mid-log phase of growth and labelled with anti-VSG 221 antibodies on ice for 30 min in HMI-9 at a concentration of 1 × 10⁷ ml⁻¹. Parasites were then washed three times in ice-cold serum-free HMI-9 and incubated at 37°C for 5, 10, 15 or 30 min. Following the incubation period, samples were prepared for Western blot analysis. Rabbit anti-VSG IgG was detected directly using an anti-rabbit IgG-HRP conjugate (Promega) and visualized by addition of SuperSignal West Pico chemiluminescence substrate (Pierce). For the densitometry, the bands were selected using the Scion Image program. The intensity of EF1-α was considered 100% for each lane, and the ratio of the intensities of IgG and EF1-α was calculated; the densitometry values are indicated at the bottom of each lane in Fig. 6.
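The densitometry normalization amounts to a per-lane ratio, sketched below with invented band intensities; Scion Image itself only supplies the raw integrated densities.

```python
# Hypothetical integrated band densities (arbitrary units) per lane.
time_min = [5, 10, 15, 30]
igg_band = [1200, 950, 610, 180]   # surface-bound anti-VSG IgG
ef1a_band = [800, 790, 810, 795]   # EF1-alpha loading control

# EF1-alpha is taken as 100% for each lane; IgG is reported relative to it.
for t, igg, ef1a in zip(time_min, igg_band, ef1a_band):
    print(f"{t:>2} min: IgG/EF1-alpha = {100 * igg / ef1a:.0f}%")
```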
Differentiation to the PCF
Late-log bloodstream forms (8-10 × 10⁵ ml⁻¹) were harvested and suspended in warm SDM-79 medium containing 10% (v/v) FCS at a final density of 2 × 10⁶ ml⁻¹. Cis-aconitate (Sigma) was added to a final concentration of 6 mM, and the cultures were incubated at 27°C to allow differentiation. Aliquots were taken at different time points (12, 15, 18, 24, 36 and 48 h), fixed in 2% paraformaldehyde, and analysed for the presence of VSG or procyclin at the surface by FACS using anti-VSG 221 and anti-procyclin mAb antibodies (Cedarlane Laboratories, Ontario, Canada), both diluted 1:1000 in PBS containing 1 mg ml⁻¹ BSA, followed by incubation with anti-rabbit IgG or anti-mouse IgG Alexa 488 secondary antibodies, respectively. The experiments were performed in triplicate on three separate occasions. For BZ3 inhibition assays, bloodstream form parasites were exposed to 150 μM of the PTP1B inhibitor BZ3 (Calbiochem) in SDM-79 medium containing 10% (v/v) FCS for 3, 6, 9 or 12 h. The expression of procyclin was assayed by flow cytometry using the anti-procyclin mAb antibody (1:1000). K11777 was used at the IC50 concentration for each line.
Mice infections
Cultured bloodstream form parasites (1 × 10⁵) were inoculated intraperitoneally into Balb/c mice, harvested from blood after 5 days of infection, and used to inoculate intraperitoneally five mice per group at 1 × 10³ parasites per animal. The subsequent parasitemia was determined by counting the number of parasites in 5 μl blood samples taken on days 3-6 of infection.
"year": 2007,
"sha1": "1352bfd1766a69abfd5973129645a9be55983b43",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/j.1365-2958.2007.05970.x",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "1352bfd1766a69abfd5973129645a9be55983b43",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Inductive learning for product assortment graph completion
Global retailers have assortments that contain hundreds of thousands of products that can be linked by several types of relationships, like style compatibility, "bought together", "watched together", etc. Graphs are a natural representation for assortments, where products are nodes and relations are edges. Relations like style compatibility are often produced by a manual process and therefore do not uniformly cover the whole graph. We propose to use inductive learning to enhance a graph encoding the style compatibility of a fashion assortment, leveraging rich node information comprising textual descriptions and visual data. Then, we show how the proposed graph enhancement substantially improves the performance on transductive tasks, with a minor impact on graph sparsity.
Introduction
Fashion data are interesting for research because of their polymorphism and the complexity of the relations that can be defined among them, i.e. compatibility, transactional, similarity, substitution, etc. Fashion items are considered compatible if they can be worn simultaneously, meaning that the clothing items are part of an outfit. Our work builds on fashion data assembled by H&M. In this context, the compatibility of fashion items is manually determined by experts on item pairs. The assortment, however, is composed of tens of thousands of articles, and the number of pairs grows quadratically with the number of articles, making exhaustive manual labelling highly impractical. Furthermore, when new products enter the assortment, they stay disconnected for a rather long period. The lack of exhaustive indications of item compatibility can considerably impact the performance of recommendation systems that leverage such information to provide personalized and style-coherent advice to customers. Motivated by this, we tackle the problem of augmenting such sparse item compatibility information with newly discovered compatibility relationships. Existing works have addressed the problem with recurrent models [1,2,3] or with contrastive learning [4,5]. Our approach, instead, leverages inductive learning on graphs [6]. Inductive link prediction, as opposed to transductive link prediction, which assumes all nodes to be present at training time, aims at predicting links for new, unobserved nodes. However, inductive link prediction usually obtains lower performance on existing nodes. The method we propose uses inductive link prediction to enrich the graph with new links and then trains a transductive model on the new graph to maximize link prediction performance, getting in this way the best of both worlds. Following [7], we represent items as nodes and compatibility as edges of the graph, together with their associated information (node and edge labels). In particular, our items are bound to rich textual and visual information, for which we define an appropriate encoding as node features. We then put forward an inductive learning approach based on the DEAL model [8], which has been extended to exploit the richness of the multimodal node features available in our industrial case study. As a first result, we show how these features positively contribute to relationship inference. The trained inductive model is then applied to produce an enriched graph for a second transductive task, modelling clothing pairing suggestions as a link prediction problem. The empirical analysis shows that the enriched graph yields substantially improved link prediction performance over the original graph, at the cost of a minor decrease in graph sparsity. This second result is particularly interesting as it shows the effectiveness and efficiency of a pipeline of inductive-transductive methods when dealing with predictive tasks over large-scale sparse graphs.
Inductive-Transductive Graph Processing Pipeline
For inductive learning, we consider DEAL [8], an architecture leveraging two encoders, an attribute-oriented encoder H_a and a structure-oriented encoder H_s, as well as an alignment mechanism. The aim of the attribute-oriented encoder is to project a node's feature vector from the high-dimensional feature space into a low-dimensional embedding space, while the structure-oriented encoder generates an embedding vector of the node by considering only the structural information of the graph (no node features). If two graph nodes are connected (positive samples), then their H_a and H_s embedding vectors will have high similarity. To this end, we measure the similarity of the embedding vectors by cosine similarity. A Tight Alignment mechanism [8] is used to maximize the similarity between the embedding vectors produced by both H_a and H_s for each node. Both encoders are updated during the training process and the embeddings are kept aligned. The attribute-oriented encoder can be realized by MLP or GCN-like modules [8]. In our implementation we use the personalized ranking loss from [8]; a minimal sketch of the two-encoder design follows this paragraph. Transductive learning is implemented with consolidated deep graph networks. In particular, in our empirical analysis, we compare the performance of three popular methods that represent three different families of neural approaches for graphs: GraphSAGE [9], GCN [10] and GAT [11]. As anticipated, the focus of this work is to propose and assess inductive learning as a preliminary step to improve transductive task performance on sparsely connected, large-scale graphs. The process comprises a first step where the DEAL [8] inductive model is trained and the best performing model (in validation) is selected. As a second step, we run the selected inductive model on the original graph to enrich it with new edges. For this second step, we define two thresholds: a maximum node degree and a minimum probability of link existence between nodes. As a final step, we train the transductive models on the new structure. In addition to the pipeline above, we extend DEAL [8] to work on textual, visual, or concatenated textual and visual features, instead of the tabular features used in [8]. The embedding of the textual and visual information attached to each product in our case study has been obtained by a BERT model [12] pre-trained on English Wikipedia and by a ResNet512 [13] pre-trained on ImageNet [14], respectively.
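To make the two-encoder idea concrete, here is a minimal PyTorch sketch of a DEAL-style model. It is not the authors' implementation: the layer sizes are arbitrary, the structure encoder is shown as a plain trainable embedding table, and a simple cosine-alignment loss stands in for the personalized ranking loss actually used in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualEncoder(nn.Module):
    """Minimal DEAL-style pair of encoders: an attribute encoder (MLP over
    node features) and a structure encoder (a trainable embedding per node)."""
    def __init__(self, num_nodes, feat_dim, emb_dim=64):
        super().__init__()
        self.h_a = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(),
                                 nn.Linear(128, emb_dim))
        self.h_s = nn.Embedding(num_nodes, emb_dim)

    def score(self, x_u, v_idx):
        # Cosine similarity between the attribute embedding of node u
        # and the structure embedding of node v (higher = more likely link).
        za = F.normalize(self.h_a(x_u), dim=-1)
        zs = F.normalize(self.h_s(v_idx), dim=-1)
        return (za * zs).sum(-1)

    def align_loss(self, x, idx):
        # Tight alignment: push the two embeddings of the SAME node together.
        za = F.normalize(self.h_a(x), dim=-1)
        zs = F.normalize(self.h_s(idx), dim=-1)
        return (1 - (za * zs).sum(-1)).mean()
```

At inference time on a new node, only the attribute encoder is needed: its embedding is scored against the embeddings of existing nodes to rank candidate links, which is what makes the model inductive.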
Item Compatibility Graph
Our study considers two novel industrial proprietary datasets provided by H&M, where each node is associated with a fashion item and the presence of an edge between two nodes denotes style compatibility between the items. These graphs have been built from pairwise fashion item compatibility statements compiled by H&M domain experts, yielding two separate graphs, one for Men and one for Women clothing. Both contain a large number of products, each represented by an image, text description, colour, and other tabular data. These fashion graphs have been assembled specifically for this work, and this is the first graph-based predictive analysis performed on such data. We complement our analysis on proprietary data with a publicly available dataset, the Computers network [15], whose structural characteristics are akin to those of our industrial use case. All networks have been represented using the Open Graph Benchmark (OGB) [16] format, and the relevant properties of the aforementioned datasets are given in Table 1. The challenging aspect shared by all datasets is the high level of edge sparsity, nearing 100%, and the nontrivial proportion of disconnected nodes (i.e. with zero degree). The latter is particularly true for the fashion data. This is the key motivation for our approach, as we would like to enrich the graph edges by inductive learning before fitting the target transductive task to the data. With respect to this, Table 1 already anticipates the results of the inductive enrichment of the graph (marked in bold). One can clearly see a considerable drop in disconnected nodes, with a minor change to the edge sparsity. From a model selection perspective, we split the graph adjacency matrix A differently depending on the task to be performed (inductive or transductive link prediction). In particular, for the inductive link prediction task, we needed to ensure that one or both nodes of a test pair were not seen during the training process; for this reason, A is split on a node basis. For the transductive task, instead, we partition the network on an edge basis. It is important to mention that negative training edges are sampled uniformly during the training phase, while the validation ones are sampled in advance and kept fixed for the duration of the model assessment. After the final inductive model is chosen, we set different thresholds for the maximum node degree and the link existence probability. For the Men graph, the Women graph and Computers [15], the thresholds for maximum node degree are set to 5, 2, and 20, respectively, while the thresholds for the probability of link existence are set to 0.85, 0.99 and 0.60, respectively.
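A sketch of the enrichment step is given below. The function name and the greedy highest-score-first ordering are our own illustrative choices; what it implements are the two controls just described, a link-probability floor and a cap on node degree (e.g. degree 5 and probability 0.85 for the Men graph).

```python
import numpy as np

def enrich(scores, adj, max_degree, p_min):
    """Add predicted edges to a graph (illustrative helper, not the paper's code).

    scores: (n, n) symmetric array of link-existence probabilities from the
            trained inductive model; adj: boolean adjacency matrix.
    An edge (u, v) is added only if its probability is at least p_min and
    neither endpoint has reached the maximum-degree threshold."""
    n = adj.shape[0]
    degree = adj.sum(axis=1)
    new_adj = adj.copy()
    # Enumerate candidate non-edges above the probability floor. For large
    # graphs one would score only a candidate subset, not all n^2 pairs.
    cand = [(scores[u, v], u, v) for u in range(n) for v in range(u + 1, n)
            if not adj[u, v] and scores[u, v] >= p_min]
    for s, u, v in sorted(cand, reverse=True):
        if degree[u] < max_degree and degree[v] < max_degree:
            new_adj[u, v] = new_adj[v, u] = True
            degree[u] += 1
            degree[v] += 1
    return new_adj
```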
Experiments & Results
The data used for training the baselines and our proposed method are images and text descriptions of H&M's assortments. For each dataset, configuration, and task, hyperparameter selection has been performed using Optuna [17], a hyperparameter optimization software framework. For the inductive link prediction task, we trained DEAL-based [8] models with different architectures and configurations. The results are reported in Table 2. In particular, we consider two attribute-oriented encoder mechanisms: an MLP and a trainable Embedding layer [18]. The performance of both inductive (Table 2) and transductive (Table 3) link prediction is substantially improved when using visual features or a concatenation of visual and textual node features. DEAL_MLP achieved the best performance on the Men and Women graphs, while DEAL_EMB performed better on Computers [15] for two out of three metrics. The best performing configuration for each graph is used to perform graph enrichment for the successive transductive analysis. In Table 3 we can see how the enriched graph substantially improves link prediction performance for all three types of GNN considered (SAGE, GAT, and GCN), with respect to the metrics we considered, namely Accuracy, Receiver Operating Characteristic (ROC) Area Under the Curve (AUC) and Average Precision (AP). This improvement in performance may be explained by the fact that the enrichment process effectively completes the original graph, making the patterns more regular, more general, and therefore easier to learn.
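The three reported metrics can be computed from held-out edge scores with scikit-learn, as in the toy example below; the labels and scores are invented, with 1 marking a true edge and 0 a sampled non-edge.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, average_precision_score,
                             roc_auc_score)

# Hypothetical held-out edge scores from a trained link predictor.
y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])
y_prob = np.array([0.92, 0.71, 0.55, 0.40, 0.08, 0.33, 0.86, 0.61])

print("Accuracy:", accuracy_score(y_true, y_prob >= 0.5))
print("ROC AUC :", roc_auc_score(y_true, y_prob))
print("AP      :", average_precision_score(y_true, y_prob))
```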
Conclusions
We proposed an inductive learning approach for completing sparse graphs describing item compatibility information, and we applied our method both to a publicly available benchmark and to a novel industrial use case based on the product assortment of a global fashion retailer. The proposed approach consists of two steps. First, we learn an inductive model that we use to generate new links for those nodes of the graph that are disconnected or sparsely connected. We then train a transductive model using the enriched graph, showing that we achieve increased link prediction performance. Our hypothesis is that the inductive learning model manages to learn the patterns of the connected nodes and transfer them to the sparsely connected nodes, making the structure of the graph more regular. This makes sense since we know from the process generating the connections in the graph, which is manual and labor intensive, that many possible connections are missing in the original graph.
Future work will study the graph enrichment step more thoroughly, as in this work it has been carried out with a very simple methodology: selecting nodes below a maximum number of neighbours and thresholding the inductive link probability. A more principled approach could give further performance improvements.
"year": 2021,
"sha1": "45f7f09e6ea26265a33f689b3212d072af7eab90",
"oa_license": null,
"oa_url": "https://doi.org/10.14428/esann/2021.es2021-73",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "45f7f09e6ea26265a33f689b3212d072af7eab90",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Macular hole formation and spontaneous closure after vitrectomy for vitreomacular traction documented in spectral-domain optical coherence tomography
Background We present a case of macular hole formation and its spontaneous closure after vitrectomy for vitreomacular traction. To our knowledge, it is the first description of spontaneous closure of a macular hole after vitrectomy for vitreomacular traction. Case presentation A 78-year-old woman presented with decreased visual acuity and metamorphopsia in the right eye due to vitreomacular traction. A vitrectomy with internal limiting membrane peeling and an air tamponade was performed in the right eye. Spectral-domain optical coherence tomography was obtained during all visits. Seven days after the vitrectomy, spectral-domain optical coherence tomography showed a resolved vitreomacular traction and a full-thickness macular hole. Examination after a further three weeks showed that the full-thickness macular hole had spontaneously closed. Five months later, spectral-domain optical coherence tomography showed a normal foveal contour without intraretinal microcystic spaces and a resolution of the photoreceptor and external limiting membrane elevation. Conclusions While performing a vitrectomy for vitreomacular traction, the posterior hyaloid membrane creates anteroposterior traction on the fovea, and, during detachment, retinal layer damage occurs in the macular area and a full-thickness macular hole may develop. Removal of the anteroposterior vitreous traction may play the main role and may help the spontaneous closure of the macular hole after vitrectomy for vitreomacular traction.
Background
Pars plana vitrectomy is a well-established surgical procedure for the treatment of vitreomacular traction (VMT). Despite the high percentage of anatomic successes, some postoperative complications may occur, such as a macular hole [1][2][3].
Although the pathogenesis of macular hole formation after vitrectomy for VMT is not fully understood, newer diagnostic methods such as spectral-domain optical coherence tomography (SD-OCT) have provided additional information about this process.
To our knowledge, the current case is the first description of macular hole formation and spontaneous closure after vitrectomy for VMT clearly documented step by step with SD-OCT (Spectralis; Heidelberg Engineering, Heidelberg, Germany).
Case presentation
A 78-year-old female presented with a visual acuity of 0.04 and metamorphopsia in the right eye that had lasted for 6 months after an uncomplicated phacoemulsification with lens implantation performed in another department. SD-OCT examination showed VMT with an outer lamellar macular hole and an abnormal foveal contour (Figure 1a). A vitrectomy with internal limiting membrane (ILM) peeling and an air tamponade was performed by the author (D.O.). After the complete three-port pars plana vitrectomy, 0.15% trypan blue solution (Membrane Blue Dual; DORC, Zuidland, The Netherlands) was injected for 60 seconds. After removal of the trypan blue, ILM peeling was performed in the macular area. At the end of the surgery, a fluid-air exchange was performed. The patient received non-supine positioning (NSP) for 5 postoperative days.
Seven days after the vitrectomy, SD-OCT (Figure 1b) showed a resolved VMT and a full-thickness macular hole (FTMH) with cystoid spaces at the edges, as reported in the literature [3]. After a further three weeks, SD-OCT (Figure 1c) showed that the FTMH had spontaneously closed. The image shows a normal foveal contour with an elevation of the photoreceptor layer and of the external limiting membrane (ELM) in the foveal region and intraretinal microcystoid spaces. Figure 1d was recorded with SD-OCT 5 months later and showed a normal foveal contour without intraretinal microcystic spaces and a resolution of the photoreceptor and ELM elevation.
Conclusions
The posterior hyaloid membrane may play the main role in forming the FTMH in VMT. The posterior hyaloid membrane creates anteroposterior traction on the fovea, and, during detachment, retinal layer damage occurs in the macular area and an FTMH may develop. In our patient, the posterior hyaloid membrane was still attached to the ILM, which is why an FTMH had not appeared. The anteroposterior traction acted on the conglomerate of these two tissues. This traction pulled up on the retina, so that the edges of the hole were highly elevated, but the fovea was stabilized by the ILM and the posterior hyaloid membrane. The posterior hyaloid membrane creates traction on the ILM, and, during the surgically induced detachment, the complex of the posterior hyaloid membrane and ILM was probably removed in the macular area and the FTMH developed. As suggested by Charles [4], when operating on VMT cases, the posterior vitreous cortex should be delaminated from the fovea prior to any removal of the vitreous to prevent tearing the fovea. FTMH may develop in its natural course and after vitrectomy for VMT [3,5]. In none of these cases did spontaneous closure of the FTMH develop; they required another surgery to close the macular hole.
Eckardt et al. showed that about 91% of macular holes closed 3 days after surgery [6]. Jumper et al. also reported that macular holes with a diameter < 400 μm were closed 1 day after surgery [7]. Some authors reported that in non-supine positioning (NSP) patients, about 90% of macular holes were closed [8]. This is why, after vitrectomy, the patient received postoperative NSP for only 5 days. The necessity of face-down positioning (FDP) after vitrectomy with air/gas tamponade for macular hole surgery is still unclear. In our patient, after air absorption seven days after vitrectomy, the macular hole remained open [Figure 1b] and closed later [Figure 1c]. Based on this case, we can see that neither the air tamponade nor the positioning after vitrectomy affected the closure of the hole.
In our opinion, there are two possible mechanisms that could cause spontaneous closure of the macular hole. After surgically inducing posterior hyaloid detachment, the edges of the hole settled back down as the traction resolved. This could decrease the distance between the edges of the hole and bring them together by reducing the intraretinal cystoid spaces. The release of the mechanical traction may be the main reason for the eventual closure of the macular hole. On the other hand, ILM peeling induces glial cell proliferation across the hole, and this mechanism may also help the spontaneous closure of a macular hole.
While performing vitrectomy for VMT, the posterior hyaloid membrane creates anteroposterior traction on the fovea, and, during detachment, retinal layer damage occurs in the macular area and an FTMH may develop. Removal of the anteroposterior vitreous traction may play the main role and may help the spontaneous closure of the macular hole after vitrectomy for vitreomacular traction. ILM peeling may also help the spontaneous closure of a macular hole.
Consent
Written informed consent was obtained from the patient for publication of this Case report and any accompanying images. A copy of the written consent is available for review by the Editor of this journal.
"year": 2014,
"sha1": "19b331ff72cb9f4f8349fbdc26a5ab0d106b1e46",
"oa_license": "CCBY",
"oa_url": "https://bmcophthalmol.biomedcentral.com/track/pdf/10.1186/1471-2415-14-17",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e82979e9683443db0629b4426fa1fc00a45237a7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
ENERGY EFFICIENCY OF NITROGEN FERTILIZATION IN DURUM WHEAT AND SORGHUM GRAINS
The objective of this study was to assess the energy efficiency of nitrogen fertilization in durum wheat and sorghum grains in the period 2017-2019. Bulgarian durum wheat variety Predel was studied in a stationary fertilizer trial on soil type Pellic Vertisols at the Institute of Field Crops in Chirpan, Bulgaria. Grain sorghum hybrid EC Alize was investigated on the experimental field of the Agricultural University of Plovdiv, Bulgaria, on soil type Mollic Fluvisols. The crops were grown under non-irrigated conditions. The studied nitrogen rates were 0, 60, 120, 180, and 240 kg N ha⁻¹. In durum wheat, nitrogen was applied two times: one third at sowing, and the rest as top dressing in the tillering stage. In sorghum, the total nitrogen was applied as pre-sowing fertilization before sowing. The nitrogen fertilizer was applied as NH4NO3. The experimental design was a randomized complete block design with four replications, with experimental plots of 20 m² for both crops. The energy efficiency of nitrogen fertilization (η) was calculated as the ratio between the energy received from the additional grain yield of wheat and sorghum, respectively, and the energy invested in fertilization. It was established that the energy efficiency of nitrogen fertilization depended on the nitrogen rate and the hydro-thermal conditions during the vegetation period of durum wheat and sorghum. The bioenergy coefficient of durum wheat varied widely, from 0.79 (N240 in 2018) to 4.44 (N60 in 2017). Averaged over the period, the highest value of energy efficiency of nitrogen fertilization was obtained at the low rate N60. The higher nitrogen rate of 240 kg N ha⁻¹ was only slightly effective. Under drought conditions during the vegetation period of sorghum, most effective was the application of rate N120, with the highest energy coefficient of 1.23. The application of 180 kg N ha⁻¹ to sorghum was the most energy efficient under the favorable hydro-thermal conditions in 2018 and 2019, and on average for the period 2017-2019. A low N60 rate in grain sorghum was inefficient from an energy point of view. Durum wheat showed higher energy efficiency of nitrogen fertilization compared to grain sorghum. UDC Classification: 633.1, DOI: https://doi.org/10.12955/pns.v1.126
Introduction
Intensive agricultural technologies and increased yields are accompanied by an increase in the cost of non-renewable, exhaustible energy. One of the most important resources within agriculture is nitrogen (N), and depletion of N resources is an important element in the evaluation of sustainability in agriculture. Energy is used directly in land preparation, tillage operations, sowing, irrigation and harvesting, and indirectly in inputs such as seed, fertilizers, pesticides and irrigation water. The comparison of the energy productivity of different crops can be used as an effective tool to prioritize crop planting in each area. The output energy is obtained in the form of feed, fodder, fruits, vegetables, seed and grain. Therefore, the energy efficiency of fertilization needs to be taken into account in sustainable agriculture. Agricultural production efficiency is defined as the ratio between the amount of input energy, including N fertilizers, and the energy contained in the obtained products. In an energy crop context, sustainability in crop production could aim at enhanced energy output with maintained or reduced depletion of N resources (Pourazaria et al., 2015). The energy accumulated in crops is estimated in megajoules (MJ) and reported for basic production, total production and additional production. Authors have reported the use of machinery, fuel, irrigation and fertilization as the main components of the energy balance of field crop rotations, and recommended minimal tillage, lower fertilization rates, timely renewal of machinery and the use of renewable energy sources (Azarpour, 2012; Meyer-Aurich et al., 2012). The manufacture, packaging, transport and use of mineral fertilizers account for about 45% of the energy used in agriculture (Mudahar and Hignett, 1987). In this context, the fertilizer used effectively represents input energy in agricultural production. Nitrogen fertilization is a major cost of non-renewable energy sources in agriculture, and given limited energy resources it is important to find ways to increase its energy efficiency (Hosseinpanahi and Kafi, 2012). A significant increase in grain yield was achieved through the use of both new cultivars and the input of a larger amount of energy in the form of fertilizers, mechanization and pesticides (Faidley, 1992). According to Piringer (2006), the share of nitrogenous fertilizer alone in the total energy input of US-grown wheat was 47%, whereas in a study from Australia, the share of all fertilizers (i.e. nitrogen, phosphorus, and potash) was 47%. According to various sources, the specific energy content is 58-90 MJ per kg N in nitrogen fertilizers, 44 MJ per kg P2O5 in phosphorus fertilizers, and 2.27 MJ per kg K2O in potassium fertilizers (Mineev, 2004). The efficiency of agricultural productivity is defined as the correlation between the amount of input energy (which could be in the form of N or other fertilizers) and the energy of production (Hulsbergen et al., 1997). Considerable research has been conducted on the energy use patterns of field crops under different management practices around the world. Most of the work related to the energy use pattern of different crops was on wheat (Mirasi et al., 2014; Moghimi et al., 2013) and cotton (Zahedi et al., 2014). The results of long-term studies in Iran show that nearly 80% of the energy consumed in Iran's agriculture is non-renewable (Beheshti et al., 2010).
According to Glogova (2013), the highest energy yield for sweet and popcorn maize is achieved with the fertilizer rate N220P100K80; compared with the control N0P0K0, the effect of fertilizers at this rate is highest: 35% for sweet maize and 23% for popcorn maize. Ozkan et al. (2004) reported that animal manures have more effective nutritional effects than chemical fertilizers and their production requires less energy, such that the consumption of one ton of animal manure corresponds to only 300 MJ ha⁻¹, equivalent to only 5 kg of N fertilizer. Thus, the consumption of fertilizers of natural origin helps to greatly reduce energy consumption in the production system and increase its productivity. The most common biomass production includes corn, wheat, sugarcane, sugar beet, and sweet sorghum (López-Sandin et al., 2018; Vermerris & Saballos, 2013). However, yields vary according to variety, cultivation conditions (soil, water, climate, pests, and diseases), inputs, and agronomic practices (Mishra et al., 2017). In comparison with other crops, sweet sorghum requires fewer inputs due to the favorable combination of its agronomic and technological characteristics, making it one of the best raw materials for the production of sugar and biofuels (Bonin et al., 2016). In Bulgaria, the energy efficiency of nitrogen has been studied only for some field crops (Rachovski et al., 2010). There are no such studies for grain sorghum or durum wheat. The negative effects associated with increased energy production may be mitigated if renewable energy sources are employed and increased efficiency of the related production processes is attained, so that energy consumption decreases without affecting quality of life (Rocha et al., 2018). The aim of this research was to study the energy efficiency of nitrogen fertilization in sorghum and durum wheat grains and to establish in which crop nitrogen fertilization leads to higher energy efficiency. The results are discussed in an agricultural sustainability perspective.
Data and methodology
The investigation was carried out in Southern Bulgaria in 2017-2019 under non-irrigated conditions. The experimental design for sorghum and durum wheat was a randomized complete block design with four replications, with experimental plots of 20 m². The rates of nitrogen fertilization, applied as NH4NO3 for both crops, were 0, 60, 120, 180 and 240 kg ha⁻¹. The nitrogen fertilization was applied on a background of P50K50 fertilization as triple superphosphate and potassium chloride, respectively. Standard farming practices for both crops for the region of Southern Bulgaria were applied. The investigation of the sorghum hybrid EC Alize was carried out on the experimental field of the Agricultural University of Plovdiv. The predecessor was wheat. Total nitrogen was applied pre-sowing. The soil type of the experimental field is alluvial-meadow Mollic Fluvisols (FAO, 2006) with a slightly alkaline reaction, pH(H2O) = 7.80. The content of available nutrients in the soil before sowing of the sorghum was: mineral N - 27.6 mg N kg⁻¹; available phosphorus (Egner-Riehm) - 158 mg P2O5 kg⁻¹; and exchangeable potassium - 210 mg K2O kg⁻¹. The investigation of durum wheat was carried out on the testing field of the Field Crop Institute, Chirpan, near Plovdiv, in a cotton-durum wheat crop rotation. Nitrogen was applied two times: 1/3 pre-sowing and 2/3 as early spring dressing. The soil was Pellic Vertisols (FAO, 2006). Soil analysis before the experiment indicated: sorption capacity - 35-50 mequiv per 100 g soil; bulk density - 1.1-1.2 g cm⁻³; specific gravity - 2.6-2.7; organic matter - 2.0-2.4%; mineral N - 30-35 mg N kg⁻¹; available phosphorus and potassium - 70-90 mg P2O5 kg⁻¹ and 240-280 mg K2O kg⁻¹, respectively. The values of temperature and precipitation during the vegetation period characterized the hydro-thermal conditions of 2017 as warm and dry. In contrast, the months of May, June and July of 2018 were very humid: the amount of precipitation was nearly twice the long-term norm. The conditions during the vegetation period of 2019 were similar to those of 2018. The energy efficiency of fertilization was assessed using the following indexes (Mineev, 2004): 1. The amount of energy stored in the main agricultural production as a result of fertilization: E = D × R × L, where E is the energy content of the main production resulting from fertilization, MJ ha⁻¹; D is the additional yield of main production as a result of fertilization, kg ha⁻¹; R is the coefficient for converting a unit of agricultural production to dry matter; and L is the total energy content of 1 kg dry matter of the main production, MJ. The values of L and R are 19.13 MJ and 0.86 for durum wheat, and 18.34 MJ and 0.86 for sorghum, respectively. 2. The energy consumption (A) of the nitrogen fertilizer input: A = R_N × 86.6 (MJ ha⁻¹), where R_N is the nitrogen rate in kg of active matter per hectare and 86.6 is the energy content of nitrogen (MJ per kg of active matter). 3. The energy efficiency of the N fertilizer used (η): η = E / A, where η is the energy efficiency (energy use efficiency); E is the amount of energy received in the additional main production from the N input, MJ; and A is the energy consumed by the input of nitrogen fertilizer, MJ. For the statistical evaluation of fertilization energy efficiency (η), Duncan's multiple range test (Duncan, 1955) at P < 0.05 was used.
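As a worked example of the three indexes, the snippet below plugs in the durum wheat coefficients (L = 19.13 MJ, R = 0.86) and the additional yield reported later for N180 in 2019 (D = 2450 kg ha⁻¹); it reproduces the 40307 MJ ha⁻¹ energy figure quoted in the Results.

```python
# Worked example of the energy indexes E, A and eta for durum wheat.
L, R = 19.13, 0.86   # MJ per kg dry matter; dry-matter conversion coefficient
D = 2450             # additional grain yield, kg/ha (N180, 2019, Table 1)
N_rate = 180         # applied nitrogen, kg N/ha

E = D * R * L        # energy stored in the additional yield, MJ/ha
A = N_rate * 86.6    # energy invested in the N fertilizer, MJ/ha
eta = E / A          # energy efficiency of fertilization

print(f"E = {E:.0f} MJ/ha, A = {A:.0f} MJ/ha, eta = {eta:.2f}")
# -> E = 40307 MJ/ha, A = 15588 MJ/ha, eta = 2.59
```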
Results and Discussion
The amount of additional yield of durum wheat and sorghum grain resulting from the applied nitrogen fertilization of 60, 120, 180 and 240 kg N ha⁻¹ depended on the weather conditions during the growing season of the two crops. The lowest additional average yield of durum wheat, 1198 kg ha⁻¹, was obtained in 2018 (Table 1). The average additional yield of wheat in 2017 and 2019 was higher by 26.7% and 50.1%, respectively. Nitrogen fertilization above the low rate N60 increased the additional wheat grain yield. The only exception to this dependence was observed at the high fertilization rate N240 in 2017, when a 16.1% lower additional wheat grain yield was obtained. Nitrogen fertilization of durum wheat at a rate of 180 kg N ha⁻¹ resulted in the highest additional grain yield in 2018 (1700 kg ha⁻¹) and in 2019 (2450 kg ha⁻¹). The additional yield at this rate, averaged over the study period, was 89.6%, 19.3% and 39.8% more than the additional yield obtained at fertilization with N60, N120 and N240, respectively.
Source: Authors
The energy in the additional grain yield follows the pattern of the amount of additional yield resulting from the applied nitrogen fertilization of wheat and the influence of weather conditions during the growing season. The lowest energy in the additional grain yield of wheat, 12997 MJ ha⁻¹, was obtained by fertilization with N60 in 2018, and the highest, 40307 MJ ha⁻¹, at N180 in 2019. On average over the three years, the greatest amount of energy in the additional grain yield was obtained at the rate N180. The high nitrogen rate N240 reduced the amount of energy in the additional grain yield of durum wheat by 28.5% on average compared to fertilization with N180. Ziaei et al. (2015) showed that the total energy inputs of wheat fields across all agricultural activities were 32492 MJ ha⁻¹. Total energy outputs for wheat and barley fields were 48517 and 49801 MJ ha⁻¹, respectively. Based on these results, the energy use efficiency of the wheat fields was 1.49, and the energy productivity was 0.056. The results of Jadida et al. (2012) revealed that wheat production consumed a total of 37694.6 MJ ha⁻¹, of which fertilizers accounted for 52.8%, followed by diesel fuel (15.3%). The amount of additional grain obtained from sorghum had the lowest average value in 2017 due to the very dry conditions during the vegetation of the plants (Table 2). The favorable weather conditions during the vegetation periods of 2018 and 2019 led to a higher average additional yield, by 62.9% and 33.7% respectively, compared to 2017. The lowest additional grain yield of sorghum, in the range 270-420 kg ha⁻¹, was obtained at the low nitrogen rate N60. Sorghum reacted very positively to nitrogen fertilization. On average for the period, the application of rates N180 and N240 increased the amount of additional yield 4.33-4.41 times compared to rate N60. For sorghum, the lowest and highest energy values of the additional yield were obtained at fertilization with 60 kg N ha⁻¹ in 2019 (4260 MJ ha⁻¹) and with N240 in 2018. The resulting energy in the additional yield was higher in all variants of nitrogen fertilization in 2018, which was characterized by more rainfall during the growing season of sorghum. On average over the three-year experimental period, the highest amount of energy in the additional grain yield, 23661 MJ ha⁻¹, was obtained at the high rate N240.
Our results corroborated that the N level and the year of cultivation exerted important effects on durum wheat and sorghum grain production. Díaz et al. (2018) reported that crop management had important effects on the sorghum energy balance. The energy produced varied between 126 and 365 GJ ha⁻¹ depending on crop management, hybrid and growing season. The amount of fertilizer energy input increased in parallel with the nitrogen rate (Figure 1). The nitrogen energy input varied from 5208 MJ ha⁻¹ at the low rate N60 to 20832 MJ ha⁻¹ at N240. Ansari et al. (2018) indicate that the average energy consumption for nitrogen in wheat production is 6878 MJ ha⁻¹, which is lower than our results. The energy efficiency of nitrogen fertilization in durum wheat and sorghum varied depending on the amount of incorporated nitrogen and the energy produced in the additional grain yield (Table 3).
Source: Authors
The energy efficiency of nitrogen fertilization in durum wheat decreased with the increase in the amount of applied nitrogen fertilizer. On average, the use of N120, N180 and N240 reduced the energy efficiency of fertilization by 20.6%, 36.8% and 66.3%, respectively, compared to N60. The application of the increased rate N240 in 2017 and 2018 was not an effective agronomic measure from an energy point of view, because the values of energy efficiency (η) were lower than one, indicating that the energy in the additional grain yield was less than the energy input from nitrogen fertilization. From an energy point of view, low to moderate nitrogen fertilization of durum wheat was effective: at rates N60-N120, the values of the coefficient η ranged within 2.05-4.33. The energy efficiency of nitrogen fertilization in sorghum varied within a narrower range compared to durum wheat. The bioenergy coefficient (η) over the three-year experimental period ranged from 0.67 (N240 in 2017) to 1.81 units (N180 in 2018). The severe drought during the sorghum growing period in 2017 reduced the energy efficiency of nitrogen fertilization at all studied rates N60-N240.
Fertilization of sorghum at rates N120 and N180 had a highly significant energy effect, with bioenergy coefficient (η) values of 1.39 and 1.49. Low energy efficiency was found at N240. This high rate decreased the energy efficiency of fertilization, and its application to sorghum is not suitable in energy terms. According to Pourazaria et al. (2015), in Central Sweden the N uptake efficiency and yield-specific N efficiency were higher in maize than in wheat and ley. The yield N concentration was higher in the perennial ley than in the annual crops, and lowest in maize. Energy output per N lost in the harvested product was greater in maize compared with wheat and ley. Khan et al. (2010) found that energy efficiency was higher in the wheat crop (9.21) compared to rice (6.70) and barley, where it was 8.21. Piringer (2006) points out that the benefit-cost ratio remained the highest for the rice crop (3.33) compared to wheat (2.82) and barley (2.50). Uhr & Vasileva (2015) indicated that the maximum gross energy yield of wheat grain was reported at fertilization with 0.18 t/ha fertilizer nitrogen in a cereal predecessor and 0.06 t/ha after cereal, and that increasing the fertilizer rate from 0.0 t/ha to 0.18 t/ha nitrogen reduced the difference in the gross energy productivity of legume crops. In the case of Turkey, studies showed that the inputs used in agricultural production were not used efficiently, leading to many environmental problems (Canakci et al., 2005; Hatirli et al., 2005; Kardoni et al., 2015). Hence, they suggested that sustainable agriculture should be extended and conscious farming promoted. According to the report by Moore (2010), to achieve a sustainable system of food production, energy efficiency and the share of renewable energies in agricultural systems should be increased. A high consumption of non-renewable energy reduces the energy use efficiency of production systems, because the production of chemicals and the use of machinery, as the main inputs of conventional systems, require large amounts of energy.
Conclusions
The results of this study indicate that the energy efficiency of fertilization depends on the nitrogen rate and the weather conditions during the vegetation period of durum wheat and sorghum. The energy input from nitrogen fertilization should be reduced to increase the energy efficiency of durum wheat and sorghum production. The highest value of energy efficiency of nitrogen for durum wheat was obtained at the low rate N60, while for sorghum the application of rate N120 was most effective. The higher nitrogen rate of 240 kg ha⁻¹ was only slightly effective. Durum wheat showed higher energy efficiency of nitrogen fertilization compared to grain sorghum. The results are discussed in an agricultural sustainability perspective.
"year": 2020,
"sha1": "0d105cb14c28d57532866335aa1b49bdb53e9fd1",
"oa_license": "CCBY",
"oa_url": "https://ojs.cbuic.cz/index.php/pns/article/download/126/216",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "16e98b7e87e7a17a2f16caec2c16260128403dc5",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
Trigeminal schwannoma presenting with malocclusion: A case report and review of the literature
Background: Trigeminal schwannomas are rare tumors of the trigeminal nerve. Depending on the location from which they arise along the trigeminal nerve, these tumors can present with a variety of symptoms that include, but are not limited to, changes in facial sensation, weakness of the masticatory muscles, and facial pain. Case Description: We present a case of a 16-year-old boy with an atypical presentation of a large trigeminal schwannoma: painless malocclusion and unilateral masticatory weakness. To the best of our knowledge, this case is the first documented instance in which a trigeminal schwannoma has led to underbite malocclusion; it is the 19th documented case of unilateral trigeminal motor neuropathy of any etiology. We discuss this case as a unique presentation of this pathology and review the relevant anatomy implicated in clinical examination, to aid the further understanding of trigeminal nerve pathology. Conclusion: We believe our patient's underbite malocclusion occurred secondary to his trigeminal schwannoma, resulting in associated atrophy and weakness of the muscles innervated by the mandibular branch of the trigeminal nerve. Furthermore, understanding the trigeminal nerve anatomy is crucial in localizing lesions of the trigeminal nerve.
INTRODUCTION
Schwannomas of the trigeminal nerve are rare; they constitute about 0.07-0.3% of all intracranial tumors and 0.8-5% of intracranial schwannomas. [1] Uncommon in pediatrics, the incidence of trigeminal schwannomas is highest in the middle decades of life, with 38-40 years being most common. [18] However, in an analysis of 73 cases, Goel et al. found the highest incidence of trigeminal schwannomas to be in young adults between the ages of 21 and 30 (28.7%). [10] Glasauer and Tandon report the incidence of trigeminal schwannomas to be at least 10% in adolescents, [9] while Goel et al. report an incidence of 17.8%. [10]
CASE DESCRIPTION
A 16-year-old boy with a history of right-sided Bell's palsy presented with complaints of a severe underbite and excessive drooling on the left side. These symptoms were progressive over several years, to the point that the patient could not comfortably close his mouth. The patient had an unremarkable dental history with no previous oral trauma/procedures. On examination, the patient had facial asymmetry at rest, with normal function of the facial nerve bilaterally. Facial sensation was intact symmetrically in the V1, V2, and V3 distributions to light touch. He had nystagmus on right lateral gaze and mild weakness of the left masseter. There was no other indication of brainstem or cranial nerve dysfunction. The remainder of the physical examination was normal.
A maxillofacial computed tomography (CT) scan without contrast was performed, demonstrating a 5.2 × 6.6 × 4.3 cm extra-axial mass with compression and displacement of the brainstem [Figure 1]. There was slight dilatation of the third and lateral ventricles, representing mild obstructive hydrocephalus, and erosion of the base of the skull involving the carotid canal, foramen rotundum, pterygoid canal, foramen ovale, middle cranial fossa floor, and the internal auditory canal [Figure 1]. An underbite occlusion was noted, with associated atrophy of the left masticator, mylohyoid, and anterior belly of digastric muscles and an underdeveloped left mandible body and ramus [Figure 1].
CT angiography (CTA) of the head and magnetic resonance imaging (MRI) of the brain with and without contrast were performed for further evaluation. CTA demonstrated bony remodeling of the left sphenoid and petrous portion of the temporal bones related to the mass, with external compression (approximately 50% narrowing) of the cavernous and petrous segments of the left internal carotid artery, the mass extending posteriorly to the roof of the remodeled left petrous temporal bone [Figure 5]. He was evaluated by occupational therapy, physical therapy, speech-language pathology, and physical medicine and rehabilitation physicians while inpatient and was discharged to inpatient rehabilitation on postoperative day 16. The patient experienced left cranial nerve IV, V, VI, and VII neuropathies, manifesting as abnormal left eye adduction and external rotation, absent left-sided corneal reflex, left-sided facial weakness, left-sided lagophthalmos, and dysarthria/slurred speech. On follow-up, the patient continues to show significant improvement in the left cranial nerve VI palsy, with improvement, though some residual impairment, of left cranial nerve IV, V, and VII function. Despite these deficits, he is functionally independent with normal breathing and swallowing function. Imaging at 12-month follow-up showed stable residual tumor. He is currently enrolled in a community college in good standing.
DISCUSSION
Clinical suspicion for trigeminal schwannomas should be raised in the presence of slowly progressive symptoms with a predominance of trigeminal nerve-related symptoms, including facial numbness and masticatory muscle wasting. [10] To understand the signs and symptoms caused by trigeminal schwannomas, it is important to understand the anatomy of the trigeminal nerve and the classification of trigeminal schwannomas.
Anatomy of the trigeminal nerve
Trigeminal nerve anatomy is complex but crucial in understanding the localization of symptomatology and pathology of related lesions. The trigeminal nerve is the largest of the 12 cranial nerves. It originates from the brainstem as four nuclei: three sensory and one motor. [8] Exiting the brainstem are fibers composing the trigeminal root, of which there are two parts, the sensory and motor roots. [8] Going further distal, the somas of the unipolar sensory neurons, whose axons compose the sensory root, convene at the trigeminal ganglion, which is located within Meckel's cave. [8] Arising from the trigeminal ganglion are the three divisions into which the trigeminal nerve is classically divided: V1 (ophthalmic division), V2 (maxillary division), and V3 (mandibular division), each of which gives off terminal branches. [8] Of these, V1 and V2 are purely sensory, and V3 is mixed, supplying both sensation to the face and motor innervation to the muscles of mastication (temporalis, masseter, and medial and lateral pterygoids), tensor tympani, tensor veli palatini, mylohyoid, and the anterior belly of the digastric. [8] Each of these trigeminal nerve branches enters/exits the skull through a different pathway (the superior orbital fissure for V1, foramen rotundum for V2, and foramen ovale for V3). [8] In addition, the trigeminal nerve is commonly discussed by segments: brainstem, cisternal, Meckel's cave, cavernous, and peripheral segments. [3] This classification system is most helpful during radiographic interpretation of the trigeminal nerve. [3] Lesions of the trigeminal nerve arising at any of these segments can cause a characteristic clinical presentation that can be used to further localize the lesion and determine which imaging modality would be most useful in helping to make the diagnosis. [3] In our case, UTMN points to V3 involvement, with the symptomatology pointing to motor involvement only. Similarly, several trigeminal segments were involved by the lesion in this case, including the cisternal, Meckel's cave, cavernous, and peripheral segments.
Other systems were developed to further classify trigeminal schwannomas by location; Jefferson described Type A, B, or C trigeminal schwannomas, depending on their location in the middle, posterior, or both middle and posterior fossas, respectively. [11] Type D was later added by Samii et al. to describe tumors originating predominately in the extracranial space. [20] The location of trigeminal schwannomas is important to describe/classify as it corresponds to the signs and symptoms with which patients present. For example, Type A and C trigeminal schwannomas, which have middle fossa involvement and originate at the trigeminal ganglion located in Meckel's cave, commonly present with facial pain. [18,21] Type B or C schwannomas, which have posterior fossa involvement, can compress the brainstem, cerebellum, and cerebellar peduncles, resulting in lower cranial nerve deficits as well as pyramidal, cerebellar, and long tract signs. Type D schwannomas that originate or extend predominately along V1, V2, or V3 can affect the cavernous sinus segments of V1 or V2, or the peripheral segments of V1-V3. This, in turn, can result in facial sensory deficits; compression of cranial nerves III, IV, and/or VI within the cavernous sinus if there is involvement of these portions of V1 or V2; proptosis if it extends along the length of V1 as it travels through the superior orbital fissure; and atrophy/weakness of the muscles of mastication if it involves V3. Our case is best described as a Type D trigeminal schwannoma because it consisted of a predominately extra-axial mass with involvement of multiple cranial foramina, isolated pathology of the motor division of V3 presenting with UTMN, mild obstructive hydrocephalus, and marked compression of the brainstem.
The mandibular division/V3 can be divided into three trunks: the undivided trunk, the anterior trunk, and the posterior trunk. [8] Each trunk gives off a specific branch or branches that innervate certain muscles supplied by V3. Branching from the undivided trunk are the tensor tympani, tensor veli palatini, and medial pterygoid nerves, which supply the muscles from which they are named. [8] Coming from the anterior trunk are the masseteric and deep temporal nerves, as well as the nerve to the lateral pterygoid, which supply the muscles from which they are named. [8] Coming from the posterior trunk is the nerve to mylohyoid, which supplies both the mylohyoid muscle and the anterior belly of the digastric. [8] Thus, knowing the anatomy of the trigeminal nerve, specifically the trunks of V3 and their branches, can allow for better localization of trigeminal nerve lesions when they cause motor deficits in the muscles innervated by V3. In our case, this helps identify the main area of symptomatology related to our patient's lesion as involving all three of the V3 trunks, since muscles innervated by each trunk were affected, as evidenced by physical examination and imaging findings. For example, there was atrophy of the anterior belly of the digastric, mylohyoid, and left masticatory muscles on imaging. Since these muscles are involved in retracting and elevating the mandible, our patient's underbite may have been a result of weakness of these motions as well as impaired jaw mechanics due to disruption of normal movements and developmental asymmetry.
CONCLUSION
This case demonstrates a unique presentation of an adolescent patient with a trigeminal schwannoma manifesting as a pure UTMN, resulting in severe underbite malocclusion, the first documented instance of this etiology. Although most trigeminal schwannomas present with sensory loss and pain, our patient presented with a severe underbite as a result of weakness of the muscles innervated by the mandibular branch of the trigeminal nerve.
Due to the complex anatomy of the trigeminal nerve, tumors of this nerve can present with a variety of symptoms. Understanding the relevant anatomy is key to localizing pathology.
Declaration of patient consent
The authors certify that they have obtained all appropriate patient consent.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest. | 2020-08-13T10:02:36.519Z | 2020-08-08T00:00:00.000 | {
"year": 2020,
"sha1": "9311dd14ef43612b740f56ef34a210b9dde074ae",
"oa_license": "CCBYNCSA",
"oa_url": "https://scholarworks.iupui.edu/bitstream/1805/25766/1/SNI-11-230.pdf",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "684c70beee5da789df92acd50a1ec14980514d57",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
8779802 | pes2o/s2orc | v3-fos-license | Calmodulin Is Required for Vasopressin-stimulated Increase in Cyclic AMP Production in Inner Medullary Collecting Duct*
Calmodulin plays a critical role in regulation of renal collecting duct water permeability by vasopressin. However, specific targets for calmodulin action have not been thoroughly addressed. In the present study, we investigated whether Ca2+/calmodulin regulates adenylyl cyclase activity in the renal inner medullary collecting duct. Rat inner medullary collecting duct suspensions were incubated in the presence or absence of 0.1 nM vasopressin and the calmodulin inhibitors monodansylcadaverine, W-7, and trifluoperazine, followed by measurement of cAMP. Vasopressin-stimulated cAMP elevation was significantly attenuated in the presence of calmodulin inhibitors. Analysis of transglutaminase 2 knock-out mice confirmed that these compounds were not acting through inhibition of transglutaminase 2 activity. Calmodulin inhibitors also blocked both cholera toxin- and forskolin-stimulated cAMP accumulation. In isolated perfused tubules, W-7 reversibly blocked vasopressin-stimulated urea permeability, a process that requires a rise in intracellular cAMP but does not appear to involve protein trafficking to the apical plasma membrane. These results suggest that calmodulin is required for vasopressin-stimulated adenylyl cyclase activity in the intact inner medullary collecting duct. Reverse transcription-PCR, immunoblotting, and immunohistochemistry revealed the presence of the calmodulin-sensitive adenylyl cyclase type 3 in the rat collecting duct, an isoform previously not known to be expressed in the collecting duct. Long-term treatment of Brattleboro rats with a vasopressin analog markedly decreased adenylyl cyclase type 3 protein abundance, providing an explanation for long-term down-regulation of the vasopressin response in the collecting duct. These studies demonstrate the importance of calmodulin in the regulation of collecting duct adenylyl cyclase activity and transport function.
The collecting duct portion of the mammalian renal tubule regulates water and solute transport via the action of the antidiuretic hormone arginine vasopressin (AVP). AVP is released from the posterior pituitary in response to elevated plasma osmolality and binds to V2 receptors on the basolateral surface of the collecting duct epithelium, triggering a G-protein-linked signaling cascade, which leads to an elevation of cAMP and water channel aquaporin-2 (AQP2) vesicle insertion into the apical plasma membrane (1). Recently we demonstrated that calmodulin (CaM), a ubiquitous Ca2+-binding protein, is required for AQP2 vesicle trafficking in response to vasopressin stimulation (2). Preincubation of isolated perfused rat inner medullary collecting duct (IMCD) with the CaM inhibitors W-7 and trifluoperazine (TFP) blocked AVP-stimulated water permeability. Further investigation revealed that CaM activates myosin light chain kinase and subsequent nonmuscle myosin II-dependent vesicle trafficking of AQP2 (3).
In this paper, we sought to identify a role for CaM in regulating more proximal events in the collecting duct response to vasopressin, which could have an effect on other collecting duct functions including urea and Na+ transport. Given that CaM is known to regulate a wide range of cellular processes, it is reasonable to assume that this protein could act at multiple levels in the vasopressin-signaling pathway. One of the major secondary messengers that is increased in response to AVP is cAMP. Elevation of cAMP is required for AQP2 vesicle exocytosis (4) as well as the corresponding increase in collecting duct water permeability (5). Other collecting duct proteins regulated by cAMP include urea transporter UT-A1 (6) and the epithelial sodium channel (7).
Measuring cAMP in enriched IMCD fractions, we found that elevation of cAMP in response to AVP requires CaM. Further analysis suggested that CaM is acting at the level of adenylyl cyclase. This is the first demonstration of CaM-dependent cAMP accumulation in response to AVP in intact IMCD tubules, which supports prior conclusions from studies in cultured LLC-PK1 cells (8) and mouse outer medulla (9). In addition, we present evidence showing that CaM is required for AVP-mediated urea permeability in isolated perfused IMCD, another process that is cAMP-dependent (10), suggesting that CaM may play a broader regulatory role in the collecting duct than initially thought.
We utilized RT-PCR, immunoblotting, and immunohistochemistry to look for the presence of a CaM-sensitive adenylyl cyclase (AC) isoform in IMCD cells. Of the nine mammalian AC isoforms identified, three have been shown to be calmodulin-sensitive: AC1, -3, and -8 (11). AC1 and -8 are expressed mainly in tissues of the central nervous system, whereas AC3 has a broader profile, having been found in olfactory neuroepithelium (12), testes (13), brown adipose tissue (14), and uterus (15). Our studies demonstrated the presence of a single CaM-sensitive adenylyl cyclase isoform in IMCD, namely AC3. In the collecting duct, AC3 may act as the target cyclase for Ca2+/CaM-dependent cAMP accumulation in response to vasopressin.
EXPERIMENTAL PROCEDURES
Animals-Pathogen-free male Sprague-Dawley rats (Taconic Farm Inc., Germantown, NY) were maintained on an autoclaved pelleted rodent chow (413110-75-56, Zeigler Bros., Gardners, PA) and ad libitum drinking water. All experiments were conducted in accordance with an animal protocol approved by the Animal Care and Use Committee of the NHLBI, National Institutes of Health (ACUC protocol number 2-KE-3). Transglutaminase 2 (TG2) knock-out mice and wild-type mixed background mice, a kind gift of Dr. Gerry Melino (University of Roma, Italy) (16), were maintained on the same autoclaved pelleted rodent chow and ad libitum drinking water. Immunoblotting as well as PCR amplification of tail genomic DNA was used to distinguish knock-out from wildtype mice. All mouse experiments were conducted in accordance with animal protocol H-0047 approved by the Animal Care and Use Committee of the NHLBI.
IMCD Suspensions-IMCD suspensions were prepared from inner medulla of rat kidney using the method of Stokes et al. (19) with some modifications (20). Briefly, rats were killed by decapitation, and whole inner medullas were removed and finely minced with a razor blade. Minced tissue was incubated for 90 min at 37°C with gentle agitation in a collagenase/hyaluronidase solution to dissociate individual tubule segments. After incubation, the sample was centrifuged at 80 × g for 30 s to enrich for heavier IMCD structures, followed by centrifugation of the supernatant at 1500 × g for 5 min to pellet the lighter non-IMCD fragments. Pellets were resuspended in either bicarbonate buffer (118 mM NaCl, 25 mM NaHCO3, 5 mM KCl, 4 mM Na2HPO4, 1.2 mM MgSO4, 2 mM CaCl2, 5.5 mM glucose) for measurement of cAMP or Laemmli buffer for immunoblotting.
Measurement of cAMP-Fifty-microliter aliquots of IMCD suspensions were preincubated at 37°C for 10 min with 0.5 mM isobutyl methylxanthine in the presence or absence of various CaM inhibitors, followed by incubation with 0.1 nM AVP for 5 min. Samples were pelleted at 1000 × g for 1 min, and the supernatant was discarded. Tissue pellets were lysed by adding 0.2 N HCl and incubating for 20 min at room temperature, followed by centrifugation at >10,000 × g for 10 min. Supernatants were saved for measuring cAMP, and pellets were used to measure protein content (BCA assay, Pierce). cAMP content was measured using a non-radioactive enzyme immunoassay kit (Cayman Chemical, Ann Arbor, MI) based on competitive binding between endogenous cAMP and an exogenous cAMP-tagged acetylcholinesterase tracer. Samples were run in 96-well microtiter plate format and measured at λ = 414 nm on a plate reader (Labsystems Multiskan MCC/340). Absorbance data were analyzed using a spreadsheet program provided by Cayman Chemical, which calculated cAMP content in pmol/ml. This value was then normalized to total protein, which had been measured previously using the BCA assay. The final value was expressed as fmol of cAMP/µg of protein.
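The normalization arithmetic described above (an EIA readout in pmol/ml scaled to total protein from the BCA assay) can be written out explicitly. This is a minimal sketch, not the authors' analysis spreadsheet; the extract volume and the example readings below are hypothetical placeholders.

```python
# Convert an EIA cAMP readout to the normalized value reported in the paper
# (fmol cAMP per microgram protein). Volume and readings are hypothetical.
def normalize_camp(camp_pmol_per_ml: float, extract_volume_ml: float,
                   protein_ug: float) -> float:
    total_pmol = camp_pmol_per_ml * extract_volume_ml  # total cAMP in the acid extract
    total_fmol = total_pmol * 1000.0                   # 1 pmol = 1000 fmol
    return total_fmol / protein_ug                     # normalize to BCA protein

# Example: 0.5 pmol/ml in an assumed 0.2 ml extract over 189 ug of protein
print(normalize_camp(0.5, 0.2, 189.0))  # -> ~0.53 fmol/ug
```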
Isolated Perfused IMCD Tubules-IMCD segments were microdissected from the mid-region of the inner medulla (40-70% of the distance from the inner-outer medullary junction to the papillary tip of the rat kidney). The tubules were transferred to a perfusion chamber, mounted on an inverted microscope, cannulated by concentric pipettes, and perfused in vitro. The perfusate and the peritubular bath solutions were identical to the dissection solution, except that in the bath solution, 5 mM creatinine was replaced by 5 mM urea. The urea permeability was determined by measuring the urea flux resulting from the transepithelial urea gradient. The urea concentrations in the perfusate, bath, and collected fluid were measured fluorometrically using a continuous flow ultramicrofluorometer and an enzymatic assay (Infinity urea nitrogen reagent, catalogue number TR12321, ThermoTrace).
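The excerpt does not spell out how the measured flux is converted to a permeability. A common form for a perfused cylindrical tubule divides the urea flux by the lateral membrane area and the transepithelial concentration difference; the sketch below assumes that simple geometry, and the dimensions, flux value, and any log-mean gradient correction the authors may have used are illustrative, not taken from the paper.

```python
import math

# Apparent urea permeability of a cylindrical tubule segment, assuming
# P = J / (A * dC); all inputs below are illustrative, not study data.
def urea_permeability(flux_pmol_per_s: float, length_um: float,
                      diameter_um: float, delta_c_mM: float) -> float:
    area_cm2 = math.pi * (diameter_um * 1e-4) * (length_um * 1e-4)  # lateral area
    flux_mol_per_s = flux_pmol_per_s * 1e-12
    delta_c_mol_per_cm3 = delta_c_mM * 1e-6  # 1 mM = 1e-6 mol/cm^3
    return flux_mol_per_s / (area_cm2 * delta_c_mol_per_cm3)  # cm/s

# Example: 2.1 pmol/s across a 1000 um x 30 um tubule with a 5 mM gradient
print(urea_permeability(2.1, 1000.0, 30.0, 5.0))  # -> ~4.5e-4 cm/s
```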
RT-PCR-Total RNA was isolated from rat IMCD and brain using the guanidinium thiocyanate/cesium-trifluoroacetic acid method. Potential contaminating genomic DNA was removed from the RNA preparations by a 30-min incubation with DNase I (DNA-free, Ambion). Total RNA (1 µg) was reverse transcribed using oligo(dT) and Superscript II RT (Invitrogen) following the manufacturer's recommended protocol. RT-negative controls were performed to assess the presence of possible genomic DNA contamination of RNA samples. PCR primers were designed against the corresponding cDNAs of rat adenylyl cyclases 1-9 to generate products of ~200-500 bp in size. All primers were designed to span at least one intronic region to distinguish possible amplification of genomic DNA. All amplified products were confirmed by sequencing.
Immunoblotting and Immunohistochemistry-Tissue samples were homogenized in isolation solution (10 mM triethanolamine, 250 mM sucrose, pH adjusted to 7.6, Roche protease inhibitor tablet) using a mechanical tissue grinder (Omni International), and total protein concentration was determined by the BCA assay (Pierce) using bovine serum albumin as the standard. Samples were then solubilized in Laemmli buffer (10 mM Tris, pH 6.8, 1.5% SDS, 6% glycerol, 0.05% bromphenol blue, and 40 mM dithiothreitol). 15-50 µg of protein was subjected to SDS-PAGE (21) and immunoblotting as described previously (22). Rat kidneys were perfusion fixed, paraffin-embedded, and processed for immunostaining via horseradish peroxidase as described previously (23).
Statistics-Quantitation of changes in cAMP as well as densitometric analysis of protein immunoblots are expressed as the mean ± S.E. (n ≥ 3) for each group. Unpaired t tests or analysis of variance were performed as appropriate for the given data set.
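As a hedged illustration of the statistical treatment described above (unpaired t tests or analysis of variance across groups of n ≥ 3), a SciPy version might look like the following; the group values are placeholders, not the study's raw measurements.

```python
from scipy import stats

# Hypothetical cAMP values (fmol/ug protein) for three treatment groups
avp_alone = [378.5, 360.1, 396.2]
avp_w7    = [165.4, 140.2, 190.5]
avp_tfp   = [174.5, 150.0, 200.1]

t, p = stats.ttest_ind(avp_alone, avp_w7)                # unpaired two-sample t test
f, p_anova = stats.f_oneway(avp_alone, avp_w7, avp_tfp)  # one-way ANOVA
print(f"t test p = {p:.3g}; ANOVA p = {p_anova:.3g}")
```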
RESULTS
Effect of CaM Inhibition on AVP-stimulated cAMP Accumulation in Rat IMCD-To address the role of CaM in regulating cAMP production in response to vasopressin, we incubated rat IMCD cell suspensions with three different CaM inhibitors: MDC (24), W-7 (25), and TFP (26). Following a 10-min preincubation period with these compounds in the presence of the phosphodiesterase inhibitor isobutyl methylxanthine (0.5 mM), 0.1 nM AVP or vehicle (bicarbonate buffer) was added to the tubules for 5 min. cAMP content was subsequently measured by a non-radioactive enzyme immunoassay as described under "Experimental Procedures." cAMP levels were significantly increased 4-5-fold in IMCD suspensions incubated with AVP alone (Fig. 1). Preincubation of tubules with MDC (200 µM) completely abolished AVP-stimulated cAMP accumulation in IMCD cells (61.2 ± 17.5 versus 529.3 ± 52.7 (AVP alone) fmol cAMP/µg of protein) (Fig. 1A). Preincubation of tubules with two other CaM inhibitors, W-7 (25 µM) (165.4 ± 33.1 versus 378.5 ± 19.2 (AVP alone) fmol/µg) and TFP (30 µM) (174.5 ± 37.4 versus 378.5 ± 19.2 (AVP alone) fmol/µg), also significantly reduced the increase in cAMP due to AVP treatment. Increasing the concentration of these drugs to 100 µM gave an even greater inhibition (Fig. 1B), demonstrating the dose dependence of this phenomenon.
CaM Inhibitors Act at the Level of Adenylyl Cyclase-Although our results indicated that CaM was required for elevation of cAMP by vasopressin, it was unclear at what level CaM was affecting the signaling pathway (e.g. V2 receptor, Gs, or adenylyl cyclase). Incubation of IMCD suspensions with 1 µg/ml CTX, a potent ADP-ribosyltransferase that causes persistent activation of Gsα, produced a 4.6-fold increase in cAMP that was blocked by either W-7 (25 µM) or TFP (30 µM) (Fig. 2A). The fact that CaM inhibitors block CTX-mediated cAMP elevation rules out the V2 receptor as the site of action of Ca2+/CaM, and suggests that CaM is probably acting either at the level of Gsα or of the adenylyl cyclase responsible for cAMP production in the IMCD.
Forskolin is a direct activator of nearly all known adenylyl cyclase isoforms (11). Treatment of IMCD cells with forskolin (1 µM) resulted in a nearly 10-fold increase in cAMP, which was significantly decreased by preincubation with MDC (638.3 ± 45.3 versus 2076 ± 390.4 (forskolin alone) fmol/µg) (Fig. 2B). Similar results were obtained when tubules were preincubated with W-7 or TFP prior to stimulation with forskolin (data not shown). These results indicate that CaM is acting beyond the level of Gsα, viz. on the adenylyl cyclase responsible for most cellular cAMP production itself.
Effect of MDC on cAMP Accumulation in Wild-type and TG2 Knock-out Mice-Previous studies have shown that both MDC and W-7 can inhibit transglutaminase in addition to calmodulin (24,27). Both MDC and W-7 possess a high degree of structural similarity (Fig. 3) and can act as primary amine substrates for cross-linking by transglutaminase. To determine whether these inhibitors are acting through transglutaminase, we obtained TG2 knock-out mice (Dr. Gerry Melino (16)), which lack both transcript and protein for the major isoform expressed in IMCD. IMCD from wild-type mixed background and TG2(−/−) mice produced similarly elevated levels of cAMP in response to AVP (Fig. 4). Preincubation with MDC blocked AVP-induced cAMP responses in both wild-type and TG2(−/−) mice to a similar extent, suggesting that these inhibitors are not acting through promiscuous inhibition of transglutaminase and that transglutaminase activity is not required for elevation of cAMP by vasopressin.
Effect of CaM Inhibition on AVP-stimulated Urea Permeability in Isolated Perfused Rat IMCD-Both vasopressin-stimulated water and urea permeability in IMCD require elevation of intracellular cAMP. Our previous studies have demonstrated a clear role of calmodulin in regulation of water permeability at the level of aquaporin trafficking (2, 3); however, no clear role for calmodulin had been identified for AVP-stimulated urea permeability. To address this, we isolated rat IMCD segments by microdissection and utilized the perfused tubule method described under "Experimental Procedures" to measure changes in collecting duct urea permeability in response to AVP. The transepithelial urea permeability of isolated tubules was increased following the addition of 0.1 nM AVP to the bath solution (45.7 ± 0.9 (basal) versus 116.7 ± 19.2 (AVP) × 10^-5 cm/s) (Fig. 5). The subsequent addition of W-7 (25 µM) dramatically reduced urea permeability to basal levels (AVP + W-7 = 26.3 ± 11.3 × 10^-5 cm/s). The addition of W-7 prior to stimulation with AVP also reduced urea permeability (14.7 ± 3.8 (W-7) versus 21.7 ± 5.8 (W-7 + AVP) × 10^-5 cm/s). More importantly, urea permeability increased upon washout of W-7 with fresh AVP solution (AVP washout = 92.0 ± 7.8), demonstrating the reversibility of this process.
Identification of Adenylyl Cyclase Isoforms in Rat IMCD by RT-PCR-To determine which isoforms of adenylyl cyclase are present in rat IMCD, specific primer sets were generated for each AC isoform (1-9) using sequences obtained from the full-length cDNA of the corresponding gene. Total RNA was extracted from tissue and used for reverse transcription reactions in the presence or absence of reverse transcriptase (Fig. 6, RT+ and RT−, respectively), followed by amplification by PCR. Rat brain was utilized as a positive control for all primer sets, and bands of the appropriate size were present for each AC isoform (Fig. 6, top panel). In all reactions without RT, the band is absent. This eliminates the possibility that bands in RT+ reactions represent amplified contaminating genomic DNA. Analysis of rat IMCD revealed the presence of the majority of AC isoforms except AC1 and -8 (Fig. 6, bottom panel). Most importantly, the only CaM-stimulated isoform of adenylyl cyclase detectable by RT-PCR was AC3.
AC3 Is Enriched in IMCD Cells-Rat inner medullas were processed as described under "Experimental Procedures" to generate two fractions: IMCD cells and non-IMCD cells. The former is enriched in collecting duct fragments, and the latter possesses mainly thin limb segments and vasa recta. To assess the level of enrichment, protein lysates from each fraction were analyzed by immunoblotting for AQP1, a water channel present in thin descending limbs and vasculature but absent from IMCD. As expected, AQP1 was much more abundant in the non-IMCD fraction. The immunoblot band densities for AQP1 were increased 5.4-fold in non-IMCD fractions compared with IMCD fractions (p < 0.001; n = 3) (Fig. 7A), suggesting that the IMCD pellet is devoid of a large amount of contaminating non-IMCD material. An affinity-purified rabbit polyclonal AC3 antibody recognized two distinct bands between 160 and 250 kDa that were enriched 2.2-fold in IMCD fractions compared with non-IMCD (p < 0.05; n = 3) (Fig. 7B). This antibody was raised against the highly divergent COOH terminus of rat AC3 (PAAFPNGSSVTLPHQVVDNP, sequence confirmed by mass spectrometry) and does not cross-react with AC1, -2, -4, -5, -6, or -9 (13). Both the number and size of bands are consistent with immunoblot results of AC3 expression in myenteric ganglia (28). In addition, these two bands, along with a smaller band under 160 kDa, were present in a sample of whole brain homogenate (Fig. 7B). All bands were absent with preadsorption of the AC3-blocking peptide (data not shown). The presence of AC3 protein on immunoblot supports the data generated by RT-PCR analysis and also demonstrates that this isoform is enriched in the collecting duct fraction of the inner medulla.
To further address the presence of AC3 in IMCD, we performed immunohistochemistry on rat inner medullary sections. AC3 was found in all IMCD cells, with a lower level of staining present in thin limb cells (Fig. 8A). This staining was largely ablated by preadsorption of the AC3 antibody with its corresponding blocking peptide (Fig. 8B). As expected, AC6, the major isoform identified previously by RT-PCR in more proximal portions of the collecting duct (29), was also found in IMCD (Fig. 8C). We have demonstrated previously that AC6 protein is present in the IMCD by immunocytochemistry (30). Interestingly, both AC3 and -6 appear to have similar distributions. Collecting duct staining was confirmed using an antibody to the collecting duct-specific marker protein AQP2 (Fig. 8D). The presence of AC3 in IMCD directly demonstrates the presence of a Ca2+/CaM-stimulated isoform in the IMCD and supports our conclusion that Ca2+/CaM may act directly on adenylyl cyclase to enhance AVP-stimulated cAMP production.
Effect of Long-term dDAVP Administration on AC3 Expression-A prior study has shown that AC6 expression in the collecting duct is reduced during long-term dDAVP treatment, thought to be the result of a conditioned "negative feedback" response (30). To address whether AC3 is similarly affected by dDAVP, Brattleboro rats were given dDAVP (20 ng/h) or saline (control) via osmotic minipumps for 7 days, followed by isolation of inner medulla and immunoblotting. AQP2 was used as a positive control for the effect of dDAVP in collecting duct. AQP2 protein abundance was significantly increased 2.5-fold with dDAVP treatment (100 ± 2.7 control versus 247.7 ± 8.6 dDAVP; p < 1 × 10^-5) (Fig. 9, bottom panel). AC3 protein abundance decreased 2.9-fold (100 ± 12.3 control versus 34.5 ± 8.6 dDAVP; p < 0.01) during long-term dDAVP treatment (Fig. 9, top panel). AC6 expression was reduced 5-fold (100 ± 11.7 control versus 19.8 ± 2.9 dDAVP; p < 0.0002) (Fig. 9, middle panel). This result suggests that AC3 and -6 expression may be subject to similar long-term regulatory influences in IMCD cells.
DISCUSSION
Vasopressin acts via the V2 receptor to increase water and urea permeability in the inner medullary collecting duct of the kidney. Water and urea are transported via different channels (31). Water transport occurs via AQP2 in the apical plasma membrane and aquaporins 3 and 4 in the basolateral plasma membrane (32). Regulation of water transport occurs via vasopressin-mediated AQP2 trafficking to the apical plasma membrane (33). Urea transport in the IMCD occurs via two urea channels, UT-A1 and -A3, present in the apical and basolateral plasma membrane, respectively. The mechanism of urea transport regulation by vasopressin is not known, although it is believed that urea channels do not traffic to the apical plasma membrane together with AQP2 (34, 35).
Fig. 6. RT-PCR analysis of adenylyl cyclase isoforms in rat brain and IMCD. Total RNA was extracted from both brain and enriched IMCD suspensions and used for RT-PCR analysis with specific primers to each AC isoform (1-9). Reactions without reverse transcriptase (RT−) are shown. Brain was chosen as a positive control for all primer sets. The CaM-sensitive isoform AC3 is present in IMCD cells, whereas the other CaM-sensitive isoforms, AC1 and -8, are absent.
The regulation of both water and urea transport in the IMCD depends on activation of adenylyl cyclase activity via the heterotrimeric GTP-binding protein Gs (1). The molecular identity of the adenylyl cyclase responsible for vasopressin-mediated increases in cyclic AMP in IMCD has not been addressed previously, although it has been widely assumed that AC6 is responsible for vasopressin-stimulated cAMP increases because it has been demonstrated to be relatively abundant in collecting duct principal cells (29,36). This isoform is inhibited by Ca2+ in the micromolar range (37) and is not calmodulin-sensitive (38). AC6 has been localized in the collecting duct by RT-PCR (29) and in situ hybridization (36). Furthermore, increasing intracellular calcium appears to inhibit AVP-stimulated cAMP in the outer medullary part of the collecting duct (29), supporting the conclusion that AC6 is responsible for vasopressin-dependent cAMP production by outer medullary collecting duct principal cells. However, earlier reports strongly suggest that vasopressin-sensitive renal epithelia possess CaM-sensitive AC activity. First, Ausiello and Hall (8) described CaM-stimulated adenylyl cyclase activity in a cell line sensitive to vasopressin (LLC-PK1). A subsequent study in microdissected outer medullary collecting ducts by Takaichi and Kurokawa (9) reported that AVP-sensitive cAMP production was CaM-dependent. Despite these findings, a candidate CaM-sensitive AC has not been found in the renal collecting duct.
In this study, we have identified a role for CaM in regulating the IMCD response to vasopressin at the level of adenylyl cyclase. Utilizing various CaM inhibitors, we were able to block AVP-dependent cAMP accumulation in both rat and mouse IMCD. Isolated perfused tubule experiments demonstrated that CaM is required for AVP-stimulated urea permeability in the collecting duct, a process known to be cAMP-dependent. A recent paper demonstrated that CaM binds to the COOH terminus of the V2 receptor and mediates some of the actions of vasopressin (39). However, in the present study, CaM inhibitors also blocked the rise in cAMP in the presence of either cholera toxin or forskolin, providing strong evidence that CaM acts directly on adenylyl cyclase to increase cAMP production.
What AC isoform could be responsible for CaM-dependent cAMP production in the IMCD? Three isoforms have been reported to be CaM-sensitive, namely AC1, -3, and -8 (11). Among these, we found evidence for the expression of only AC3 in the IMCD. Specifically, we have identified AC3 mRNA by RT-PCR in IMCD suspensions. Neither AC1 nor -8 transcripts were detectable in IMCD cells, even though these transcripts were readily detectable in brain with the same loading and amplification protocol. Furthermore, we found evidence for AC3 protein in IMCD through both immunoblotting and immunocytochemical studies. Ca2+/CaM has been shown to stimulate AC3 activity in a number of systems including HEK-293 cell membranes (40) and bovine luteal cells (41). To our knowledge, this is the first demonstration of a CaM-stimulated isoform in collecting duct. Immunoblotting and immunohistochemistry revealed that AC3 is enriched in IMCD cells compared with non-IMCD structures in the inner medulla, chiefly vasa recta and ascending and descending thin limbs of Henle. Long-term dDAVP administration in Brattleboro rats reduced the expression of AC3 as well as AC6. A decrease in AC6 expression with dDAVP treatment has been reported previously (30). Reduced adenylyl cyclase expression may reflect a negative feedback mechanism that reduces cAMP after prolonged exposure to vasopressin. Taking into account these functional and expression data, we propose that AC3 is a target of CaM in IMCD cells and is, at least in part, responsible for the rise in cAMP in response to AVP.
Interestingly, our data indicate that both AC3 and the Ca2+-inhibited isoform AC6 are expressed in the inner medullary collecting duct. Based on immunocytochemical labeling, it appears that both isoforms are expressed in the same cells. As of yet, it remains unclear what the relative contribution of each isoform is to the overall rise in cAMP during stimulation of the collecting duct with AVP. The dramatic decrease in AVP-stimulated cAMP in the presence of the CaM inhibitors in our study suggests that the contribution of AC3 is quite significant under the conditions of the measurements. Previously, it was demonstrated that stimulation of inner medullary collecting ducts with AVP produces an increase in intracellular Ca2+ (10) via the V2 vasopressin receptor (42,43). The increase in intracellular Ca2+ is oscillatory in nature (44) and is dependent on Ca2+ release via ryanodine-sensitive stores (2). AC3 likely contributes to cAMP accumulation during periods of elevated intracellular Ca2+, when AC6, the Ca2+-inhibited isoform, is probably in an inactive state. Conversely, AC6 may be the predominant producer of cyclic AMP when Ca2+ is low in the cell. Overall, the two isoforms in combination may provide a "smoothed" cAMP signal in the face of variable intracellular Ca2+. Another possibility is that AC3 itself may be involved in the generation of Ca2+ oscillations in IMCD cells upon hormonal stimulation. HEK-293 cells stably expressing AC3 produced Ca2+ oscillations with a periodicity of 3-5 min upon stimulation with glucagon, isoproterenol, or forskolin (45).
There is also evidence that Ca2+/CaM can indirectly inhibit AC3 via activation of a specific calmodulin-dependent kinase, CaM kinase II, which phosphorylates AC3, thereby inhibiting its cyclase activity (46,47). Inhibition of AC3 activity through CaM kinase II-mediated phosphorylation may provide a critical switch in terminating cAMP-mediated signaling in the collecting duct. For instance, inhibition of AC3 may hold relevance in the phenomenon of escape from vasopressin-induced antidiuresis seen in the clinical syndrome of inappropriate antidiuresis, in which cAMP activity and aquaporin-2 expression undergo marked decreases despite high levels of circulating vasopressin (48).
Our RT-PCR studies indicate that other AC isoforms aside from AC3 and -6 are expressed in the IMCD, namely AC2, -4, -5, -7, and -9. Further studies would be required to pinpoint the functional role of these isoforms.
We have identified at least two sites of calmodulin action in the regulation of AQP2 in the IMCD, viz. myosin light chain kinase (3) and AC3 (this study). Given the multiplicity of its actions in cells, calmodulin likely plays other physiologically significant roles in the IMCD. One such role is stimulation of phosphodiesterase activity, which has been demonstrated in prior studies (49,50). The potential attenuation of the cAMP response via CaM-sensitive phosphodiesterase-1 has not been addressed in this study. Inclusion of the phosphodiesterase inhibitor isobutyl methylxanthine prior to measurement of cAMP precluded evaluation of CaM-sensitive phosphodiesterase activity.
In conclusion, we have demonstrated CaM-dependent cAMP accumulation in response to AVP in IMCD and have provided evidence for AC3 as the adenylyl cyclase isoform that is responsible for this activity. Calmodulin-stimulated adenylyl cyclase activity may play a critical role in the fine regulation of water and urea transport in the IMCD. | 2018-04-03T00:00:36.761Z | 2005-04-08T00:00:00.000 | {
"year": 2005,
"sha1": "c65e093fe2874e565b3dbf71fbc36e6b5facd890",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/280/14/13624.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "656ac24c4156db96e66c6faf746fbc959d96a896",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
55398734 | pes2o/s2orc | v3-fos-license | Multi-drug resistance and molecular pattern of erythromycin and penicillin resistance genes in Streptococcus pneumoniae
The appearance and dissemination of penicillin-resistant and macrolide-resistant Streptococcus pneumoniae strains has caused increasing concern worldwide. The aim of this study was to survey drug resistance and the genetic characteristics of macrolide and penicillin resistance in S. pneumoniae. This cross-sectional study was carried out on 70 samples suspected to be S. pneumoniae, isolated from patients who were admitted to the Intensive Care Unit (ICU) in southwest Iran in 2010 and 2011. At first, suspected colonies were identified by phenotypic and chemical tests. The isolates were confirmed as S. pneumoniae based on the presence of the lytA gene by the polymerase chain reaction (PCR) method. Antibiotic resistance was evaluated according to the Clinical and Laboratory Standards Institute (CLSI) standard. Minimum inhibitory concentrations (MICs) of erythromycin and penicillin were determined by the E-test method. Molecular analyses of macrolide and penicillin resistance were carried out using specific primers for detection of the resistance genes, including the erm(B), mef(A), pbp1a, pbp2b and pbp2x genes. The lytA gene was detected in 50 samples. The prevalence of resistant strains was as follows: erythromycin (56%), penicillin (40%), ampicillin (56%), cefotaxime (50%), tetracycline (10%), trimethoprim-sulfamethoxazole (48%), nalidixic acid (16%), clarithromycin (48%), azithromycin (44%) and levofloxacin (4%). All strains were susceptible to chloramphenicol, amikacin, streptomycin and gentamicin. Gene analysis showed that 29 strains (58%) had the mef(A) gene, and 24 strains (48%) had the erm(B) gene. Of all the penicillin-resistant and intermediate strains, 6 (20%) and 1 (3.33%) harbored mutations in the pbp1a and pbp2x genes, respectively, but pbp2b was not identified in any sample. Resistance to penicillin, trimethoprim-sulfamethoxazole, clarithromycin and azithromycin in S. pneumoniae is a serious problem in this area, and the local pattern of resistance/susceptibility must be considered for therapeutic regimens. The mef(A) gene was the predominant mechanism of macrolide resistance in this area. With regard to the low frequency of pbp resistance genes, monitoring of other kinds of mechanisms is recommended.
INTRODUCTION
As a major bacterial pathogen, Streptococcus pneumoniae starts infection by colonizing the human upper respiratory tract, causing respiratory tract diseases such as pneumonia, bronchitis, otitis media and sinusitis. Under certain circumstances, bacteria invade host cells and evade host immunity, causing systemic infections such as bacteremia, sepsis and meningitis. Therefore, the interaction of S. pneumoniae with host respiratory tract epithelial cells is an initial step for infection. Many factors that contribute to the colonization and/or invasion of host epithelial cells have been characterized in S. pneumoniae.
Table 1. Primers used for amplification of lytA, penicillin and macrolide resistance genes (9).
Pneumococcal infections are treated with penicillin as the first-choice drug, and erythromycin is also frequently used. Previously, pneumococci encountered in the community were uniformly susceptible to penicillin. Since the 1960s and 1970s, penicillin resistance has emerged. Macrolide resistance has also increased dramatically during the last decade. However, alarmingly high frequencies of penicillin- and macrolide-resistant pneumococci have been reported, especially in several Asian countries, including Iran (Pallares et al., 2003). There are three known macrolide resistance mechanisms in S. pneumoniae. The target site of macrolides in the 50S ribosomal subunit can be modified by methylation of the 23S rRNA adenine residue A2058, resulting in resistance to macrolides, lincosamides and streptogramin B (MLSB phenotype). This mechanism is mediated by erythromycin ribosome methyltransferase, encoded by the erm(B) and, to a lesser extent, the erm(A) genes. Macrolide resistance in S. pneumoniae may also result from an efflux system leading to the selective efflux of 14- and 15-membered macrolides (M phenotype), encoded by the mef(A) gene (Li, 2010; Weber, 2010).
No beta-lactamase activity has been detected in S. pneumoniae; resistance to beta-lactam antibiotics is due exclusively to mutations in their natural target, the penicillin-binding proteins (PBPs), which prevent binding and make them indifferent to beta-lactams, that is, decrease their binding affinity for these drugs (Ferroni and Berche, 2001). These proteins are believed to be enzymes that catalyze the terminal stages of murein synthesis and are inhibited by covalent binding with penicillin at their active site. In highly resistant strains, there is a reduction in the capacity to bind the antibiotic molecules in at least three of the five existing PBPs: pbp1a, pbp2x and pbp2b (Maurer et al., 2008).
The aim of this study was to determine the genetic mechanisms and the phenotypic expression of macrolide and penicillin resistance in S. pneumoniae strains collected at the Intensive Care Units (ICU) of the University Hospitals of Shiraz, Iran.
MATERIALS AND METHODS
A total of 70 samples suspected to be S. pneumoniae, isolated from patients who were admitted to the ICU of Nemazee and Faghihi Hospitals, Shiraz, during the period 2010 to 2011, were studied. Each patient's history sheet was examined in detail, and findings were recorded on a standard proforma including demographic data. All patients read and signed an informed consent form at the beginning of the study and declared their willingness for the application of their anonymous data for research purposes.
Microbiological cultures
The pneumococcal isolates were obtained from invasive and noninvasive sites, such as blood, nasopharyngeal secretions, tracheal secretions, sputum and bronchoalveolar lavage, of both pediatric and adult patients. All samples were cultured onto 5% whole sheep blood agar plates and incubated in the presence of 5% CO2 overnight at 37°C. The bacteria were identified based on morphological characters, Gram reaction, catalase test, bile solubility and susceptibility to optochin. The lytA gene, which encodes autolysin, was used as target DNA to confirm S. pneumoniae strains (Table 1).
Conventional PCR
The DNA extraction was carried out by removing S. pneumoniae colonies from the culture medium, resuspending them in 300 µl of phosphate-buffered saline and then centrifuging them at 3000 rpm for 15 min. The supernatant was set aside, and the sediment was used for DNA extraction. The sediment was re-suspended in 50 µl of 1x Tris-EDTA (TE) buffer at pH 7.4, incubated for 10 min at 37°C and then at 100°C for 3 min. The samples were stored at -20°C until use (one to three days). The gene targeted to identify S. pneumoniae was the pneumococcal species-specific autolysin gene (lytA). The genes amplified for detection of penicillin resistance were pbp1a, pbp2b and pbp2x, and the genes targeted for macrolide resistance were ermB and mefA. The lytA, pbp1a, pbp2b, pbp2x, ermB and mefA genes (Table 1) were amplified by PCR (Figure 1). The optimal PCR condition for a 50 µl reaction included 1X PCR buffer, 1.5 mM MgCl2, 0.2 mM dNTP mix, 2 U Taq polymerase (Fermentas), 20 pmol of each primer (Table 1) and 10 µl of DNA template (Fukushima et al., 2008).
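The reaction setup above is standard dilution arithmetic (V_stock = C_final × V_total / C_stock). The sketch below works through the listed final concentrations; the stock concentrations are assumed typical values, since the paper gives only the final amounts.

```python
# Volumes of assumed stocks needed for the 50 ul PCR described above.
FINAL_VOLUME_UL = 50.0

def volume_needed(final_conc: float, stock_conc: float) -> float:
    # Same units for both concentrations; returns microliters of stock.
    return final_conc * FINAL_VOLUME_UL / stock_conc

mgcl2_ul  = volume_needed(1.5, 25.0)  # 1.5 mM final from an assumed 25 mM stock -> 3.0 ul
dntp_ul   = volume_needed(0.2, 10.0)  # 0.2 mM final from an assumed 10 mM stock -> 1.0 ul
buffer_ul = volume_needed(1.0, 10.0)  # 1X final from an assumed 10X stock -> 5.0 ul
print(mgcl2_ul, dntp_ul, buffer_ul)
```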
PCR amplification was carried out with the following cycling parameters: after an initial denaturation step at 95°C for 5 min, 30 cycles of amplification were performed as follows: denaturation at 94°C for 30 s, annealing at 58°C for 30 s and extension at 72°C for 30 s. The sizes of the PCR products of these genes were analyzed by electrophoresis on 1.5% agarose gels containing ethidium bromide (0.5 µg/ml). The data were analyzed using SPSS software (SPSS for Windows, version 14) and the Chi-square test. P-values less than 0.05 were taken to indicate statistical significance.
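For the Chi-square analysis mentioned above, a 2×2 contingency test of genotype against phenotype might be run as follows; the counts are invented for demonstration and are not the study's data.

```python
from scipy.stats import chi2_contingency

# Hypothetical counts: erm(B) carriage vs. phenotypic erythromycin resistance
#                 resistant  susceptible
table = [[20, 4],   # erm(B) positive
         [8, 18]]   # erm(B) negative
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")  # p < 0.05 -> association
```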
RESULTS
The sample consisted of 17 females (24%) and 53 males (76%). Age varied from newborn to 87 years. Of the 70 samples initially identified as S. pneumoniae in the microbiology laboratories, eight were excluded from the study for not presenting any growth in culture medium or because the identity of the bacterial species was not confirmed in the microbiological tests. The remaining 62 strains were simultaneously submitted to susceptibility tests and PCR to detect erythromycin and penicillin resistance genes, with amplification of the lytA gene for confirmation of the bacterial identification. The lytA gene was detected in 50 samples, and the 12 samples that did not present this gene were removed from the final analysis.
Of the 50 isolates, 20 (40%), 10 (20%) and 20 (40%) strains were susceptible, intermediate and resistant to penicillin, respectively. Of the resistant and intermediate strains, 6 (20%) and 1 (3.33%) harbored mutations in the pbp1a and pbp2x genes, respectively, but pbp2b was not identified in the samples. There was no relation between mutations in the pbp1a and pbp2x genes and resistance to penicillin (Table 4).
DISCUSSION
The global emergence of in vitro antimicrobial resistance in S. pneumoniae has become a serious clinical concern since the 1980s. During the past two decades, the rates of resistance to penicillin, other beta-lactams and non-beta-lactam agents have been increasing rapidly in many parts of the world. In particular, data on rates of pneumococcal resistance from Asian countries at the end of the 1990s were alarming (Deasy, 2009; Livermore, 2003).
Investigations from other countries have also documented an increase in the prevalence of resistance to penicillin and other agents among pneumococcal strains. Pneumococcal resistance to penicillin has increased significantly in recent years, especially in European countries such as Spain, France and Hungary, where it has reached up to 71%. In some states of the USA, resistance to penicillin has reached 44%, whereas in Asia we can find alarming rates ranging from 70 to 78% in Hong Kong, South Korea and Taiwan (Lynch and Zhanel, 2010). Our study shows that the incidence of penicillin-resistant strains among Iranian clinical isolates is alarmingly high (40%).
Macrolides are used as an alternative to beta-lactams for the treatment of respiratory tract infections; however, recent surveillance data have shown an increasing prevalence of macrolide-resistant S. pneumoniae in many parts of the world. Recent studies in European countries have shown an overall prevalence of erythromycin resistance of 17.2%, with significant national variability. The highest percentage of erythromycin resistance was observed in France (58.1%) and Spain (57.1%), followed by Italy (31.4%) and Belgium (26.3%). The highest prevalence of macrolide resistance in pneumococci (47%) in the USA was seen in the East South Central region, which includes Kentucky, Tennessee, Alabama and Mississippi (Jenkins, 2008; Reinert, 2005; Camargos, 2006).
However, recent data from some Asian countries on macrolide resistance in pneumococci have far exceeded the prevalence rates of Western countries. Song et al. (2004) showed that 80% of pneumococcal isolates from Hong Kong were resistant to erythromycin, and 91% of Taiwanese isolates were fully resistant to erythromycin. In our study, resistance to erythromycin was 18%, which was lower than in many Asian countries. Generally, erythromycin resistance in pneumococci results from either modification of the drug-binding site (encoded by erm(B)) or active efflux of the drug (encoded by mef(A)). The efflux mechanism is predominant in macrolide-resistant pneumococci in North America, whereas ribosomal methylation has been found in >80% of erythromycin-resistant S. pneumoniae isolates in most European countries, except Germany. Ribosomal methylation by erm(B) was the most common mechanism of erythromycin resistance in China, Taiwan, Sri Lanka and Korea, whereas efflux was more common in erythromycin-resistant isolates from Hong Kong, Singapore, Thailand and Malaysia. In most Asian countries except Hong Kong, Malaysia and Singapore, the erm(B) gene was found in >50% of pneumococcal isolates, either singly or together with the mef(A) gene (Song et al., 2004).
In this study, of 50 isolates, 29 strains (58%) carried the mef(A) gene and 24 strains (48%) possessed the erm(B) gene. In our study, we observed a correlation between phenotypic (erythromycin resistance) and genotypic (presence of the erm(B) gene) resistance. S. pneumoniae resistance to beta-lactam antibiotics is due exclusively to mutations in their natural target, the penicillin-binding proteins (PBPs), which prevent binding and make them indifferent to beta-lactams, that is, decrease their binding affinity for these drugs. In highly resistant strains, there is a reduction in the capacity to bind antibiotic molecules in at least three of the five existing PBPs: pbp1a, pbp2x and pbp2b. Nagai et al. (2001) evaluated the presence of mutations in the pbp2b and pbp2x genes of 218 samples of S. pneumoniae isolated from children in Japan. Mutations in pbp2x were observed in several strains presenting intermediate resistance to penicillin. Zettler et al. (2004) reported that pbp2x was found in 84% of samples presenting intermediate resistance to penicillin. We observed that, of the resistant and intermediate strains, 6 (20%) and 1 (3.33%) harbored mutations in the pbp1a and pbp2x genes, respectively, but pbp2b was not identified in the samples. There was no relation between penicillin resistance and mutations in the pbp1a and pbp2x genes.
Conclusion
A very important factor in the treatment of patients with pneumococcal infection is the early introduction of antimicrobial therapy, which may be decisive in the evolution and prognosis of the disease. Although a number of therapeutic guidelines are recommended for pneumococcal infections, the local pattern of resistance/susceptibility must be considered. Data presented in this article and related publications emphasize the desperate need to control the proper use of antibiotics to decrease the selective pressure on this and other organisms. Moreover, the development of MDR patterns among S. pneumoniae strains indicates that newer antibiotics have to be developed to combat drug-resistant pneumococcal infections. Besides alterations in the pbp genes, other non-PBP resistance mechanisms have been reported to alter beta-lactam resistance in pneumococci: mutations in the histidine protein kinase CiaH and mutations in the glycosyltransferase CpoA. Another non-PBP resistance determinant that is essential for the complete development of high-level penicillin resistance involves alterations in the murMN operon. The murM and murN proteins control the biosynthesis of branched-stem structured cell wall muropeptides.
Table 3. Comparison between MIC and disk diffusion results of erythromycin in different genotypes of mef(A) and erm(B) genes.
Table 4. Comparison between MIC and disk diffusion results of penicillin in different genotypes of pbp1a, pbp2b and pbp2x genes. | 2018-12-11T09:51:08.024Z | 2012-01-12T00:00:00.000 | {
"year": 2012,
"sha1": "a6359792cc6526e63c52e899497bb92ddf1692a0",
"oa_license": "CCBY",
"oa_url": "https://academicjournals.org/journal/AJB/article-full-text-pdf/F51775233248.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "a6359792cc6526e63c52e899497bb92ddf1692a0",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology"
]
} |
271081628 | pes2o/s2orc | v3-fos-license | Synthesis of Up-Conversion CaTiO3: Er3+ Films on Titanium by Anodization and Hydrothermal Method for Biomedical Applications
The present study investigates the effects of Er3+ doping content on the microstructure and up-conversion emission properties of CaTiO3: Er3+ phosphors as a potential material in biomedical applications. The CaTiO3: x%Er3+ (x = 0.5, 1.0, 1.5, 2.0, 2.5, and 3.0%) films were synthesized on Ti substrates by a hydrothermal reaction at 200 °C for 24 h. The SEM image showed the formation of cubic nanorod CaTiO3: Er3+ films with a mean edge size value of (1–5) μm. When excited with 980 nm light, the CaTiO3: Er3+ films emitted a strong green band and a weak red band of Er3+ ions located at 543, 661, and 740 nm. The CaTiO3: Er3+ film exhibited excellent surface hydrophilicity with a contact angle of ~zero and good biocompatibility against baby hamster kidney (BHK) cells. CaTiO3: Er3+ films emerge as promising materials for different applications in the biomedical field.
Introduction
CaTiO3 emerges as a promising host material for luminescence-based applications and biomedicine due to its exceptional combination of properties: high chemical durability, long luminescence lifetime, high color rendering index, low power consumption, and biocompatibility. Previous work also reported the biocompatibility of CaTiO3 particles [1]. However, to our knowledge, the UC luminescence and cell compatibility of CaTiO3: Er3+ films on Ti substrates synthesized by a hydrothermal method have yet to be well documented.
Semiconductors doped with rare earth (RE) ions are potential materials for a variety of applications, such as antimicrobial activities, photocatalytic treatment, and environmental purification [2,3]. Perovskite and oxide-based materials are effective hosts for the up-conversion luminescence of RE ions [4,5] because of their low phonon energy, low cost, and easy synthesis [6,7]. Recent research on luminescent RE ions has focused on the Er3+ ion thanks to its unique electronic and optical properties and various applications, such as three-dimensional displays [8], solar cells [9,10], temperature sensors [11], photocatalysts [12,13], and biomedical field applications [14,15]. The UC luminescence of CaTiO3: Er3+ particles is highly attractive for various applications, including drug delivery, tumor therapy, and cell imaging. Er3+ ions can be substitutionally doped into the CaTiO3 lattice, where they occupy well-defined positions, because the material's crystal structure and the close correspondence between the ionic radii of Er3+ and Ca2+ ions contribute to the prevention of lanthanide element release [16,17]. The material displays the characteristic Er3+ emissions of the 4F9/2 → 4I15/2 (red) and 4S3/2, 2H11/2 → 4I15/2 (green) transitions at room temperature. The UC luminescence properties of the material can be tailored by adjusting the Er3+ doping concentration and the UC photon excitation mechanism [1,18,19].
The hydrophilic property plays a crucial role in numerous biomedical applications. For instance, it serves several purposes, including tissue integration, enhanced cell interactions, and drug delivery. In the context of tissue integration, hydrophilic materials possess the ability to absorb water-based substances from the surrounding environment, enabling them to integrate better with body tissues. This can potentially reduce the risk of implant rejection and improve the long-term functionality of implants [20]. Additionally, the hydrophilic property can foster a favorable environment for cell adhesion and growth, promoting wound healing and tissue regeneration around the implant [21]. Furthermore, hydrophilic materials can be utilized to deliver drugs or growth factors to specific locations within the body, enhancing treatment efficacy and minimizing side effects [22].
In this study, we report the hydrothermal synthesis of up-conversion (UC) emitting CaTiO3: Er3+ films on Ti and discuss the influence of Er3+ concentration on the UC luminescence of the CaTiO3: Er3+ films. We also investigated the hydrophilic properties and cell compatibility of the CaTiO3: Er3+ film for biomedical applications.
Synthesis of CaTiO3: x%Er3+
CaTiO3: Er3+ films were synthesized using the hydrothermal method. Calcium hydroxide [Ca(OH)2, Merck (Rahway, NJ, USA), 99.9%], sodium hydroxide (NaOH, Merck, ≥97.0%), TiO2 nanotube templates, and erbium chloride hexahydrate (ErCl3·6H2O, Merck, 99.9%) were used as starting materials. TiO2 nanotube templates were synthesized using the anodization method as reported in previous work [18]. A total of 7.4 mg of Ca(OH)2 was dissolved in 40 mL of H2O, and 3.2 mg of NaOH was dissolved in 10 mL of H2O. After stirring the two mixtures for 15 min at room temperature, they were combined under continued magnetic stirring. Next, 7.8 mg of ErCl3·6H2O was dissolved in H2O and slowly added to the beaker with the previous solution. Finally, the mixed solution was used for the hydrothermal synthesis of CaTiO3 at 200 °C for 24 h. The synthesized CaTiO3: Er3+ powder was subjected to a high-temperature treatment (annealing) at 800 °C for 2 h to obtain the final samples.
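For readers relating the nominal dopant level to the reagent masses, a minimal sketch is given below. It assumes the x% in CaTiO3: x%Er3+ is defined as the Er/(Er + Ca) mole fraction, an assumption, since the convention is not stated explicitly above; the molar masses are standard values.

    # Nominal Er mol% from reagent masses (mg / (g/mol) gives mmol).
    M_CA_OH_2 = 74.09      # g/mol, Ca(OH)2
    M_ERCL3_6H2O = 381.71  # g/mol, ErCl3.6H2O

    def er_mol_percent(m_ercl3_6h2o_mg, m_ca_oh_2_mg):
        n_er = m_ercl3_6h2o_mg / M_ERCL3_6H2O  # mmol Er3+
        n_ca = m_ca_oh_2_mg / M_CA_OH_2        # mmol Ca2+
        return 100.0 * n_er / (n_er + n_ca)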
Physicochemical Analysis Methods
The crystallographic properties of the obtained samples were investigated using an X-ray powder diffractometer (XRD, Siemens D500, Munich, Germany). Field emission scanning electron microscopy (FESEM, JEOL JSM-7600F) was employed to investigate the morphology of CaTiO3: Er3+. The FESEM was equipped with EDS (Gatan, UK) for the elemental analysis of the samples. A NANO LOG spectrophotometer was used to measure the luminescence of CaTiO3: Er3+; the instrument used two light sources, a high-power xenon lamp and a 980 nm laser. All the analyses were performed at room temperature. Water contact angle measurements were performed to assess the surface properties of the films using a digital camera at room temperature. Close-up pictures were taken of tiny drops (about 1 µL) of water placed on the top surface of each film.
Biocompatibility Assessment Methods
To assess cell compatibility in vitro, both the titanium substrate and the CaTiO3: Er3+ films were sterilized through autoclaving. This sterilization process involved exposure to high-pressure steam at 121 °C for 60 min. Meanwhile, baby hamster kidney (BHK) cells were cultured in DMEM growth medium at 27 °C. The culture environment was maintained with humidified air and 5% CO2. Cells at the same concentration and density were then applied to both the titanium and the CaTiO3: Er3+ films. Confocal laser scanning microscopy (CLSM, FV3000RS, Olympus, Tokyo, Japan) was used to assess how well the cells attached to the surfaces. After culturing for 48 h, the BHK cells on the CaTiO3: Er3+ films and the Ti substrates underwent fixation (4% paraformaldehyde/PBS, 10 min), washing, permeabilization (0.1% Triton X-100/PBS, 5 min), washing, and fluorescent phalloidin labeling (45 min). Cell nuclei were specifically stained with the fluorescent dye DAPI for 5 min. The stained cells adhering to the samples were mounted onto glass cover slips for subsequent observation of cell attachment. Cell growth was evaluated with the CellTiter 96® AQueous One Solution reagent, which contains a tetrazolium compound [3-(4,5-dimethylthiazol-2-yl)-5-(3-carboxymethoxyphenyl)-2-(4-sulfophenyl)-2H-tetrazolium, inner salt; MTS]. The quantity of the formazan product, measured by the absorbance at 490 nm using a microplate reader (Chromate 4300 Microplate Reader, Awareness Technology, Palm City, FL, USA), is directly proportional to the number of living cells on the sample.
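As a sketch of how the 490 nm absorbance readings translate into a relative viability figure, the normalization below (against an untreated control and a cell-free blank) is a common convention and an assumption here, not a detail stated in the protocol above.

    def relative_viability_percent(a_sample, a_control, a_blank=0.0):
        # Absorbance at 490 nm is proportional to the number of living cells,
        # so viability is expressed relative to the control well.
        return 100.0 * (a_sample - a_blank) / (a_control - a_blank)

    # Example with illustrative readings: ~96% viability.
    print(relative_viability_percent(a_sample=0.82, a_control=0.85, a_blank=0.05))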
Results of Material Synthesis
Following the fabrication of the material with varying Er dopant concentrations, the morphological and structural characteristics of these samples were analyzed. The successful fabrication of CaTiO3: Er3+ thin films was clearly demonstrated using analytical techniques including XRD, Raman, EDS, and SEM. Figure 1a shows the XRD patterns of the as-synthesized samples with different Er3+ doping content. Based on the JCPDS No. 22-0153 reference database, all diffraction patterns of the samples correspond to pure orthorhombic-phase CaTiO3 with space group Pbnm. The orthorhombic structure of CaTiO3 contains a Ca2+ ion with eight coordinates (CaO8) and a Ti4+ ion with six coordinates in an octahedron (TiO6) [5]. Ca2+ atoms at the dodecahedral (CaO8) site are easily replaced by Er3+, while Ti4+ atoms remain unaltered at the TiO6 site. Given the similarity of the ionic radii of Er3+ (1.19 Å, coordination number = 12) and Ca2+ (1.34 Å, coordination number = 12), RE3+ can replace Ca2+ in the CaTiO3 structure [1]. To compensate for charge imbalances caused by lattice defects, a substitution process occurs, involving Ca2+ (V'Ca) and/or O2− vacancies (VO•) [19,23]. The diffraction profile of the Pbnm space group exhibits typical (hkl) planes such as (111), (121), (031), (220), (040), (042), and (242) at 27.4°, 33.1°, 39.4°, 41.2°, 47.03°, 59.1°, and 69.8°, respectively. The most dominant diffraction peak is centered at 2θ = 33.1°, corresponding to the (121) plane of the CaTiO3 phase. As shown in Figure 1a, the XRD pattern of the CaTiO3 film with optimized Er3+ concentration suggests the presence of a minor phase (peaks marked by the symbol ■) corresponding to Ti (JCPDS No. 44-1294), but this did not noticeably affect the CaTiO3 phase or the optical properties. These results indicate the successful synthesis of crystalline CaTiO3 films. Moreover, the XRD results reveal that the intensity of the CaTiO3: Er3+ diffraction peaks increases gradually with increasing dopant concentration. When the concentration of Er3+ is increased to 2.5%, the two characteristic peaks at 33.1° and 47.03° emerge with relatively high intensity and sharpness. This indicates that the crystal structure of CaTiO3: Er3+ is well formed.
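As a quick consistency check, the interplanar spacing behind the dominant (121) reflection follows from Bragg's law; the Cu Kα wavelength (λ ≈ 1.5406 Å) is assumed here, since the radiation source is not specified above:

    d_{(121)} = \frac{\lambda}{2\sin\theta} = \frac{1.5406\,\text{Å}}{2\sin(33.1^{\circ}/2)} \approx 2.70\,\text{Å}

which is consistent with the (121) spacing of orthorhombic CaTiO3.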
Raman spectroscopy serves as a valuable tool for studying symmetry changes in various compounds. In the case of the CaTiO3 material, twenty-four Raman-active modes were identified within its orthorhombic Pbnm crystal structure (with ZB = 4), featuring four molecular units within the primitive cell. The material's irreducible representation is denoted as ΓRaman,Pbnm = 7Ag + 5B1g + 7B2g + 5B3g. Figure 1b depicts the Raman spectra of the CaTiO3: Er3+ films, spanning the frequency range of 100-900 cm−1. The Raman-active mode at 134 cm−1 is attributed to the oscillation of Ca bonded to the TiO3 (Ca-TiO3) lattice. Modes at 226, 244, 281, and 362 cm−1 are linked to O-Ti-O bending modes, while those at 464 and 495 cm−1 correspond to TiO6 twisting modes (bending or internal oscillation of the oxygen cage), with a second large band observed in the range of 600-700 cm−1. These observations agree with previous works [24][25][26][27].
Figure 3 shows the FESEM images of the CaTiO3: Er3+ films. As shown in Figure 3, when the CaTiO3 is doped at a concentration of 0.5% Er3+, the material begins to form fiber clumps. Increasing the doping concentration to (1-2)% Er3+ leads to the formation of increasingly homogeneous square-shaped fiber bundles. The CaTiO3: Er3+ films consist of uniform cubic particles with a bar length of ~(1-5) µm and a width of ~(1-2) µm. The results of this SEM analysis are consistent with the previously analyzed XRD data.
The relationship between the up-conversion intensity (Iupc) of the bands and the pump excitation power (Ppump) was examined to elucidate the up-conversion process in the Er3+-doped films, as illustrated in Figure 6a,b. Understanding the power dependence of up-conversion luminescence is crucial for unraveling the up-conversion mechanism. Typically, when up-conversion luminescence is generated at low pump intensity, the correlation between up-conversion emission intensity (I) and pump power (P) can be written as Iupc ∝ (Ppump)^n, or log(Iupc) ∝ n log(Ppump), where 'n' signifies the number of photons required to reach the up-conversion emitting level [31]. Figure 6a shows the power-dependent UC fluorescence spectra, and Figure 6b shows that the integrated intensity of the green and red emissions of CaTiO3: Er3+ (2%) varies with pump power on the double logarithmic scale. Figure 6b shows that the log-log power-dependence slopes at 523, 550, and 670 nm are 3.05, 3.07, and 2.73, respectively. Our findings suggest that the green and red up-conversion (UC) luminescence of Er3+ stems from three-photon processes in the CaTiO3: Er3+ system. Supported by the observed near-cubic dependence on pump power and the alignment of energy levels, the most likely transitions responsible for the UC emissions are illustrated in Figure 7.
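The slope n is typically obtained by a straight-line fit on the double logarithmic scale. A minimal sketch with illustrative (not measured) values is shown below; a fitted slope near 3 would indicate a three-photon process.

    import numpy as np

    # Illustrative pump powers (mW) and integrated UC intensities (a.u.);
    # replace with the measured values for a given emission band.
    p_pump = np.array([100.0, 150.0, 220.0, 330.0, 500.0])
    i_upc = np.array([1.0, 3.4, 10.6, 35.8, 125.0])

    n, intercept = np.polyfit(np.log10(p_pump), np.log10(i_upc), 1)
    print(f"fitted slope n = {n:.2f}")  # ~3.0 for these illustrative values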
Figure 7 shows the system's energy level diagram to understand the phosphor's dual-mode green emission behavior. Several steps are proposed for selectively enhancing the green UC emission of CaTiO3: Er3+ phosphors [9,32].
Step 1: The process commences with the absorption of a photon by the electron at the 4I15/2 state, elevating it to the 4I11/2 level. This initial step, known as ground state absorption (GSA), is pivotal to the system's behavior.
Step 2: The electron at the 4I11/2 level absorbs a second photon and transitions to a higher-energy excited state, the 4F7/2 level, by excited state absorption (ESA). Owing to the instability of the 4F7/2 state, the electron then relaxes nonradiatively to the 2H11/2 and 4S3/2 levels (two metastable levels).
Step 3: The electrons in the 2H11/2 and 4S3/2 levels return to the ground state 4I15/2, producing intense green emission. Meanwhile, a few electrons undergo non-radiative relaxation from the 4S3/2 level to the 4F9/2 level and then return to the ground state (4I15/2), forming a weak red UC emission.
The Er3+ ions initially undergo excitation by absorbing laser photons (ground/excited state absorption, GSA/ESA) owing to the strong absorption at 980 nm. Under 980 nm excitation, the 4I11/2 level of Er3+ becomes populated by absorbing a single infrared photon from the 4I15/2 level (ground state absorption, GSA), as depicted in Equation (1). Through the nonradiative relaxation (NR) process from the 4F7/2 level down to the 2H11/2/4S3/2 levels (two metastable levels), practically all of the electrons in the 2H11/2/4S3/2 levels relax to the ground state 4I15/2 of Er3+, which forms the intense green emissions (523-550 nm). The electrons in the Er3+ (4F7/2) excited state can decay to the 2H11/2 excited state through thermal relaxation; a 523 nm photon is then emitted as the electrons return to their ground state, Er3+ (4I15/2), as illustrated in Equation (2). Moreover, a portion of the electrons in the Er3+ (4I11/2) excited state can transfer energy to an adjacent electron within a similar energy level. Consequently, an electron transition from the Er3+ (4I11/2) to the Er3+ (4S3/2) excited state ensues, triggering radiative relaxation to Er3+ (4I15/2) along with the emission of a 550 nm green photon, as described by Equation (3). Some electrons in the 4S3/2 level undergo nonradiative relaxation to the 4F9/2 level and then return to the initial 4I15/2 state, creating a red emission band, as indicated in Equation (4). The energy of the 4S3/2 level surpasses that of the 4F9/2 level, resulting in a subdued red emission band.
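A simple energy balance supports the multi-photon assignment (using E = hc/λ ≈ 1239.8 eV nm / λ):

    E_{980\,\text{nm}} \approx 1.27\ \text{eV}, \qquad E_{523\,\text{nm}} \approx 2.37\ \text{eV}, \qquad E_{550\,\text{nm}} \approx 2.25\ \text{eV}

A single green photon thus carries nearly twice the energy of a 980 nm pump photon, so at least two pump photons are required energetically; the measured slopes near 3 suggest that a third photon (or an equivalent energy-transfer step) participates in populating the emitting levels.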
In the down-conversion (DC) emission, the strong green emission and weak red emission can be explained as follows: initially, Er3+ ions in the ground state undergo excitation to the 4G11/2 level (refer to Figure 7) under a xenon lamp with an excitation wavelength of 379 nm. Subsequently, the majority of the electrons in the 4G11/2 level relax nonradiatively (NR) to the 4S3/2 level with the swiftest multi-phonon relaxation rate, followed by their return to the radiative ground state, 4I15/2, leading to the generation of two green emission bands. Notably, there is minimal excitation of electrons to the 4I11/2 state, leading to a limited electron population in the 4F9/2 level; therefore, red emission occurs with weak intensity. Our findings indicate that the proposed mechanism effectively explains the dual-mode green emission observed in the CaTiO3: Er3+ films from the perspective of the electronic structure.
Biocompatibility of CaTiO3: Er3+
To confirm the biocompatibility of the material, we prepared a CaTiO3: 2%Er3+ sample and studied its hydrophilicity. The hydrophilicity of a material is determined using a contact angle measurement technique. In this work, we used the contact angle method to verify the hydrophilic properties of the Ti substrate, the CaTiO3 film, and the CaTiO3: Er3+ films. Interestingly, as shown in Figure 8, the contact angles of all CaTiO3: Er3+ films were considerably lower than those of the Ti substrate and CaTiO3 films. Among the three kinds of films, the CaTiO3: Er3+ films had the most evident decrease in contact angle, which indicates the greatest improvement in hydrophilicity. Hence, surface transformation from hydrophobic Ti to hydrophilic CaTiO3: Er3+ films can be used to design water-dispersible film phosphors for biomedical fields. The findings demonstrate that the fabricated material holds promise for biomedical applications, particularly in the realm of implants. The material's hydrophilicity plays a crucial role in enhancing cell adhesion and growth, which can potentially promote wound healing and tissue regeneration in the surrounding area. Furthermore, highly hydrophilic materials can be utilized for delivering drugs or growth factors to specific locations within the body. This capability holds the potential to improve treatment efficacy and minimize side effects.
To confirm the crucial role of hydrophilicity in biomedical applications, such as enhancing cell adhesion and growth, cell culture experiments were conducted on the material's surface to evaluate cell proliferation. Confocal laser scanning microscopy (CLSM) images in Figure 9a,b depict BHK cells adhering to the surfaces of the Ti and CaTiO3: Er3+ films, respectively. The uniform distribution and spread-out, fibrous morphology of the cells in both samples indicate good biocompatibility.
An MTS assay was employed to further assess the biocompatibility of the materials via the cell proliferation on the Ti and CaTiO3: Er3+ films. The rate of proliferation was measured after culturing for up to 72 h, using MTS for mitochondrial reduction. This assay is based on the ability of metabolically active cells to reduce a tetrazolium-based compound, MTS, to a purple formazan product. The quantity of formazan product is directly proportional to the number of living cells in the culture. This assay is a widely recognized tool for assessing cell viability and proliferation.
Figure 2.
Figure 2 shows the EDS spectra of the CaTiO3: xEr3+ films (x = 0.5, 2.0, and 3.0 mol%). All samples show the presence of O, Ca, Ti, Er, and Na elements. The presence of Na could be due to the residual input solution. Analysis revealed no traces of other impurities, suggesting the high purity of the synthesized phosphors.
Figure 8.
Figure 8. Contact angles of the Ti substrate, CaTiO3, and CaTiO3: Er3+ films.
Figure 9.
Figure 9. CLSM images of the BHK cells on (a) Ti and (b) CaTiO3: Er3+ films; and (c) proliferation of the BHK cells on Ti and CaTiO3: Er3+ films after 72 h of culturing. The red indicates the cytoskeleton structure of the cells and the green indicates the cell nuclei. | 2024-07-10T15:19:48.630Z | 2024-07-01T00:00:00.000 | {
"year": 2024,
"sha1": "0cecc79579b18d9c28cccf72133899a3f0c2df8c",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/ma17133376",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "59204852fc21ef97949e45b8425bb2804c5bfbad",
"s2fieldsofstudy": [
"Materials Science",
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
58009890 | pes2o/s2orc | v3-fos-license | Lateral Costal Artery: Clinical Importance of an Accessory Thoracic Artery
The lateral costal artery has sometimes been identified as the culprit for the "steal phenomenon" after coronary artery bypass grafting, besides being occasionally used for myocardial revascularization. Its branches make anastomoses with the internal thoracic artery through lateral intercostal arteries. We aim to report, on three cases, the clinical significance of a well-developed lateral costal artery after coronary artery bypass grafting. Two out of three patients who underwent coronary artery bypass graft surgery in our center between June 2010 and August 2017, applied to us with stable angina pectoris, while the third one was diagnosed with acute coronary syndrome after applying to the emergency department. In coronary cineangiography, in all three cases, a well-developed accessory vessel arising from the proximal 2.5 cm segment of the left internal thoracic artery coursed as far as the 6th rib was detected, and it was confirmed to be the lateral costal artery. A stable angina pectoris in two of the patients was thought to be the result of steal phenomenon caused by the well-developed lateral costal artery. In the two cases with stable angina pectoris the lateral costal artery was obliterated via coil embolization. In the other case with the proximal left anterior descending artery stenosis, before percutaneous coronary intervention, the lateral costal artery was obliterated via coil embolization and the occluded subclavian artery was stented. Routine visualization in cineangiography and satisfactory surgical exploration of the left internal thoracic artery could be very helpful to identify any possible accessory branch of the left internal thoracic artery like the lateral costal artery.
INTRODUCTION
The first description of the lateral costal artery (LCA) was in 1730 by Heister, who called it the lateral internal thoracic artery [1]. The famous anatomist Henle described it further as "arising from the internal thoracic near its entrance into the thorax and descending on the inner surface of four to six upper ribs and anastomosing with the corresponding intercostal arteries". In the same study, its risky location in terms of thoracentesis and various surgical procedures was also underlined [1]. The lateral costal artery rises as the first branch of the left internal thoracic artery (LITA) in 92% of the population. It is present bilaterally in 5.5% and unilaterally in 11.1% of cases [2]. The mean distance between the internal thoracic artery origin and the lateral costal branch origin is 2.3 cm and 2.9 cm on the right and left sides of the anterior thoracic wall, respectively. The mean diameter of the LCA is found to be 1.74±0.8 mm [2]. It has sometimes been identified as the culprit for the "steal phenomenon" after coronary artery bypass grafting (CABG), and the artery itself is occasionally used for myocardial revascularization [3]. Embryologically, this artery, like the normal parietal arteries of the trunk, might form a longitudinal channel connecting the intersegmental arteries [3].

In spite of advanced surgical techniques, it is not possible to improve LITA exploration enough to divide all its side branches. Ligation of the 1st intercostal and more proximal branches of the LITA, which has superiority in left ventricular revascularization with a 1A level of evidence, is of great importance to prevent the "steal phenomenon". It is reported that the frequency of non-ligated side branches in coronary angiographies performed in patients who underwent coronary artery bypass grafting (CABG) is between 9-25% [4].

CASES
Case 1: A 65-year-old female patient, who underwent triple CABG three months ago, applied to us with angina pectoris appearing after 50-100 m of walking. She had been under medical treatment of acetylsalicylic acid 100 mg and metoprolol 100 mg. An effort test of the patient, whose physical examination and resting electrocardiography (ECG) were normal, unveiled ST depression (Table 1). Coronary angiography performed in the patient revealed a well-developed LITA side branch at a distance of 2-2.5 cm from the origin of the LITA (Figure 1). The accessory branch, being one and a half times the diameter of the LITA, was extending to the lateral thoracic wall, where it was making anastomoses with lateral intercostal arteries and thus supplying blood to the anterior and posterior sides of the lateral thoracic wall. It was detected that this accessory thoracic artery, the LCA, was stealing a large part of the myocardial blood flow to the lateral thoracic wall. The LCA was obliterated via coil embolization (Figure 2). The patient's effort capacity had improved, and no ST segment change was observed in the effort test performed one month after the coil embolization of the lateral costal artery.

Case 2: A 56-year-old female patient expressed unstable angina pectoris and dyspnea within the first week after CABG. Transthoracic echocardiography revealed left ventricular free wall motion abnormality and grade 1-2 mitral valve regurgitation. The ejection fraction was 30-35% (Table 1). Coronary angiography was performed in the patient, who has been under medical treatment for diabetes mellitus for 15 years. It exposed the LCA, which arose from the LITA at a distance of 2-2.5 cm from the origin of the LITA. It was extending to the 6th intercostal space and was two thirds the diameter of the LITA. It was postulated that the LCA had aggravated the steal phenomenon; therefore, it was obliterated via coil embolization. After LCA obliteration, the patient's angina disappeared, but dyspnea persisted. Since she had advanced restrictive lung disease, she was referred to a pulmonologist with medical treatment comprising acetylsalicylic acid 100 mg, metoprolol 100 mg, spironolactone 50 mg and hydrochlorothiazide 50 mg.

Case 3: A 71-year-old male patient, who underwent triple CABG one month ago, applied to our emergency department with unstable angina pectoris. His ECG record displayed ST segment elevation, and the troponin-T value was measured at 0.45 ng/ml (Table 1). In primary percutaneous coronary intervention, it was detected that the left subclavian artery (SCA) was proximally occluded, the LITA graft was patent, and there was a LITA side branch, thought to be the LCA, which was one third the diameter of the LITA. The LCA was extending to the 6th rib and making anastomoses with intercostal arteries. First, balloon angioplasty was performed in the left SCA. Then, the lesion causing 80% left anterior descending artery (LAD) stenosis was stented. After that, the LCA was obliterated via coil embolization. Finally, the left SCA was stented. The stent placed in the SCA also occluded the LITA ostium inadvertently. The patient, being hemodynamically stable, was discharged from the hospital a week after admission with a medical treatment comprising acetylsalicylic acid and metoprolol 100 mg. In follow-up visits, cardiac parameters have been found to be normal.

In our institution, LITA flow measurement is done by the intraoperative free-bleeding technique. The LITA is harvested and explored using electrocautery and metallic clips. Topical application of 0.2% papaverine solution at 37 °C is routinely done to prevent LITA spasms. In the free-bleeding technique, the harvested LITA graft, before any balloon dilatation or topical papaverine application, is allowed to bleed freely from the distal end into a measuring cylinder for a minute while the heart rate and arterial tension are within normal limits. After measuring the total volume of blood in the cylinder, a LITA graft with a flow of 30 ml/min or more is considered proper for bypass grafting (Table 1).
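A trivial sketch of the acceptance check described above; the function name and argument defaults are illustrative, not part of any published protocol.

    def lita_flow_acceptable(volume_ml, collection_min=1.0, threshold_ml_per_min=30.0):
        # Free-bleeding flow in ml/min from the volume collected in the cylinder.
        flow = volume_ml / collection_min
        return flow, flow >= threshold_ml_per_min

    print(lita_flow_acceptable(45.0))  # (45.0, True): within the 45-56 ml/min range reported below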
DISCUSSION
The LITA, in 92% of cases, arises from the first part of the left subclavian artery opposite the thyrocervical trunk, 2 cm above the sternal end of the clavicle. In 7% of cases it arises from the 2nd part of the left subclavian artery, whereas in 1% of cases it does so from the 3rd part [2]. In 70% of cases, the LITA rises directly from the left subclavian artery, while in the remaining 30% it originates from the left subclavian artery as a component of a common trunk with other arteries [2,5,6]. After its origin from the left subclavian artery, it extends on the left anterior thoracic wall 1.5 cm and 5.4 cm lateral to the sternum at the levels of the 1st and 6th ribs, respectively. The LITA gives off pericardiophrenic, thymic, sternal, anterior intercostal and perforating branches through its course to the abdominal wall over the posterior surface of the first six ribs. It divides into the musculophrenic and superior epigastric arteries at the 6th intercostal space. In the cases we present, the LITA was originating from the first part of the left SCA and coursing in its natural route. Its mean diameter was 2-2.5 mm (Table 1). Perioperative manual flow measurements indicated a mean flow of 45-56 ml/min (Table 1). Though flow and size parameters were within normal limits, the steal phenomenon seen after the LAD-LITA anastomoses was ascribed to myocardial vascular resistance directing the LITA flow toward the LCA. Calafiore et al. [7], in a study comparing 150 patients with left anterior thoracotomy to 150 patients with median sternotomy, reported the same rates of undivided lateral costal artery contrary to expectations [15 (10%) and 17 (11.3%), respectively]. In the same study, the rates of presence of both an undivided 1st intercostal artery and branches less than 1 mm in diameter were found to be significantly higher in the thoracotomy group. These results indicate that the choice of incision could limit access to smaller-diameter branches but not to the LCA [7]. In a study comprising 262 patients who underwent CABG, Bauer et al. [8] found that the LITA has large side branches in 9% of cases and an atypical location in 1% of cases [8]. Undivided LITA branches, when detected, must be obliterated since they, in direct proportion to their diameter and location, reduce LITA flow. In a study comprising 38 patients with angina pectoris after CABG, Biçeroğlu et al. [9] detected undivided LITA branches of varying diameter and length in 7 (18.4%) patients. Most of the side branches were found to be located at the proximal parts of the LITA [9]. Visualization of the left SCA and the LITA before CABG is of utmost importance in the prevention of postoperative angina pectoris and myocardial infarction resulting from the steal phenomenon. Otherwise, as in the cases we present, limited exploration of the LITA could result in serious complications.

A study conducted on cadavers demonstrated that the LCA shows variation at the proximal part of the LITA (15%) [6]. It could be present unilaterally or bilaterally, and it has a diameter close to that of the LITA. The same study pointed out the increased possibility of steal phenomenon due to these side branches in case the LITA was used as a vascular graft for coronary revascularization [6]. Henriquez-Pino et al. [6] showed that the LITA arises directly from the left SCA in 70% of cadavers and that the internal thoracic artery gives off the LCA branch more distally on the left side. Other arteries accompanying the LCA at the proximal part of the LITA are the suprascapular artery, transverse cervical artery, inferior thyroidal artery, and ascending cervical artery. In all three cases we present, LCAs of varying diameter were anastomosing with lateral intercostal arteries. We have detected an undivided LCA in only three cases within seven years. Over a long period of follow-up, due to the probability of the existence of asymptomatic patients and of symptomatic patients applying to other institutions, the exact prevalence of undivided LCA for our center could not be determined. In one of our cases, a female with a breast-feeding history, the LCA diameter was greater than the LITA diameter (Figure 1). After evaluating the coronary angiographies of 103 patients who underwent CABG surgery, Sutherland et al. [10] found that the LCA was present in 30 (29%) patients, either unilaterally or bilaterally. They showed that 25 of these were extending to the 2nd intercostal space, while the remaining 5 extended to the 5th intercostal space.

Considering its invasive nature and potential complications, we abstained from postoperative intracoronary flow measurement. As for less invasive methods like myocardial perfusion scintigraphy, magnetic resonance imaging, positron emission tomography and transesophageal echocardiography, we faced problems regarding availability, cost, and radiation exposure. Transthoracic Doppler echocardiography is commonly used for coronary and LITA blood flow measurements. As a result of suboptimal image quality in postoperative patients, only in the first case were we able to measure the coronary blood flow (45 cm/s) via transthoracic Doppler echocardiography. Therefore, clinical findings and a negative effort ECG were used as criteria in follow-up.

In cases with inadequate surgical exploration of the LITA, large side branches could be passed over. Absent LITA visualization in angiography could also lead to insufficient exploration of the LITA side branches. Mostly, the steal phenomenon caused by undivided LITA side branches is managed by increasing the intensity of medical therapy, but it must be borne in mind that the presence of the LCA might be the reason for post-CABG angina.

CONCLUSION
Considering the prevalence of the LCA and of the undivided LCA seen after CABG, in patients planned to undergo CABG, preoperative visualization of the left SCA and the proximal part of the LITA is of paramount importance. Doing this could significantly lower the probability of serious postoperative complications.
A study conducted on cadavers demonstrated that the LCA shows variation at the proximal part of the LITA (15%) [6] . It could be present unilaterally or bilaterally, and it has a diameter close to the LITA. The same study pointed out the increased possibility of steal phenomenon due to these side branches in case the LITA was used as a vascular graft for the coronary revascularization [6] . Henriquez-Pino et al. [6] showed that the LITA arises directly from the left SCA in 70% of the cadavers and that the internal thoracic artery gives LAC branch more distally on the left side. Other arteries accompanying the LCA at the proximal part of the LITA are the suprascapular artery, | 2019-01-22T22:34:06.027Z | 2018-04-02T00:00:00.000 | {
"year": 2018,
"sha1": "6089bf183a2e99c5ef224a743aac5dba6bec3026",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.21470/1678-9741-2017-0252",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6089bf183a2e99c5ef224a743aac5dba6bec3026",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
253045290 | pes2o/s2orc | v3-fos-license | Distribution of Cancer and Cancer Screening and Treatment Services in Lagos: A 10-Year Review of Hospital Records
PURPOSE In Lagos State, Nigeria, the population distribution of cancers is poorly described because studies are conducted at a few tertiary hospitals. Therefore, this study aims to map all health facilities where cancer screening takes place and describe the cases of cancer screened for and treated. METHODS A cross-sectional survey to identify facilities involved in screening and management of cancers was performed followed by extraction of data on individual cases of cancer screened for and treated at these facilities from 2011 to 2020. All health care facilities in the state were visited, and the survey was performed using standardized national tools modified to capture additional information on cancer screening and treatment. Data analysis was performed using STATA version 14 and R version 3.6.3. RESULTS Cervical cancer was the commonest cancer, accounting for 55% of 2,420 cancers screened, followed by breast (41%), prostate (4%), and colorectal cancers (0.2%). Of the 7,682 cancers treated among Lagos residents, the top five were breast (45%), colorectal (8%), cervical (8%), prostate (5%), and ovarian (4%). The female:male ratio of cancer cases was 3:1. The peak age for cancer among females and males was in the 40- to 49-year age group and 60- to 69-year age group, respectively. The Ikorodu local government area had the highest rate of reported cancer per million population. CONCLUSION Cancer screening is poor with a significant gap in screening for breast cancer since it is the commonest cancer in the state. The findings indicate the urgent need for the establishment of organized screening programs for the predominant cancers in the state and the prioritization of cancer research that addresses key policy and program questions.
BACKGROUND
The GLOBOCAN 2020 estimates show that the burden of cancer is rising and is projected to rise much faster in developing countries. 1 The total number of new cases in Nigeria in 2020 was 124,815, of which 51,398 occurred in males with prostate cancer as the commonest at 29.8% and 73,417 occurred in females with breast cancer being the commonest at 38.7%, followed by cancer of the cervix at 16.4%. 2 Excluding nonmelanoma skin cancer, the top five most frequent cancers in males were prostate, colorectal, non-Hodgkin lymphoma, liver, and leukemia. In females, the top five were breast, cervical, non-Hodgkin lymphoma, ovarian, and colorectal. There were an estimated 78,889 cancer deaths with 34,200 in males and 44,699 in females.
Despite the rising burden of cancers in Africa, the availability of cancer screening and treatment services is limited. 3 Recognition of the implications of this situation has led to global efforts to implement cancer control plans at national and state levels, aiming to decrease cancer incidence, morbidity, and death rates and to enhance the quality of life of people living with cancer. These cancer control plans target a delineated population through the systematic implementation of evidence-based interventions for prevention, early detection, diagnosis, treatment, and palliative care. [4][5][6] Population-based cancer studies or registries are critical and vital because they are core components in the control strategy for cancer. The impact of cancer screening and specific interventions can be evaluated, and information from the registries can inform the formulation of policies and strategies for control and management of cancers. 7,8 Nigeria has developed a national system of cancer registries that consists of 13 population-based and 20 hospital-based cancer registries. 9 the Lagos State, which has an estimated population of more than 24 million people and is reportedly the fifth largest economy in Africa, 11,12 there are three hospital-based cancer registries but no population-based cancer registry. 10 Therefore, the availability of population-based data to inform policy and programing statewide is limited. Furthermore, the cancer registries focus on diagnosis and treatment of cancer and do not provide information about cancer screening, which is also important for decision making.
This study was undertaken to describe the availability and distribution of cancer screening and treatment services and cancers screened for and treated in the Lagos State, Nigeria. The findings will be used to inform policies, planning, and programing related to the prevention and control of cancers in the Lagos State.
Study design and setting
This was a cross-sectional survey to identify health facilities involved in screening and management of cancers in the Lagos State and to gather information about the cancer types seen at these facilities.
Study population and sampling
All health care facilities in the state were visited, and a survey tool was used to identify facilities within the state offering cancer screening and management services. There was no sample size calculation, as all facilities involved in the screening and management of cancers were identified and all available data related to the cancers they had screened for and/or managed were collected.
Data collection
Data collection occurred in two phases.
Phase 1-Mapping of the facilities in the Lagos State that offered any form of screening and/or management of cancers. The state monitoring and evaluation officers were recruited and trained to administer a questionnaire in every health facility within their local government and local council development area. The questionnaire was based on the Federal Ministry of Health's Nigeria Health Facility Register Data Collection Form for Hospitals and Clinics, which was modified to include cancer services and cancer specialists. Geocoordinates of facilities were collected using the location services of smartphones/tablets. A health facility was considered publicly owned if managed by government (federal, state, or local government). Otherwise, it was privately owned. There is a list of government facilities, and ownership was also obtained on-site.
Phase 2-Collection of retrospective data on cancers in the State. All the facilities that reported offering screening and/or management of cancers were visited a second time to extract data from January 2011 or later, depending on the date of commencement of operations of that facility, using the cancer surveillance tool.
Data management, analysis, dissemination, and use. The survey instruments were created in the KoboToolbox data repository, to which the data collectors uploaded the data collected and abstracted from all health facilities. The data were cleaned and edited for errors before analysis. Data were analyzed using STATA statistical software version 14 (StataCorp LLC, College Station, TX) and R version 3.6.3 (Free Software Foundation, Boston, MA), and maps were produced using QGIS version 3.18 (OSGEO, Beaverton, OR).
Percentages were computed for facilities screening for and/or managing any cancer and specific cancers, with disaggregation by local government and sector (public and private). Maps showing the location of health facilities that provide cancer services were prepared.
The frequency of all cancers and specific cancer types was computed and disaggregated by age, sex, sector, and local government. Trends over time were also computed. The projected population of each local government area of Lagos for each year from 2011 to 2020 was used to compute the number of cancer cases per 1,000,000 population.
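A minimal sketch of the rate computation, with hypothetical case counts and projected populations (the example figures are illustrative, not the study's data):

    def cases_per_million(cases, population):
        return 1_000_000 * cases / population

    # Hypothetical: 180 cases in an LGA with a projected population of 700,000.
    print(round(cases_per_million(180, 700_000)))  # ~257 cases per million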
Ethical considerations
Confidentiality was maintained as patients' records and information were entered on encrypted and password-protected devices. Identifiers or numbers were assigned to each patient's record and used in place of names. Approval to collect information was obtained from the Lagos State Ministry of Health, Alausa, Lagos, and ethical approval for the study was obtained from the Institution Review Board of the Nigerian Institute of Medical Research (IRB-21-022).

Key Objective
What can policy makers and service planners learn from the distribution of cancer screening services, cancer treatment services, and cancer cases in the Lagos State?
Knowledge Generated
Cancer screening and treatment services are limited with cervical cancer as the main priority for screening. Breast cancer stands out as the predominant cancer by occurring more than five times as frequently as colorectal, cervical, and prostate cancers, which are the next commonest cancers.
Relevance
The availability and utilization of cancer services as well as the epidemiology of cancer point to an urgent need for organized cancer screening services with breast cancer among women as the first priority.
RESULTS
A total of 2,154 health facilities were identified during the mapping exercise, of which 2,002 (92.9%) were operational. A significant proportion of facilities (82%) were privately owned, whereas 18% were public facilities. Among public facilities, 90% were primary health care level facilities, 8% were secondary facilities, and 2% were tertiary facilities.
Health Facilities That Provide Cancer Services
A total of 447 health facilities surveyed indicated that they provided one or more cancer services. However, in follow-up visits to the facilities, data could only be retrieved from 104 health facilities for cancer screening and from 12 health facilities for cancer treatment.
During the period from 2011 to 2020, 104 health facilities screened for one or more of the following cancers: breast, cervical, colorectal, and prostate. These facilities were distributed across 14 local government areas, whereas nine health facilities provided data on cancer treatment (Fig 1). However, in the first 6 months of the year 2021, 150 health facilities provided data on people who were screened for cancer (mostly cervical cancer), and these facilities were distributed across all 20 Local Government Areas (LGAs), whereas two facilities had data on cancer treatment for the first time. One facility that had cancer treatment data did not have dates.
Screening for Cancer in the Lagos State
From 2011 to 2020, cervical cancer was the commonest cancer screened for in the Lagos State, accounting for 55% of the 2,420 cancer screenings performed. This is followed by breast cancer (41%), prostate cancer (4%), and colorectal cancer (0.2%). However, cervical cancer had the lowest positivity rate, whereas breast cancer contributed the largest number of positive screens (Fig 2). Colorectal and prostate cancers had high positive screen rates but very low numbers. More than 80% of people screened for cancer were aged <50 years; however, people aged 50 years and above had higher positive screen rates.
From January to June 2021, the number of people screened for cervical cancer was more than 10 times higher than the number screened in the 10-year period from 2011 to 2020. During this period in 2021, cervical cancer accounted for 97.8% of the 16,185 cancer screenings performed, followed by breast cancer (1.9%) and prostate cancer (0.3%). This is because of a statewide donor-supported screening program for cervical cancer. The numbers of people screened for breast and prostate cancers remained slightly higher than but similar to the annual performance in 2020. In the first 6 months of 2021, the rate of positive screens for cervical cancer was slightly higher among women who were aged <50 years.
Cancer Cases in the Lagos State
A total of 9,822 cases of cancer were diagnosed from January 2011 to December 2020 in the Lagos State. Residents of the Lagos State accounted for 7,682 (78%) cases of cancer diagnosed, whereas nonresidents accounted for 1,597 (16%) cases. However, the state of residence was not specified for 543 (6%) patients. The number of cancer cases diagnosed annually among Lagos residents increased in the period 2015-2020 compared with 2011-2014 (Fig 3), and of the 7,682 cases of cancer among Lagos residents, 76% (5,817) occurred in females and 24% (1,865) occurred among males.
The highest number of reported cancer cases was reported from Alimosho LGA for both males and females followed by Ikorodu and Kosofe LGAs (Fig 4). However, when the population is taken into consideration, Ikorodu had the highest rate of reported cancer per million population. The distribution of the top 10 cancers diagnosed in Lagos is shown in Table 1. Breast cancer is predominant cancer among females followed by cancer of the cervix, colorectal cancer, ovary, and uterus, whereas prostate cancer is the most prevalent cancer among males. Others are cancers of the colon/rectum, ENT, liver, and skin. Among females, the top five cancers vary by age before stabilizing among women age 50 years and above, whereas among males, colorectal cancer is the commonest cancer in the 20-to 59year age group and from age 60 years, prostate cancer becomes the commonest cancer (Fig 5).
The top five cancers among females have all increased over time (Fig 6), whereas among males, only colorectal, liver, and prostate cancers increased in the second half of the decade.
Among females, 75% of cancers occurred among persons who were either professionals or services and sales workers compared with 56% among males ( Table 2).
DISCUSSION
This study shows that the availability of cancer screening or treatment services in the Lagos State is limited as only 5% of health facilities provided cancer screening or treatment services. Poor documentation and archiving of patient information and records might have contributed to this and the relatively low number of cancer cases reported since more than 300 health facilities had indicated that they provided cancer services but could not provide data to support the assertion. This loss of data is a critical concern as it could hinder optimal policy formulation, program planning, and resource allocation because of incomplete characterization of availability and distribution of cancer services.
Most of the health facility data on screening are recent, from 2019 onward, which may be due to growing interest among facilities and organizations in setting up screening programs. It is noteworthy that in the first 6 months of 2021, the number of people screened for cervical cancer was more than 10 times higher than the number screened in the 10-year period from 2011 to 2020. This was due to the introduction of an organized cervical cancer screening program by a nongovernmental organization. Furthermore, screening services became available in all local governments compared with 14 LGAs in 2020. Screening before 2020 appeared to be largely unorganized, and some of it was actually diagnostic, especially for prostate and colorectal cancers, which had very high positivity rates after the supposed screening. The Ministry of Health organized special cancer screening clinics, but the records were not held at the health facilities. Thus, the number of people screened for cancer is likely to be higher than that reported in this study.
The expansion of screening in 2021 to all local government areas is of great benefit to the population but was exclusively for cervical cancer.

More than 10% of cancers being treated in Lagos occurred among individuals who are not resident in Lagos. The percentage may be higher, as some gave the address of a friend or relative with whom they were residing for the duration of the treatment. Unsurprisingly, the majority of patients resident outside of Lagos were from the Ogun State, which is the only state that shares a border with the Lagos State. Nonresidents were twice as likely to receive cancer care from a private facility compared with residents. This may be indicative of a greater willingness to pay for private services. However, 80% of nonresidents received care at public facilities. This is probably indicative of the limited availability of cancer services in the private sector and of costs.
Given the poor documentation, it is difficult to determine whether the increase in the second half of the decade was due to better documentation or better retrieval of records for this period. However, the three facilities with cancer registries, from which the majority of the data were extracted, all showed an increase in the number of cancer cases in the second half of the decade. This increase is in agreement with the prediction that there will be a major increase in cancer incidence and mortality in developing countries. The top five cancers among Lagos residents are breast, colorectal, cervical, prostate, and ovarian, in descending order. This is slightly different from the national picture, with the top five being breast, prostate, cervical, colorectal, and non-Hodgkin lymphoma. 2 Differentiation by sex showed that the top five female cancers were breast, cervix, colorectal, ovary, and uterus, with breast cancer accounting for 60% of cancers, whereas among men, the top five cancers were prostate, colorectal, ENT, liver, and skin, with prostate cancer accounting for 20%. The contribution of breast cancer is substantial, as it is the main driver of the higher burden of cancer among women. Overall, the female-to-male ratio of cancer in the state is 3:1 but declines to 1.3:1 with the exclusion of breast cancer.
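These ratios can be checked roughly from the figures reported above (5,817 female and 1,865 male cases, with breast cancer at about 60% of female cancers):

    \frac{5817}{1865} \approx 3.1, \qquad \frac{5817 - 0.60 \times 5817}{1865} \approx \frac{2327}{1865} \approx 1.25

which, allowing for rounding, matches the reported 3:1 and 1.3:1 ratios.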
The proportional contribution of individual cancers varies between countries and between administrative areas and hospitals within the same country. 1,13,14 We found that among women in Lagos, the contribution of breast cancer (60%) was much higher than the national figure (39%), whereas among men, the contribution of prostate cancer (20%) was lower than the national figure (30%). This demonstrates the importance of determining the local epidemiology of cancers.
The large contribution of professionals and sales workers may reflect the size of this group in the state's population, an increased risk of cancer associated with their lifestyle, and/or their financial ability to seek treatment. To address the issue of financial access, the Lagos State Government has included the treatment of early stages of breast, prostate, cervical, and colorectal cancers in the state health insurance package. 15 The package also covers the cost of screening for cancer of the cervix but not for breast, prostate, or colorectal cancer.
The highest number of reported cancer cases occurred among people living in Alimosho LGA, which has the largest population in the state. However, when the population is taken into consideration, Ikorodu had the highest rate of reported cancer per million population. This exemplifies the advantage of population-based cancer studies and registries, which enable a better understanding of the burden of disease. Ikorodu is also an active commercial/energy center and a national broadcasting hub, as the transmitters of the Federal Radio Corporation of Nigeria, Voice of Nigeria, and the State Broadcasting Corporation (Radio Lagos/Eko F.M. and LTV) are located there. Further studies on the relationship between environmental factors and cancer in the state are necessary.
In conclusion, the availability and organization of cancer services statewide are poor, with inadequate consideration of the epidemiology of cancer in the state and each health facility operating independently. Inadequate documentation compromises the quality of the data and, thus, their utility for policy, programming, and allocation of resources. The absence of organized screening for breast cancer is a major gap, as is the absence of screening for prostate and colorectal cancers.
We, therefore, recommend that organized screening programs for the predominant cancers in the state be instituted, starting with a comprehensive breast cancer screening program as an urgent priority. Staff at health facilities should be trained in the documentation, archiving, and retrieval of cancer data, and health facilities should establish systems for these processes to avoid data loss. The establishment of a hospital-based cancer registry at major facilities that offer cancer care should be mandatory. | 2022-10-22T06:16:31.758Z | 2022-10-01T00:00:00.000 | {
"year": 2022,
"sha1": "715767e5ec2f0bd109ab4cd3055cf52cf0e68a18",
"oa_license": "CCBYNCND",
"oa_url": "https://ascopubs.org/doi/pdfdirect/10.1200/GO.22.00107?role=tab",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "88ed57db5f3165925215922a30d01183d92b6d96",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
14473058 | pes2o/s2orc | v3-fos-license | Multiple Alternative Splicing and Differential Expression Pattern of the Glycogen Synthase Kinase-3β (GSK3β) Gene in Goat (Capra hircus)
Glycogen synthase kinase-3β (GSK3β) has been identified as a key protein kinase involved in several signaling pathways, such as Wnt, IGF-Ι and Hedgehog. However, knowledge regarding GSK3β in the goat is limited. In this study, we cloned and characterized the goat GSK3β gene. Six novel GSK3β transcripts were identified in different tissues and designated as GSK3β1, 2, 3, 4, 5 and 6. RT-PCR was used to further determine whether the six GSK3β transcripts existed in different goat tissues. Bioinformatics analysis revealed that the catalytic domain (S_TKc domain) is missing from GSK3β2 and GSK3β4. GSK3β3 and GSK3β6 do not contain the negative regulatory sites that are controlled by p38 MAPK. Furthermore, qRT-PCR and western blot analysis revealed that all the GSK3β transcripts were expressed at the highest level in the heart, whereas their expression levels in the liver, spleen, kidney, brain, longissimus dorsi muscle and uterus were different. These studies provide useful information for further research on the functions of GSK3β isoforms.
Introduction
Glycogen synthase kinase-3 (GSK3) is a serine/threonine kinase that is mainly regulated by phosphorylation of its target substrates or itself. Since its initial purification from rabbit skeletal muscle [1], GSK3 has been continuously identified in connection with multiple pathways and shown to be a key component in the regulation of over fifty diverse proteins [2,3]. GSK3 plays a crucial role in the regulation of cell fate, regulating processes such as embryonic development, cell proliferation and apoptosis [4][5][6]. Furthermore, GSK3 has been linked to many human diseases such as cancer [7], Alzheimer's disease [8,9] and type II diabetes [10].
In mammals, GSK3 is primarily generated from two known genes: GSK3α and GSK3β. Interestingly, GSK3α is not found in birds [11]. GSK3 contains a two-domain kinase fold consisting of a β-strand domain at the N-terminus and an α-helical C-terminal domain [12]. The two isoforms have 98% sequence identity in the catalytic domain. GSK3α is 5 kDa larger than GSK3β due to a glycine-rich amino-terminus [13]. Phosphorylation of Ser9 and Ser21 causes inactivation of GSK3β and GSK3α, respectively [14,15], while activation of GSK3β and GSK3α is dependent on the phosphorylation of Tyr216 and Tyr279, respectively [16]. Although studies have indicated that the two GSK3 isoforms are functionally redundant [17], other studies have shown that they have different functions in the regulation of transcriptional activation [4]. In cardiac tissue, the two isoforms have different activities in response to pressure overload [18] and in mediating the differentiation of murine bone marrow-derived mesenchymal stem cells into cardiomyocytes [19].
Two alternative splice variants of GSK3β, named GSK3β1 and GSK3β2, have been isolated from human and mouse tissues [20,21]. Alternatively spliced mRNAs contribute significantly to protein diversity. It has been shown that mutations causing abnormal splicing are associated with disease [22]; for example, mis-splicing of GSK3β resulted in the emergence of leukemia stem cells [23]. The isoforms of GSK3β have distinct substrate preferences [24] and phosphorylation activity on neural-associated proteins [25]. Thus, GSK3β2 has been recognized as a neuron-specific isoform [26]. Our previous study identified five transcripts in pig tissues and showed that GSK3β5 exhibits differential effects on glycogen synthesis in PK-15 cells [27].
Most studies on GSK3 have been carried out in humans and mice, but information on the goat GSK3β gene is still limited. Alternative splicing of GSK3β has been identified in many animal models but not in domestic animals. In this study, we cloned and characterized the goat GSK3β gene and identified six novel GSK3β splice variants in various goat tissues.
Characterization of the goat GSK3β gene
The GSK3β cDNA sequence of sheep was compared with mouse, porcine and human sequences, and specific primer pairs were designed in the conserved regions to amplify fragments covering the entire putative coding sequence of the goat GSK3β1 gene. The goat GSK3β1 gene is 1334 bp in length (GenBank Acc. No.: KJ649149) and consists of a 1263 bp open reading frame that encodes a 420 amino acid protein with an expected molecular weight of 46.72 kDa and an isoelectric point (pI) of 8.68. The amino acid sequence encoded by GSK3β1 shares 100%, 99%, 99% and 99% sequence identity with sheep (NP_001123212.1), mouse (NP_062801.1), porcine (AFN70426.1) and human GSK3β1 (NP_001139628.1), respectively.
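Reported values like these can be cross-checked directly from a translated sequence. Below is a minimal sketch using Biopython's ProtParam module; the sequence string is an illustrative placeholder fragment (not the full 420 aa goat GSK3β1 translation), so the printed values will not reproduce the figures above.

```python
# Minimal sketch: estimating molecular weight and isoelectric point of a
# protein with Biopython's ProtParam. The sequence below is a placeholder
# fragment for illustration, not the actual 420 aa goat GSK3beta1 protein.
from Bio.SeqUtils.ProtParam import ProteinAnalysis

placeholder_seq = "MSGRPRTTSFAESCKPVQQPSAFGSMKVSRDKDGSK"  # illustrative fragment only

analysis = ProteinAnalysis(placeholder_seq)
print(f"Molecular weight: {analysis.molecular_weight() / 1000:.2f} kDa")
print(f"Isoelectric point (pI): {analysis.isoelectric_point():.2f}")
```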
Protein structure and function predictions indicate that goat GSK3β has a 285 aa S_TKc domain between Tyr56 and Phe340 (Fig. 1), which is recognized as the catalytic domain of the serine/threonine protein kinase family. At the C-terminal region of the predicted protein, two regions of low compositional complexity were predicted by the SEG program: one from Ala386 to Ala402 and another from Ala411 to Ser420.
Identification of multiple alternative transcripts of goat GSK3β
In the process of cloning the goat GSK3β gene, we screened and sequenced more than 100 positive clones to identify the multiple GSK3β transcripts (Fig. 2). Six transcripts were observed and designated GSK3β1, GSK3β2, GSK3β3, GSK3β4, GSK3β5 and GSK3β6 (Fig. 3). The nucleotide sequences of the transcripts are 1334, 871, 1048, 742, 1373 and 1095 bp in length, respectively. The cDNA sequences of the six goat GSK3β transcripts were deposited in GenBank as KJ649149-KJ649154. Based on the sequence alignment results, the main differences between the GSK3β splice variants were found between the seventh and eleventh exons. We amplified these variable regions using specific primers (Exon8-11-F and Exon8-11-R, Table 1), and the PCR products were separated by 2.5% agarose gel electrophoresis to visualize the different expression patterns. As shown in Fig. 4, purifying and sequencing the individual bands confirmed that the six transcripts exist in different goat tissues.
The six goat GSK3β transcripts encode proteins of 420, 137, 349, 137, 433 and 309 amino acids, respectively (Fig. 5). For consistency, all of the amino acid sequences were compared with GSK3β1. Because only a partial nucleotide sequence was extracted for GSK3β2, the amino acid sequence of GSK3β2 only contained the eighth, ninth, tenth and an incomplete eleventh exon (nucleotides 679 to 870 of the eleventh exon). Coincidentally, the same amino acid sequence was identified for GSK3β4. Both GSK3β2 and GSK3β4 have an expected molecular weight of 14.81 kDa and share the same isoelectric point, but they lack most of the kinase domain. GSK3β6 has a molecular weight of 34.86 kDa and a theoretical isoelectric point of 8.66. Remarkably, the additional exon (10b) in GSK3β6 did not lengthen the amino acid sequence but instead truncated its translation (Table 2). Predictions of the functional domain structures of the GSK3β protein isoforms indicate that GSK3β3 has a 247 aa S_TKc domain between Tyr56 and Tyr302, whereas GSK3β5 and GSK3β6 contain the domain between Tyr56 and Phe353 (298 aa) and between Tyr56 and Leu307 (252 aa), respectively. However, the two low complexity regions were not found in GSK3β3 and GSK3β6. The GSK3β2 and GSK3β4 sequences were atypical, as no domains were identified when they were subjected to a BLAST search against established functional domain structures (Fig. 1).
Genomic structure of the goat GSK3β gene
To obtain more information about the genomic structure of the goat GSK3β gene, we searched the goat nucleotide database [28] by BLASTN and found a contig encoding the GSK3β cDNAs. The full-length coding sequence of the goat GSK3β gene is formed from eleven major exons and two minor exons, which are alternatively spliced to generate multiple GSK3β isoforms. The contig is located on chromosome 1, and the nucleotide sequence corresponds to the genetic locus from 63,182,357 bp to 63,392,540 bp. Compared with the GSK3β1 transcript, GSK3β2 only contains the first, eighth, ninth, tenth, and eleventh exons, 176 nucleotides of the second exon and 86 nucleotides of the seventh exon. The altered transcript encodes a different open reading frame. The ninth and tenth exons have been deleted in GSK3β3. GSK3β4 was formed without the third, fourth, fifth and sixth exons, 143 nucleotides of the second exon and 17 nucleotides of the seventh exon. GSK3β5 contains a special 39-nucleotide insert called exon 8b, which is originally situated in the intron between the eighth and ninth exons. GSK3β6 lost the ninth exon, 23 nucleotides from the eighth exon and 82 nucleotides from the tenth exon; however, it contained an additional 53-nucleotide insert, which was named exon 10b (Fig. 3).
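To make these splicing patterns easier to compare at a glance, the exon composition described above can be encoded as simple ordered lists. The sketch below is a simplified representation of the text: partial exons are marked with an asterisk, exact nucleotide boundaries are omitted, and the placement of exon 10b reflects its name rather than a verified genomic coordinate.

```python
# Simplified sketch of the exon composition of the six goat GSK3beta
# transcripts as described in the text. "*" marks a partial exon; exact
# truncation boundaries and coordinates are intentionally omitted.
transcripts = {
    "GSK3b1": ["1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11"],
    "GSK3b2": ["1", "2*", "7*", "8", "9", "10", "11"],                          # exons 3-6 absent
    "GSK3b3": ["1", "2", "3", "4", "5", "6", "7", "8", "11"],                   # exons 9-10 deleted
    "GSK3b4": ["1", "2*", "7*", "8", "9", "10", "11"],                          # exons 3-6 absent
    "GSK3b5": ["1", "2", "3", "4", "5", "6", "7", "8", "8b", "9", "10", "11"],  # 39-nt exon 8b insert
    "GSK3b6": ["1", "2", "3", "4", "5", "6", "7", "8*", "10*", "10b", "11"],    # exon 9 lost, 53-nt exon 10b
}

for name, exons in transcripts.items():
    print(f"{name}: {'-'.join(exons)}")
```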
Tissue expression patterns of GSK3β isoforms in the goat

qPCR was used to further assess the mRNA expression patterns of the goat GSK3β transcripts in different tissues. The isoform-specific primer pairs were designed as shown in Fig. S1. The PCR fragments were purified and sequenced to confirm the correct amplification of the individual transcripts.
All transcripts were found to be predominantly expressed in the heart (P < 0.01), whereas expression in the liver and kidney was relatively weak. The GSK3β1 gene was expressed at significantly higher levels in the heart and longissimus dorsi muscle (P < 0.01) than in the other tissues examined. GSK3β2 mRNA was predominantly expressed in the heart and spleen (P < 0.01), with lower levels found in the longissimus dorsi muscle. GSK3β3 mRNA was predominantly expressed in the heart and brain (P < 0.01), with lower levels found in the liver and little GSK3β3 mRNA found in the spleen and kidney. The GSK3β4 and GSK3β5 genes were expressed at the highest levels in the heart and brain (P < 0.01), whereas expression in the liver, spleen, kidney and longissimus dorsi muscle was relatively weak. GSK3β6 mRNA was predominantly observed in the heart, longissimus dorsi muscle and uterus (P < 0.01), with the lowest levels found in the brain. Moreover, the mRNA expression levels in the uterus, longissimus dorsi muscle, spleen and brain varied: GSK3β4 and GSK3β6 mRNA was abundant (P < 0.01) in the uterus, whereas GSK3β2 and GSK3β5 were barely expressed there. GSK3β1 and GSK3β6 mRNA was abundant (P < 0.01) in the longissimus dorsi muscle, whereas GSK3β3, GSK3β4 and GSK3β5 were barely detected. In the spleen, all transcripts except GSK3β2 (P < 0.01) were barely detected. Likewise, GSK3β3 and GSK3β4 mRNA were abundantly expressed (P < 0.01) in the brain, whereas GSK3β1, GSK3β2 and GSK3β6 were relatively weak (Fig. 6).
Western blotting was performed to determine the levels of GSK3β proteins expressed in the seven tissues. GSK3β was detected in all tissues, and two bands were observed that correspond to the sizes predicted from the two functional domain structures. The major, higher molecular weight band represents a larger GSK3β protein, such as GSK3β1 or GSK3β5 (420 aa and 433 aa, respectively). The minor, lower molecular weight band may correspond to GSK3β3 or GSK3β6, which are composed of 349 aa and 309 aa, respectively. The results indicated that the GSK3β protein was expressed at low levels in the liver. Abundant GSK3β protein was observed in the higher molecular weight band in the heart, brain and longissimus dorsi muscle (Fig. 7). Conversely, in the spleen and kidney, relatively higher protein levels were observed in the lower molecular weight band, indicating that the splice variants have different expression levels among goat tissues.
Discussion
In this study we cloned the GSK3β gene in the goat and found six new splice variants. The conserved regions of the S_TKc domain, which include a glycine-rich stretch of residues located at the extreme N-terminus, a nearby lysine residue in the ATP binding region, and a conserved aspartic acid residue located at the center of the catalytic domain that is important for catalytic activity, were identified in GSK3β1, GSK3β3, GSK3β5 and GSK3β6 [29]. Previous studies have demonstrated that GSK3β is inactivated by phosphorylation of Ser9, which leads to the dephosphorylation of glycogen synthase, a key regulatory enzyme in muscle glycogen metabolism [30,31]. GSK3β activity is dependent on the phosphorylation of Tyr216 [16]. In the present study we identified two GSK3β isoforms, GSK3β2 and GSK3β4, that lack 283 amino acids of the kinase domain. These missing amino acids include two important phosphorylation sites (Ser9 and Tyr216), key positively charged residues in the binding pocket (Arg96, Arg180, and Lys205), a binding site in GSK3β2 that would block many substrate side chains (Ser219, Arg220, and Tyr221), and the side chains of the catalytic residues Asp181 and Arg220 that could interact with phosphorylated Tyr216 [12,32,33]. The variety of GSK3β isoforms may be associated with functional divergence. In rats, GSK3β2 is highly expressed in the nervous system, while in COS-7 cells, its phosphorylation activity on MAP1B and tau is lower than that of GSK3β1 [34]. Studies have demonstrated that a transcript lacking the key phosphorylation site in the binding pocket has little effect on the mRNA expression level of GYS1 [27], whereas the other transcripts containing Lys205, Tyr216 and Tyr220 significantly reduced the mRNA expression levels of GYS1 and GYS2 [27]. Moreover, two low complexity regions were found in the C-terminus of GSK3β1, GSK3β2, GSK3β4 and GSK3β5 by the SEG program. These regions may be connected with physiological regulation; for example, p38 mitogen-activated protein kinase (MAPK) inactivates GSK3β [35]. Ser389, an inhibitory residue whose phosphorylation directly blocks the activity of GSK3β, was not present in GSK3β3 and GSK3β6, indicating that p38 MAPK may not be able to inhibit GSK3β3 or GSK3β6 activity through phosphorylation of Ser389 [35]. Previous studies of adult porcine GSK3β show that its mRNA is abundantly expressed in the liver and testis [27], although in the goat it is expressed at the highest levels in the heart (P < 0.01). Our study shows that GSK3β1 and GSK3β6 were highly expressed in the longissimus dorsi muscle, GSK3β2 was highly expressed in the spleen, and GSK3β4 and GSK3β6 were highly expressed in the uterus. The qRT-PCR analysis of mRNA levels in different tissues suggests their potential functions in skeletal muscle development [36], the immune system [37] and the reproductive system. The high levels of GSK3β3, GSK3β4 and GSK3β5 expression in goat brain tissue were similar to previous observations in which two GSK3β alternative transcripts were abundantly expressed in the mouse brain [21].
A BLAST search revealed the presence of exon 8b in goat GSK3β, which corresponds to exon 8b in the mouse GSK3β [20] and porcine GSK3β splice variants [27]. Exon 8b in GSK3β showed a characteristic alignment that followed the GT-AG rule, implying that the splice sites in GSK3β sequences are conserved in mammals. A high expression level was detected in mouse and goat brain tissues, suggesting that the GSK3β transcript containing the exon 8b sequence has special functions in neurological tissue. However, the pig transcript with the inserted exon 8b was expressed at the highest levels in the liver and testis. Although exon 10b in GSK3β has been isolated from the pig [27], the loss of the ninth exon and partial deletion of the eighth and tenth exons in combination with the presence of exon 10b in GSK3β has never been reported. Moreover, the absence of the third, fourth, fifth and sixth exons, which is a novel feature of the genomic structure, was detected in GSK3β2 and GSK3β4.
In the western blot analysis, two bands were observed, corresponding to at least two protein isoforms. Four isoforms with slight differences in molecular weight, GSK3β1 (420 aa), GSK3β3 (349 aa), GSK3β5 (433 aa) and GSK3β6 (309 aa), were present in the bands. The two smaller isoforms, GSK3β2 and GSK3β4, have no significant homology to an antibody-binding domain or an identifiable fragment in a BLAST search of established functional domain structures. They were not observed in the western blots, suggesting that the absent regions may confer different biological functions on the variant GSK3β transcripts.
Animals and sample collection
The Nanjiang Brown goats used in this experiment were raised under standard conditions at the Station of the Nanjiang Brown Goat Breeding Center (Nanjiang, Sichuan, China). All tissues were collected from three female goats at 120 days after birth, within 30 min after slaughter, and immediately frozen in liquid nitrogen.
RNA isolation and cDNA synthesis
Total RNA was isolated from seven tissue samples (heart, liver, spleen, kidney, brain, longissimus dorsi muscle and uterus), which had been stored in liquid nitrogen for RNA extraction. The RNA was extracted using Trizol reagent (Invitrogen, California, USA) according to the manufacturer's instructions. The purity and quantity of the RNA were determined by the 260/280 ratio and the absorbance at 260 nm, respectively. First-strand cDNA was synthesized using the Prime Script RT reagent Kit (Takara, Tokyo, Japan) as described in the manufacturer's protocol. The corresponding cDNA was stored at −20°C. Primers (Table 1) were designed based on the conserved regions. The thermocycling conditions were as follows: an initial denaturation at 95°C for 4 min followed by 35 cycles of denaturation at 95°C for 30 s, annealing at 61.7°C for 60 s, and extension at 72°C for 90 s. A final extension was performed at 72°C for 7 min, and the reactions were stored at 4°C. The PCR products were separated by 2.5% agarose gel electrophoresis, purified using an Agarose Gel Extraction Kit (Sangon, Shanghai, China), ligated into the pMD 19-T vector (Takara, Tokyo, Japan), and transformed into Escherichia coli DH5α cells (Biomed, Beijing, China). Positive clones were sequenced by Invitrogen Life Technology Co., Ltd (Invitrogen, Shanghai, China).
Molecular detection and cloning of GSK3β alternative transcripts
To obtain the alternative transcripts of goat GSK3β, we screened more than 100 positive clones during cloning, which allowed us to visually select different GSK3β alternative transcripts and detect the differences among the fragments. Six new primer pairs were used to identify the specific amplicons of each GSK3β cDNA isoform (Table 1). The isoform-specific primer pairs were designed as follows: the forward primer of GSK3β1 was selected based on the junction between the sixth and seventh exons, and the reverse primer was selected based on the junction between the eighth and ninth exons. The forward primer of GSK3β2 was located in the second exon, and the reverse primer spanned the second and seventh exons. The forward primer of GSK3β3 was selected based on the junction between the eighth and eleventh exons, and the reverse primer was located in the 3′-UTR. The forward primer of GSK3β-CDS and a novel reverse primer spanning the second and seventh exons (different from the GSK3β2 reverse primer) were used to detect GSK3β4. The GSK3β5 forward and reverse primers were located in the eighth exon and exon 8b, respectively. The GSK3β6 forward and reverse primers were located in the seventh exon and exon 10b, respectively (Fig. 6). Semi-quantitative reverse transcription-PCR was used to confirm that the primers specifically and uniquely amplified the target fragments. All the PCR bands were cut from the agarose gel for purification, sub-cloned and sequenced.
Quantitative real-time PCR (qRT-PCR)

qRT-PCR was performed to detect the mRNA expression levels of the GSK3β alternative splice variants using a Bio-Rad CFX96 (Bio-Rad, California, USA). The qRT-PCR was carried out using a SYBR Green-based kit in 10 µL volumes containing 5 µL of SYBR Green Real Time PCR Master Mix (Takara, Tokyo, Japan), 0.8 µL of normalized template cDNA and 0.4 µL of each of the forward and reverse primers that were verified in the RT-PCR. The qPCR procedure was as follows: initial denaturation at 95°C for 3 min; 40 cycles of 95°C for 30 s, the respective annealing temperature for 30 s, and 72°C for 10 s; a final extension for 5 min; and a melting curve with a temperature increment of 0.5°C/s from 65°C to 95°C. Melting curve analysis was used to confirm specific PCR products. The mRNA levels were quantified relative to β-actin expression using the comparative Ct (2^−ΔΔCt) method. All data are expressed as the mean ± SEM. Statistical analysis was performed using one-way ANOVA with the SAS Statistical Analysis System (SAS Institute Inc., NC, USA).
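For readers unfamiliar with the comparative Ct method, the underlying calculation is simple. The sketch below implements the 2^−ΔΔCt computation; all Ct values are made up for illustration and do not come from this study.

```python
# Minimal sketch of the comparative Ct (2^-ddCt) method for relative
# quantification. All Ct values below are invented for illustration.

def relative_expression(ct_target, ct_reference, ct_target_cal, ct_reference_cal):
    """Fold change of a target gene in a sample relative to a calibrator
    sample, normalized to a reference gene (here, beta-actin)."""
    d_ct_sample = ct_target - ct_reference              # normalize the sample
    d_ct_calibrator = ct_target_cal - ct_reference_cal  # normalize the calibrator
    dd_ct = d_ct_sample - d_ct_calibrator
    return 2 ** (-dd_ct)

# Hypothetical example: a GSK3beta transcript in heart vs. liver (calibrator).
fold = relative_expression(ct_target=24.1, ct_reference=18.0,
                           ct_target_cal=27.3, ct_reference_cal=18.2)
print(f"Relative expression (heart vs. liver): {fold:.2f}-fold")
```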
Bioinformatic sequence analysis
The molecular weight and isoelectric point (pI) were calculated by EditSeq 7.10 (DNAstar, Inc. Wisconsin, USA). ClustalW (http://www.ebi.ac.uk/clustalw/) was used for the multiple sequence alignment. The open reading frame was translated and BLAST searched using the NCBI ORF Finder (http://www.ncbi.nlm.nih.gov/gorf/gorf.html). The domain structure of the GSK3β proteins was searched by BLAST and analyzed with the SMART (http://smart.embl.de/) server.
Western blotting
Total proteins were extracted from the different tissues using a Tissue or Cell Total Protein Extraction Kit (Sangon, Shanghai, China) and normalized with a BCA Protein Assay Kit (Sangon, Shanghai, China). The sample and buffer (Beyotime, Shanghai, China) were mixed well, and 20 µg of total protein was loaded per lane on a precast 10% polyacrylamide gel. After SDS-PAGE, the proteins were transferred from the gel to a PVDF membrane. The membranes were blocked with blocking buffer (Beyotime, Shanghai, China) and incubated with the primary antibody (GSK3β Rabbit mAb 27C10, Cell Signaling Technology, Inc, MA, USA) overnight at 4°C. After the membranes were washed with TBST, they were incubated with the secondary antibody (HRP-labeled goat anti-rabbit IgG H+L A0208, Beyotime, Shanghai, China) for 2 h at 37°C. After washing the membrane with TBST and TBS, the proteins were visualized using an ECL detection system (BeyoECL Plus, Beyotime, Shanghai, China). The GAPDH (AG019, Beyotime, Shanghai, China) protein was used as an internal control.

Figure S1: Design of isoform-specific primer pairs for qRT-PCR. Black arrows indicate the isoform-specific primer pairs used in qRT-PCR. Red frames indicate the primer pairs used in RT-PCR to facilitate visualization of the RT-PCR results. | 2016-05-18T10:57:56.210Z | 2014-10-15T00:00:00.000 | {
"year": 2014,
"sha1": "82fbf13c623948cd822dfc0b2b2a937f79f3c44a",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1371/journal.pone.0109555",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "82fbf13c623948cd822dfc0b2b2a937f79f3c44a",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
255548174 | pes2o/s2orc | v3-fos-license | Influences of (in)congruences in psychological entitlement and felt obligation on ethical behavior
Introduction Psychological entitlement and felt obligation are two correlated but distinctive conceptions. Prior studies have mainly explored their influences on employees' (un)ethical behavior, respectively. Recently, several studies suggest the interactive impacts of psychological entitlement with felt obligation on individual behavioral choices. In consistency with these studies, the present study focuses on the influences of (in)congruences in psychological entitlement and felt obligation on employees' (un)ethical behavior. Methods A two-wave multi-source questionnaire survey is conducted to collect 202 matched questionnaires from full-time Chinese workers. The polynomial regression with response surface analysis is employed to test hypotheses. Results The results indicate that: (1) employees have higher levels of work engagement and helping behavior but lower levels of unethical behavior when their psychological entitlement and felt obligation are balanced at higher levels rather than lower levels; (2) employees have higher levels of work engagement and helping behavior but lower levels of unethical behavior when they have higher levels of felt obligation but lower levels of psychological entitlement compared to those having lower levels of felt obligation but higher levels of psychological entitlement; and (3) work engagement mediates the relationship between (in)congruences in psychological entitlement and felt obligation and employees' helping behavior and unethical behavior. Discussion This study provides a novel insight into the interactive influences of (in)congruence in psychological entitlement and felt obligation on employees' ethical behavioral choices.
Introduction
Psychological entitlement and felt obligation are two basic psychological characteristics of Generation Y employees (Anderson et al., 2017). Prior studies have regarded psychological entitlement and felt obligation as two contradictory conceptions, highlighting different influences of psychological entitlement and felt obligation on employees' ethical behavior. For instance, psychological entitlement facilitates deviant behavior and workplace bullying but impedes organizational citizenship behavior (OCB; Qin et al., 2020). In contrast, felt obligation motivates employees to engage in OCB but mitigates their intention to conduct counterproductive behavior and aggressive behavior (Moorman and Harland, 2002;Chen et al., 2021).
Entitlement is the extent to which employees believe that they deserve rewards and appreciation from organizations (Campbell et al., 2004). Therefore, entitlement is a typical self-interested personality. On the other hand, obligation is the degree to which employees believe that they owe consideration and resources to society (Eisenberger et al., 2001). Obligation is developed as tendencies to deviate from self-interests and is defined as an other-oriented personality. Brummel and Parker (2015) believed that psychological entitlement and felt obligation are not the two ends on the same self-interests continuum but are better conceptualized as two distinct theoretical constructs. These two personality traits shape individuals' behavior coordinately. They highlight the importance of studying the self-interested trait (i.e., psychological entitlement) in conjunction with the prosocial trait (i.e., felt obligation) because the absence of self-interest may not represent prosocial tendencies. On the basis of these arguments, they propose the orthogonal structure of felt obligation and psychological entitlement and demonstrate the underlying theoretical lens to explain how these two traits impact employees' behavioral choices (refer to Figure 1).
Running along the line from the independent to the interdependent end of the continuum, employees' behavioral choices can be explained by equity sensitivity based on the equity theory (Mowday, 1991). Entitled employees are sensitive to the ratio of inputs to outputs in the workplace and are eager to differentiate themselves from their peer coworkers (Lange et al., 2019), while felt obligation motivates employees to devote time and resources to their jobs and to perceive their rewards as fair (Brummel and Parker, 2015), mitigating the high sensitivity to the input-output ratio driven by psychological entitlement. Moreover, entitled and obligated employees tend to distinguish themselves from their coworkers and categorize themselves as professionals with specialized knowledge about their jobs by contributing more to organizational effectiveness than their peers. Therefore, employees tend to exhibit more work effort and prosocial behavior when psychological entitlement and felt obligation are balanced at higher levels rather than lower levels (Tennent, 2021). Running along the line from the self-interested to the other-oriented end of the continuum, social value orientation based on the social interdependence theory is used to explain behavioral tendencies (Van Lange, 1999). Employees with high psychological entitlement but low felt obligation are self-serving and neglect their coworkers' benefits (Eissa and Lester, 2022). They tend to engage in more deviant behavior, which is disadvantageous to organizational effectiveness (Naseer et al., 2020). By contrast, employees with high felt obligation but low psychological entitlement are other-oriented. They are more likely to exhibit OCB and to focus on their tasks to contribute to organizational effectiveness (Roch et al., 2019). Therefore, employees tend to be better organizational citizens with low psychological entitlement but high felt obligation rather than high psychological entitlement but low felt obligation.
Although Brummel and Parker (2015) have proposed the orthogonal structure of felt obligation and psychological entitlement, recent studies are still trying to explore the influences of these two traits on employees' work behavior separately. Less is known about the underlying mechanism through which the (in)congruences in psychological entitlement and felt obligation impact employees' (un)ethical behavior (i.e., helping behavior and unethical behavior). Helping behavior denotes voluntary assistance to others in accomplishing their goals or preventing the occurrence of problems (Yue et al., 2017). In contrast, unethical behavior means violating social norms for moral behavior, such as pilfering company materials, giving gifts/favors in exchange for preferential treatment, and divulging confidential information (Paterson and Huang, 2019). Helping behavior and unethical behavior are the two typical aspects of ethical behavior in organizational psychological research (Miao et al., 2020;Li, 2021). Therefore, this study will adopt these two variables as outcomes of the (in)congruences in psychological entitlement and felt obligation. In addition, this study will employ work engagement, denoted as a positive, fulfilling work-related state of mind that is characterized by vigor, dedication, and absorption, as a mediator. Work engagement has been used by prior studies to explain the inner path linking personalities and (un)ethical behavioral choices (Bakker et al., 2012). In addition, work engagement has also been adopted as an indicator of perceptions of equity and prosocial orientations in organizations (Agarwal, 2014;Gheorghe et al., 2022).
Based on the equity theory and the social interdependence theory, the conceptual model is proposed (refer to Figure 2). This study intends to collect data from full-time Chinese workers to test the conceptual model using a multi-source two-wave questionnaire design. By doing so, this study will contribute to the literature on both psychological entitlement and felt obligation. First, we challenge the consensus that felt obligation is beneficial while psychological entitlement is detrimental, emphasizing the need to simultaneously consider psychological entitlement and felt obligation. Previous research has mainly examined the influences of psychological entitlement and felt obligation on (un)ethical behavior separately, reaching a consensus that felt obligation promotes pro-social behavior while psychological entitlement fosters unethical behavior (Lee et al., 2019b; Miao et al., 2020). This line of research fails to shed light on the interactive influences of felt obligation and psychological entitlement on individuals' (un)ethical behavioral decision-making, which has been addressed by Tennent (2021). We provide a novel perspective for investigating how (in)congruences in psychological entitlement and felt obligation influence employees' (un)ethical behavior by adopting polynomial regression with response surface analysis. Second, this study identifies work engagement as the underlying mechanism linking psychological entitlement-felt obligation fit with (un)ethical behavior. The importance of work engagement is emphasized in both the equity theory and the social interdependence theory. In the equity theory, work engagement is the proximal outcome of the sense of equity and further motivates individuals to perform ethical behavior (Cao et al., 2020). In the social interdependence theory, other-oriented rather than self-oriented employees are more likely to devote resources to their jobs and to contribute to organizational effectiveness by inhibiting unethical behavior and fostering ethical behavior (Gheorghe et al., 2022). This study adopts work engagement as the mediator in the relationship between (in)congruences in psychological entitlement and felt obligation and (un)ethical behavior, contributing to both the equity theory and the social interdependence theory.
Hypothesis development

Psychological entitlement and felt obligation
Psychological entitlement is a pervasive sense that an individual deserves more than others, even if this is not commensurate with one's actual abilities and efforts (Zitek and Jordan, 2021). The concept of psychological entitlement is derived from narcissistic personality (Lee et al., 2019a). However, recent research has differentiated psychological entitlement from narcissism: psychologists argue that narcissism is primarily about the self, whereas psychological entitlement is mainly about the self in relation to others (Lee et al., 2019a). To maintain a superior status relative to their peers, entitled employees are more likely to engage in unfavorable actions at work. Prior studies have explored the positive relationship between psychological entitlement and unethical pro-organizational behavior, aggressive behavior, workplace incivility, and workplace deviance based on the social identity theory, equity theory, and social exchange theory (Lee et al., 2019b; Liu and Zhou, 2021). Moreover, the literature indicates that ethical leadership and organizational justice play key roles in inhibiting the positive relationship between psychological entitlement and unethical behavior (Al Halbusi et al., 2021a, 2022b; Al Halbusi, 2022).
Felt obligation and psychological entitlement are two basic psychological characteristics of Generation Y employees (Anderson et al., 2017). The concept of felt obligation was developed by Eisenberger et al. (2001) based on research on perceived organizational support. Felt obligation is a prescriptive belief regarding whether one should care about the organization's wellbeing and should help the organization reach its goals (Ogunfowora et al., 2021). Contrary to psychological entitlement, felt obligation is a typical prosocial trait and has long been regarded as an antecedent of ethical behavior at work. Prior studies have explored the positive influences of felt obligation on helping behavior, green behavior, and voice behavior based on the social exchange theory and the social identity theory (Eisenberger et al., 2001; Campbell et al., 2004; Al Halbusi et al., 2022a). Research also suggests that ethical leadership and organizational justice are important in fostering felt obligation and promoting ethical behavior (Al Halbusi et al., 2021b; Halbusi et al., 2021).
Differentiating psychological entitlement congruences from felt obligation congruences
Equity sensitivity is one of the core concepts in the equity theory and refers to the degree to which people respond to situations of perceived inequality, given their preferences for equality (Miles et al., 1994). Entitled employees tend to perceive their rewards as falling short of their contributions relative to their colleagues (Li, 2021). By contrast, obligation motivates employees to behave altruistically; they tend to give more than they have received compared with their coworkers (Kim and Qu, 2020). Obligation would thus mitigate the high sensitivity to equity stimulated by entitlement. Moreover, Tennent (2021) suggested that in social interactions, obligated and entitled employees are more likely to engage in helping behavior to categorize themselves as professional members.
Alongside the congruence line, employees with high entitlement and high obligation are labeled as interdependent. They care about both their own rewards and organizational effectiveness. The equity theory indicates that employees are motivated to seek perceptions of equity in organizations (Kollmann et al., 2020). Entitled employees chase greater rewards and higher status in comparison with their coworkers, and they prefer to compare their inputs and outputs at work with those of their coworkers (Brummel and Parker, 2015). On the other hand, a high level of obligation drives them to take the organization's benefits into consideration at the same time as they pursue their own (Brant and Castro, 2019; Lorinkova and Perry, 2019). They are more likely to engage in their work to achieve higher performance and to behave prosocially and legitimately to make themselves distinguishable and enhance their sense of equity (Tennent, 2021). By contrast, employees with low entitlement and low obligation are labeled as independent. They are indifferent to their own benefits and to organizational development. Due to their low obligation and low entitlement, they are less likely to engage in prosocial behavior that benefits the organization and their coworkers (Thompson et al., 2020). Worse still, they will not devote their resources to fully fulfill their work roles and maintain high-level performance (Xu et al., 2020). Accordingly, this study proposed the following hypothesis: H1: When an employee's psychological entitlement is aligned with felt obligation at a higher level rather than a lower level, the employee tends to exhibit a higher level of work engagement (H1a) and helping behavior (H1b), but a lower level of unethical behavior (H1c).
Differentiating psychological entitlement incongruences from felt obligation incongruences
Social value orientation, developed from the social interdependence theory, denotes the weights people assign to their own and others' outcomes in situations of interdependence (Balliet et al., 2009). Employees with high felt obligation are motivated to cooperate with others, engage in their current jobs, and help coworkers rather than damage their benefits. Psychological entitlement is a subdimension of narcissistic personality (Ackerman et al., 2019). Prior studies suggest that entitled individuals hold a sustained, inflated view of themselves (Lange et al., 2019). The proximal behavioral results of this disposition include aggressive behavior in the workplace, such as interpersonal deviance, incivility, and bullying (Vatankhah and Raoofi, 2018; Naseer et al., 2020).
Alongside the asymmetry line, employees with high psychological entitlement and low felt obligation are categorized as self-oriented. They are more likely to serve themselves by damaging others' benefits (Vatankhah and Raoofi, 2018). The social interdependence theory divides employees into three categories: prosocial, individualistic, and competitive (Johnson and Johnson, 2005). Entitled employees with low felt obligation are more likely to be categorized as individualistic or competitive (Vatankhah and Raoofi, 2018). Existing studies demonstrate that they are less engaged in their work (Thompson et al., 2020), tend to behave unethically (Miao et al., 2020), and are disinclined to behave helpfully (Li et al., 2022). By contrast, employees with high felt obligation are categorized as other-oriented or prosocial. Their behavior is motivated by felt obligation rather than entitlement. According to the predominant obligation literature, they prefer to devote their resources to their current jobs (Ackerman et al., 2019), exhibit extra-role behavior that facilitates organizational effectiveness (Roch et al., 2019), and refrain from unethical behavior (Gheorghe et al., 2022). Therefore, this study proposed the second hypothesis: H2: Employees with higher felt obligation but lower psychological entitlement have a higher level of work engagement (H2a) and helping behavior (H2b) but a lower level of unethical behavior (H2c) compared with employees with higher psychological entitlement but lower felt obligation.
Work engagement as a mediator of the (in)congruence effect on helping behavior and unethical behavior

Furthermore, it is assumed that employees' work engagement mediates the psychological entitlement-felt obligation (in)congruence effect on their helping behavior and unethical behavior. Prior studies supply fruitful evidence for the influences of work engagement on both helping behavior and unethical behavior based on both the equity theory and the social interdependence theory (Sulea et al., 2012; Meynhardt et al., 2020). Employees who are psychologically engaged in their work have a greater likelihood of going beyond job requirements and devoting more time and effort to work-related issues and relationships, which amounts to helping behavior (Mostafa, 2019). Moreover, work engagement provides employees with self-control resources to regulate their behavior, which contributes to the mitigation of unethical behavior. As noted above, work engagement is shaped jointly by psychological entitlement and felt obligation, which in turn will influence employees' helping behavior and unethical behavior. Therefore, we hypothesized as follows: H3: An employee's work engagement mediates the relationship between (in)congruence in psychological entitlement and felt obligation and the employee's helping behavior (H3a) and unethical behavior (H3b).
Procedures and participants
To avoid common method bias (CMB; Podsakoff et al., 2003) and social desirability bias (SDB; Nederhof, 1985), this study adopted a two-wave multi-source questionnaire survey design. We collected data from two subsidiary companies of a construction group company located in Beijing, China. We contacted the human resource managers of the two subsidiary companies and acquired their assistance. Before the questionnaire survey, we interviewed the human resource managers and frontline employees to confirm the clarity, readability, comprehension, and suitability of our questionnaires (Al Halbusi et al., 2021a). Then, with the aid of the two human resource managers, we sent emails through their intra-company information systems. In the emails, we elaborated on the research purpose and survey process, recruited participants for our sample, and asked for their consent to participate. The respondents were assured that their responses were confidential and that they had the right to end participation in the survey at any time. We formed two research groups on WeChat, a widely used social media application in China, and invited the respondents to join the WeChat groups. The research assistants distributed the questionnaires through mobile websites, with the questionnaires completed on mobile phones in the WeChat groups. Before the data collection, online informed consent was secured from the respondents.
In the first wave, the respondents were required to report their demographic information, psychological entitlement, and felt obligation. In total, 227 participants completed this survey. In the second wave, the respondents were required to assess their work engagement. We contacted their team leaders and asked them to assess the specific participants' helping behavior and unethical behavior. Finally, 202 participants and their leaders completed the survey in this wave, for an effective response rate of 88.98%. Among the sample, 48.5% were women; 62.9% held bachelor's degrees and 17.8% held master's degrees or above; 61.4% were married; the average age of the employees was 32.32 years (±6.33); and the average tenure in their companies was 5.83 years (±5.30). A drop-out analysis found that the dropped samples did not differ in demographic information from the completed samples. The participants were notified that they would receive RMB 15 (≈USD 2.09) for completing the first wave of the questionnaire and RMB 50 (≈USD 6.95) for completing both parts. The higher reward for completing both parts of the questionnaire was adopted to ensure an effective response rate.
Measures
The original questionnaires were published in English. A back-to-back translation procedure was adopted to ensure translation accuracy (Brislin, 1980). A 5-point Likert scale was employed, with "1" indicating "strongly disagree" and "5" indicating "strongly agree." The measured variables are as follows:

Psychological entitlement

Psychological entitlement was assessed using a 4-item scale adapted from Yam et al. (2014). A sample item was "I honestly feel I'm just more deserving than others." Cronbach's alpha of this scale was 0.91.
Felt obligation
It was assessed using a 6-item scale developed by Eisenberger et al. (2001). A sample item was "I feel a personal obligation to do whatever I can to help the organization achieve its goals." Cronbach's alpha of this scale was 0.87.
Work engagement
The three-item ultra-short work engagement scale developed by Schaufeli and De Witte (2017) was adopted in this study. The sample item was "At my work, I feel bursting with energy." This scale yielded a Cronbach's alpha of 0.82.
Unethical behavior
The five-item scale developed by Paterson and Huang (2019) was used in this study. The sample item was "The employee does personal business during company time." Cronbach's alpha of this scale was 0.81.
Helping behavior
The three-item scale proposed by Yue et al. (2017) was utilized in this study. The sample item was "The employee helps other employees when it is clear that their workload is too heavy." This scale yielded a Cronbach's alpha of 0.77.
Control variables
Considering the influences of demographic information on (un)ethical behavior, this study controlled gender, age, and education in the regression analysis, in accordance with previous studies (Savir and Gamliel, 2019).
Analytical strategy
Polynomial regression with response surface analysis was adopted to test the abovementioned hypotheses. This method has been used in psychological and management studies to explore how the combination of two independent variables impacts other dependent variables, particularly in the case of congruence and discrepancy measures (Edwards, 1994). Polynomial regression with response surface analysis can provide a three-dimensional view of the joint influences of two independent predictors on one outcome which makes this statistical approach superior to other traditional regression analyses (Edwards and Parry, 1993).
The classical equation for polynomial regression was Z = b0 + b1X + b2Y + b3X² + b4XY + b5Y² + e. In this equation, Z referred to the dependent variables (work engagement, helping behavior, and unethical behavior), X represented psychological entitlement, and Y represented felt obligation. In the response surface analysis, the coefficients from the polynomial regression were used to examine the surface pattern, providing a three-dimensional visual representation of the data for interpreting the polynomial regression results. The surface pattern was determined by the slope and curvature of the congruence line (X = Y) and the incongruence line (X = −Y) (Edwards and Cable, 2009). Before the polynomial regression with response surface analysis, X and Y were centered (Edwards, 1994). To test hypothesis 1, it was determined whether the slope along the congruence line (X = Y) was significantly positive for work engagement and helping behavior, and significantly negative for unethical behavior. To test hypothesis 2, it was determined whether the slope along the incongruence line (X = −Y) was significantly negative for work engagement and helping behavior, and significantly positive for unethical behavior. To test hypothesis 3, the block approach proposed by Edwards and Cable (2009) was adopted. A block variable combining the five polynomial terms was calculated based on their respective weights in the polynomial regression analysis. Afterward, path analysis was conducted to examine the mediation model using Mplus 7.4.
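As an illustration of this procedure, the sketch below fits the quadratic polynomial by ordinary least squares on synthetic data and derives the slope and curvature along the congruence and incongruence lines from the standard coefficient combinations in Edwards and Parry (1993); the generated data and coefficients are purely hypothetical.

```python
# Sketch of polynomial regression with response surface analysis on
# synthetic data. Along X = Y: slope = b1 + b2, curvature = b3 + b4 + b5.
# Along X = -Y: slope = b1 - b2, curvature = b3 - b4 + b5.
import numpy as np

rng = np.random.default_rng(0)
n = 202
x = rng.normal(size=n)   # centered psychological entitlement (synthetic)
y = rng.normal(size=n)   # centered felt obligation (synthetic)
z = 0.1 * x + 0.5 * y + 0.05 * x**2 - 0.1 * x * y + 0.02 * y**2 \
    + rng.normal(scale=0.5, size=n)  # e.g., work engagement

# Design matrix: intercept, X, Y, X^2, XY, Y^2
design = np.column_stack([np.ones(n), x, y, x**2, x * y, y**2])
coef, *_ = np.linalg.lstsq(design, z, rcond=None)
b0, b1, b2, b3, b4, b5 = coef

print(f"Congruence line   (X =  Y): slope={b1 + b2:.2f}, curvature={b3 + b4 + b5:.2f}")
print(f"Incongruence line (X = -Y): slope={b1 - b2:.2f}, curvature={b3 - b4 + b5:.2f}")
```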
Confirmatory factor analysis
Before implementing the regression analysis, we conducted confirmatory factor analysis (CFA) to test the survey validity. The results in Table 1 supported the construct validity of the measures.
Descriptive statistics and correlation analysis
We also calculated the means and standard deviations of the variables. The correlations between the focal variables are shown in Table 2.
Polynomial regression with response surface analysis
We then conducted a polynomial regression analysis using SPSS v.21.0 to test the hypotheses. The results of Model 2 (Table 3) indicated that psychological entitlement was not significantly associated with work engagement (B = 0.08, SE = 0.05, n.s.), but felt obligation was significantly positively associated with work engagement (B = 0.50, SE = 0.07). To test H1 and H2, we used response surface analysis and examined the response surface pattern based on the curvature and slopes of the congruence and incongruence lines. The results of Model 2 in Table 3 showed that the slope (β = 0.58, SE = 0.10, p < 0.01) for the congruence line (x = y) was positive and significant. The significantly positive slope indicated that, compared with independent employees, interdependent employees had a higher level of work engagement, thus verifying H1a. The results of Model 2 in Table 3 showed that the slope (β = −0.43, SE = 0.08, p < 0.01) for the incongruence line (x = −y) was significant. This result indicated that the level of work engagement was higher for other-oriented employees than for self-oriented employees, thereby supporting H2a.

The results of Model 4 (Table 3) demonstrated that psychological entitlement was not significantly associated with helping behavior (B = 0.03, SE = 0.04, n.s.), but felt obligation was significantly positively associated with helping behavior (B = 0.45, SE = 0.06). The results of Model 4 in Table 3 showed that the slope (β = 0.48, SE = 0.09, p < 0.01) for the congruence line (x = y) was positive and significant. The significantly positive slope indicated that, compared with independent employees, interdependent employees had a higher level of helping behavior, thereby supporting H1b. The results of Model 4 in Table 3 showed that the slope (β = −0.42, SE = 0.07, p < 0.01) for the incongruence line (x = −y) was significant. This result indicated that the level of helping behavior was higher for other-oriented employees than for self-oriented employees, thus confirming H2b.

The results of Model 6 (Table 3) displayed that psychological entitlement was significantly positively associated with unethical behavior (B = 0.30, SE = 0.05, p < 0.01), while felt obligation was significantly negatively associated with unethical behavior (B = −0.60, SE = 0.06, p < 0.01). The results of Model 6 in Table 3 showed that the slope (β = −0.31, SE = 0.10, p < 0.01) for the congruence line (x = y) was negative and significant. The significantly negative slope indicated that, compared with independent employees, interdependent employees had a lower level of unethical behavior, thus supporting H1c. The results of Model 6 in Table 3 showed that the slope (β = 0.90, SE = 0.08, p < 0.01) for the incongruence line (x = −y) was significant. This result indicated that the level of unethical behavior was lower for other-oriented employees than for self-oriented employees, thereby confirming H2c. The results of the response surface analysis are shown in Figure 3.
To examine the underlying mechanism linking the fit between psychological entitlement and felt obligation with (un)ethical behavior, this study used a block variable approach to test hypothesis 3, with the results shown in Figure 4. The fit between psychological entitlement and felt obligation was positively correlated with work engagement (B = 0.47, p < 0.01). In addition, work engagement was positively associated with helping behavior (B = 0.29, p < 0.01) but negatively associated with unethical behavior (B = −0.19, p < 0.05).
To further test H3a and H3b, bootstrapping analysis was used to examine the direct and indirect effects (Table 4). For the role of work engagement in mediating the relationship between (in)congruences in psychological entitlement and felt obligation, and helping behavior, the indirect effect was significant [Effect = 0.22, SE = 0.09, 95% CI = (0.07, 0.41)], thereby confirming H3a. For the role of work engagement in mediating the relationship between (in)congruences in psychological entitlement and felt obligation, and unethical behavior, the indirect effect was significant [Effect = −0.24, SE = 0.10, 95% CI = (−0.47, −0.06)], thus verifying H3b.
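A percentile bootstrap of an indirect effect of this kind can be sketched as follows, treating the block variable as a single predictor; all data here are synthetic and the path values only loosely echo those reported above.

```python
# Sketch of a percentile-bootstrap test of the indirect effect
# block variable -> work engagement -> helping behavior (synthetic data).
import numpy as np

rng = np.random.default_rng(1)
n = 202
block = rng.normal(size=n)                            # block variable (synthetic)
engage = 0.47 * block + rng.normal(scale=0.8, size=n)
helping = 0.29 * engage + rng.normal(scale=0.8, size=n)

def indirect(idx):
    b_s, e_s, h_s = block[idx], engage[idx], helping[idx]
    a = np.polyfit(b_s, e_s, 1)[0]                    # path a: block -> engagement
    # Path b: engagement -> helping, controlling for the block variable.
    design = np.column_stack([np.ones(len(idx)), e_s, b_s])
    coefs, *_ = np.linalg.lstsq(design, h_s, rcond=None)
    return a * coefs[1]

boot = np.array([indirect(rng.integers(0, n, n)) for _ in range(5000)])
low, high = np.percentile(boot, [2.5, 97.5])
print(f"Indirect effect 95% bootstrap CI: ({low:.2f}, {high:.2f})")
```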
Discussion
This study has adopted a two-wave multi-source questionnaire survey to test the influence of the orthogonal structure of felt obligation and psychological entitlement on (un)ethical behavior. Based on the polynomial regression with response surface analysis, this study has found that, along the congruence line, interdependent employees have higher levels of work engagement and helping behavior but lower levels of unethical behavior compared with independent employees. Along the incongruence line, other-oriented employees have higher levels of work engagement and helping behavior but lower levels of unethical behavior compared with self-interested employees. Moreover, work engagement mediates the influence of fit between psychological entitlement and felt obligation on both helping behavior and unethical behavior. Our study has made two contributions to research on both psychological entitlement and felt obligation.
First, this study has explored the joint influence of psychological entitlement and felt obligation on employees' behavioral choices. Prior studies regarded psychological entitlement and felt obligation as two ends of a single continuum. In contrast, Brummel and Parker (2015) suggested that felt obligation and psychological entitlement are two distinct concepts; in fact, they both separately and jointly shape employees' behavioral choices. Existing studies predominantly focused on how psychological entitlement and felt obligation impacted individuals' behavior independently (Lorinkova and Perry, 2019; Alnaimi and Rjoub, 2021). To address this gap, this study has conducted empirical research to explore the joint influence of entitlement and obligation on employees' engagement and (un)ethical behavior based on the orthogonal structure of felt obligation and psychological entitlement proposed by Brummel and Parker (2015).
In terms of the incongruence line, this research has found that other-oriented employees with a higher obligation but a lower entitlement have higher levels of work engagement and helping behavior but lower levels of unethical behavior than self-oriented employees with a lower obligation but a higher entitlement. The results are consistent with prior studies highlighting the prosocial tendencies of felt obligation (Lee et al., 2019b) and the self-serving tendencies of psychological entitlement (Neville and Fisk, 2019) within the theoretical framework of social interdependence theory.

Figure 3. Response surface analysis results.

Figure 4. Results for the model. *p < 0.05; **p < 0.01.

In terms of the congruence line, this research has challenged the prior studies emphasizing the negative effect of high psychological entitlement on engagement and (un)ethical behavior. By virtue of equity theory, this study has noted that obligated and entitled employees are motivated to devote more resources to distinguish themselves from their peers (Tennent, 2021). As a result, entitled employees tend to exhibit high engagement and helping behavior but low unethical behavior when they also have a high felt obligation that enhances their sense of equity. Arguably, this study has provided a novel insight into the outcomes of felt obligation and psychological entitlement.

Second, this study has uncovered the underlying path linking the fit between psychological entitlement and felt obligation with (un)ethical behavior. Work engagement was adopted by prior studies as an important mechanism to explain the influences of personality on ethical behavioral choices (Tisu et al., 2020). From the perspective of equity theory, work engagement reflects employees' responses to their perceived ratio of inputs to outcomes at work, which further determines their ethical behavior (Agarwal, 2014). From a social interdependence perspective, work engagement reflects employees' commitment to contributing to organizational effectiveness by helping co-workers and inhibiting unethical behavior (Tjosvold et al., 2008). Although prior studies explored the antecedents and outcomes of work engagement, scarce research has adopted work engagement to clarify the relationship between personality traits and (un)ethical behavior. Against this background, the present study is novel in adopting work engagement as a mediator to explain how (in)congruences in psychological entitlement and felt obligation impact employees' helping behavior and unethical behavior from both the equity theory and social interdependence theory perspectives.
This research has several practical implications for practitioners. Our findings suggest that entitled employees may be less likely to engage in their jobs and in helping behavior, but more likely to engage in unethical behavior. Therefore, organizations should adopt strategies to reduce the likelihood of employees experiencing such unfavorable psychological states. For example, organizations may measure psychological entitlement in selection and performance evaluation procedures to identify employees with high entitlement.
However, for companies that have already recruited entitled employees, prior studies have not provided specific managerial strategies. This study finds that entitled employees can also deliver outstanding performance when they have a high felt obligation. Therefore, several strategies can be adopted to stimulate entitled employees' work engagement and helping behavior and to attenuate unethical behavior. According to existing studies, positive leadership and organizational support can be used to nurture felt obligation (Lorinkova and Perry, 2019; Thompson et al., 2020).
Limitations and future research
This study has several limitations, which point to directions for future research. First, this study cannot establish causal relationships between the focal variables. The adopted questionnaire survey design can only provide insights into the associations between the focal variables rather than the causal influences of psychological entitlement and felt obligation on behavioral choices. Future research may adopt a cross-lagged panel design to overcome this shortcoming.
Second, this study mainly focuses on the mediating effect of work engagement. There may be alternative mechanisms that explain the indirect influence of the fit between psychological entitlement and felt obligation on (un)ethical behavior. For instance, organizational identification could also explain the indirect relationship between personality and (un)ethical behavior. Future research may explore other mechanisms to enrich the present findings.
Third, this study was conducted in a Chinese cultural context. Entitlement and obligation vary with age and culture (Brummel and Parker, 2015). Future studies may replicate this study in a Western cultural context to confirm its external validity.
Conclusion
By adopting two-wave multi-source data in leader-subordinate dyads, this study has explored the impacts of (in)congruences in entitlement and obligation on (un)ethical behavior. This research has found that when psychological entitlement and felt obligation are balanced at higher levels rather than lower levels, employees have higher work engagement and helping behavior but lower unethical behavior. By contrast, when psychological entitlement and felt obligation are asymmetric, employees with high felt obligation but low psychological entitlement have higher helping behavior and work engagement but lower unethical behavior compared with employees with low felt obligation but high psychological entitlement. In addition, work engagement plays a role in mediating the relationship between obligation-entitlement fit and (un)ethical behavior. Drawing on social interdependence theory, this study has clarified the advantages of felt obligation and the disadvantages of psychological entitlement along the incongruence line, which is consistent with prior studies. Moreover, based on equity theory, this study has proposed that high entitlement is beneficial to employees when their obligation is simultaneously high, enriching the traditional entitlement literature. Through polynomial regression with response surface analysis, this study has provided a novel perspective to expand on the influences of psychological entitlement and felt obligation on ethical behavioral decision-making.
Data availability statement
The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding authors. | 2023-01-10T14:32:18.941Z | 2023-01-09T00:00:00.000 | {
"year": 2022,
"sha1": "71250d06a691f2273a05c8b46c70455576c1461a",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "71250d06a691f2273a05c8b46c70455576c1461a",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": []
} |
202321066 | pes2o/s2orc | v3-fos-license | The association between market-determined and accounting-determined risk measures in the South African context
A key problem in estimating the cost of capital for an unlisted company has been the determination of its beta coefficient. Market prices for such companies are not available, therefore the traditional regression methods for estimation are not possible. Thus, it is necessary for a proxy beta to be determined. In this article an attempt is made to develop such a proxy beta by using eight accounting variables. These accounting variables are shown to be significantly correlated to the market beta for individual companies. In addition, regression analyses are performed to develop an estimation model which will allow the individual company to obtain a proxy beta from its accounting variables. Satisfactory regression equations are developed for both the single share case and the portfolio case. The article is concluded with the presentation of a four-step procedure which will permit managers of unlisted companies to obtain a proxy for their beta and hence to estimate their overall cost of capital. In addition, it is shown that the procedure presented is consistent with the findings of modern portfolio theory.
Introduction
In recent years an increasing number of companies have been using modern capital budgeting techniques in evaluating their capital investment decisions. All of these techniques require that the company determine its cost of capital, i.e. the return it needs to earn from its investments to satisfy all of its providers of capital simultaneously. This cost of capital is then used either as the discount factor in the net present value (NPV) calculation or as the hurdle rate if an internal rate of return (IRR) approach is adopted.
Both academics and practitioners now agree that the weighted average cost of capital approach provides the most appropriate way of estimating the cost of capital for the individual firm. This approach requires the company to estimate the cost of each of its sources of capital (debt, equity, preference shares, etc.) and to weight these by the proportion of each source in the company's target capital structure. In theory this is an extremely simple and appealing procedure. In practice, the costs of debentures, preference shares, and other debt instruments are usually determined by reference to current market rates for these types of instruments. They are therefore relatively easy to establish. However, the cost of equity is not as easily established. Even if the company is a listed company, a share price is quoted and not a return. Furthermore, there is almost universal agreement that the share price is fixed by investors' expectations of future dividends rather than by the history of past dividends. Therefore, the return required by the equity holders on their investment in the company is not easy to determine.
Several approaches to estimating the cost of equity have emerged in the literature. The early approaches were based on forecasts of future dividends and the discounting of these to produce the current share price (cf., for example, Gordon, 1955 and Gordon & Shapiro, 1956). In recent years the Capital Asset Pricing Model (Sharpe, 1964) has provided a means for companies to estimate their cost of capital without having to make forecasts of future dividends. This model can be stated as follows:

RE = RF + B.E(Rm - RF)

where RE = the firm's cost of equity; RF = the risk-free rate; E(Rm - RF) = the expected risk premium paid by the market over and above the risk-free rate; and B = a measure of the covariability of the share price with the market relative to the volatility of the market.
The parameter which is most difficult to estimate in the above equation is the B parameter. The risk-free rate can be estimated by using either the treasury bill rate or the banker's acceptance rate, whereas the expected market premium can be estimated by averaging the market premium over a large number of years. If the company is a listed company the B parameter can be estimated using the market model (Fama, Fisher, Jensen & Roll, 1969):

Ri = a + B.Rmi + ei

where Ri = the return on the share in period i; Rmi = the return on the market in period i; a and B = the regression parameters, which can be estimated using ordinary least squares (OLS) regression; and ei = the random error term, which is assumed to obey the assumptions necessary for OLS regression.
Convention suggests that five years of monthly data yield reasonable estimates of the B parameter.
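As a concrete illustration of the market model estimation, the following sketch (in Python, the language used for all examples added here; the return series are synthetic, not drawn from the JSE sample) obtains the beta coefficient as the OLS slope of monthly share returns on market returns.

```python
# Minimal sketch: estimating the market-model beta by OLS from five years
# of monthly returns. The data below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)
months = 60                                  # five years of monthly data
r_market = rng.normal(0.01, 0.05, months)    # market returns Rmi
true_beta = 1.2
r_share = 0.002 + true_beta * r_market + rng.normal(0, 0.03, months)  # Ri

# OLS fit of Ri = a + B * Rmi + ei
B, a = np.polyfit(r_market, r_share, 1)      # returns [slope, intercept]
print(f"estimated beta = {B:.2f}, alpha = {a:.4f}")
```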
Although the model has received some criticism in the literature (e.g. Roll, 1977) it remains popular in practice.
This is probably due to its intuitive appeal and the simplicity of application. Consequently the CAPM has been used by many listed companies to estimate their cost of capital.
Unfortunately, use of the model is not widespread among unlisted companies. This is because in the absence of a regular market price for the equity of the company, beta estimation is not possible in the conventional sense (i.e. using the market model). To overcome this problem many texts suggest that the unlisted company choose a listed company in the same type of business and estimate the beta for that company (say BL). This beta can then be used as a first approximation for the beta of the unlisted company (Brealey & Myers, 1985:172). However, it has been shown that the beta is directly related to the leverage employed in the company (Hamada, 1972). Therefore, it is necessary to first unlever the beta of the listed company as follows:

BA = BL / (1 + DL/EL)

where BA = the unlevered beta for the listed company; BL = the levered (estimated) beta for the listed company; EL = the total value of equity in the listed company; and DL = the total value of debt in the listed company.
The beta for the unlisted company can then be estimated by re-levering this BA by the leverage employed by the unlisted company. That is,

Beq = BA(1 + Du/Eu)

where Beq = the equity beta of the unlisted company; Du = the total value of debt in the unlisted company; and Eu = the total value of equity in the unlisted company. Whilst this procedure might prove adequate in countries with very large exchanges, it is inadequate for countries with relatively few listed companies. This is true in South Africa where, due to the increase in conglomeration over the last few years, it may prove difficult for an unlisted company to find an appropriate surrogate company. Additional problems are encountered when the listed company is thinly traded (Dimson, 1979), which is the case for many companies listed on the JSE (Strebel, 1977).
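The unlever/relever adjustment can be written as a short routine. The sketch below assumes the no-tax form of the relationships given above; all input values are hypothetical.

```python
# Minimal sketch of the proxy-beta procedure, assuming the no-tax
# relationships BA = BL / (1 + DL/EL) and Beq = BA * (1 + Du/Eu).
# All input values are hypothetical illustrations.

def proxy_beta(levered_beta, debt_listed, equity_listed,
               debt_unlisted, equity_unlisted):
    unlevered = levered_beta / (1 + debt_listed / equity_listed)   # BA
    return unlevered * (1 + debt_unlisted / equity_unlisted)       # Beq

# Listed surrogate: beta 1.4 with D/E of 0.5; unlisted company has D/E of 1.0
print(f"proxy beta = {proxy_beta(1.4, 50, 100, 100, 100):.2f}")    # ~1.87
```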
The problem arises as to how best to estimate beta for an unlisted company. One approach is to attempt to estimate the beta from accounting variables, as such variables are readily available to the management of an unlisted company.
If this is possible then added benefits will ensue. For example, if a relationship can be established between market beta and accounting variables, this relationship could be used, inter alia, to assess the impact on the market's assessment of risk of changes in the accounting structure of the company and to assist in determining the rate of return which can justifiably be earned by companies in regulated industries.
In this article, an attempt is made to develop a relationship between market beta and eight accounting variables. A brief review of the relevant literature is presented in the second section and this is followed by a brief discussion of the data and methodology in Section 3. Section 4 presents initial results showing the correlation coefficients between market beta and each of the accounting variables examined. The regression models are presented and discussed in Section 5 and the article closes with a brief summary and conclusion.
Review of past research
The research into the relationship between market beta and accounting variables can be divided into two distinct classes: a univariate approach, which concentrates on attempts to find a single accounting surrogate for market beta, and a multivariate approach, in which the relationship between market beta and several accounting variables is examined. These will be discussed separately below.
In one of the first major univariate studies, Hamada (1972:449) showed that financial structure had an important influence on beta but he disagreed with certain other authors on whether beta varies directly with the level of financial leverage. This followed an earlier study (Hamada, 1969:19) in which he proved analytically that beta will increase as a company increases its leverage, a conclusion which holds if the Modigliani & Miller (1958) propositions are valid. Lev (1974:627) devised an operating leverage variable (the ratio of fixed to variable operating costs) which proved to have modest explanatory power.

In addition to these attempts to establish a relationship between market beta and a single traditional accounting variable, several researchers attempted to establish a relationship between market beta and an accounting-based beta. For example, Gonedes (1973:410) defined an accounting beta based on earnings divided by the book value of assets. The correlations between this accounting beta and market beta were insignificant except when first differences were used to compute the betas. Beaver & Manegold (1975) extended this work by conducting an extensive investigation employing three different measures of accounting beta. They found significant correlations (both Spearman rank-order and Pearson product-moment) with market betas for all the accounting betas examined (Beaver & Manegold, 1975:248). In addition, they found that the strength of the correlation increased with increasing portfolio size.
Hill & Stone (1980) devised a risk-composition beta which they claimed to be an accounting analogue of the market beta. Their results indicated that the risk-composition beta was generally more highly correlated with the market beta than were the other accounting betas. In addition, the risk-composition beta was able to predict the magnitude of the future market beta with significantly less error than other accounting betas.
As far as South African companies are concerned, Retief, Affleck-Graves & Hamman (1984) showed that the Hill & Stone results did not hold for a sample of companies chosen from the JSE. In particular, they showed that the correlation between most of the accounting betas and the market beta was negative. They concluded that it was unlikely that a single accounting beta would prove an adequate surrogate for market beta in the South African context (Retief, Affleck-Graves & Hamman, 1984:210).
Other researchers have sought to forge multivariate links between beta and several corporate risk factors. For example, Logue & Merville (1972:42) regressed the betas of a sample of industrial common shares on nine financial variables. Only the return on assets, asset size, and financial leverage variables appeared significant, but correlations were low, with r² equalling 0,25. Breen & Lerner (1973:344) divided 1 400 companies into 12 groups according to the month in which 1969 financial results were announced. They then regressed the betas in each month grouping on seven financial variables. They found that most variables were not significant, and those that were, were not consistently significant over time. Rosenberg & McKibben (1973:325) examined 32 variables derived from both accounting and share market data. They found 13 significant variables but the directions of their relationship with beta, as expressed by the signs of their regression coefficients, were generally unexpected. In addition, the variables had only 2% more explanatory power than the naive assumption that beta equalled one for all shares. Lev & Kunitsky (1974:264) found beta to be significantly associated with dividend payout and with indicators of smoothing in a company's capital expenditure, dividends, sales, and earnings. The regression coefficients had expected signs and the r² was 0,47. Melicher (1974:239) found significant multivariate links between beta for electric utility shares during 1967-1971 and dividend payout, return on common equity, market activity, plant to total capitalization, and size. The pattern of signs was generally as expected and r² ranged from 0,33 to 0,41. Replication of the tests on the 1963-1967 period, however, produced very poor results.
In a follow-up study, Melicher & Rush (1974:541) sought to relate changes in betas from 1962-1966 to 1967-1971 to 11 financial variables. The results were discouraging. Only financial leverage, earnings growth, and plant to total capitalization proved significant, with r² ranging from 0,22 to 0,26.
Thompson (1976:178-181) formulated 43 variables to explain the beta of a common share by using prior research on corporate behaviour and characteristics and by developing a model. His model, based on a widely used share evaluation technique, revealed three major risk factors inherent in the beta of a share. These risks stem from fluctuations in the earnings, dividends, and earnings multiple of the individual company.
Belkaoui (1978:5) concluded from evidence based on examining 55 Canadian companies that accounting-based measures of risk are impounded in the systematic risk of a common share. A significant positive relationship was found between both the current ratio and long-term debt to common equity, and systematic risk. However, his results conflicted with those obtained in similar studies conducted in the USA. Pettit & Westerfield (1972:1662), on the other hand, did not find significant correlations between liquidity and leverage against market beta.
In South Africa very few studies have emerged in this area. Retief (1980:42) investigated five return measures, namely return on assets, return on equity, EBIT/average total assets, EBIT/selected liabilities, and return on book capitalization. However, he found no significant results.
The above refers to only a few of the numerous studies attempting to establish the underlying determinants of systematic risk. What seems to be clear is that systematic risk is related in some way to risk factors existing in the corporation. However, it is still far from clear which risk factors are important, and these factors seem to vary between different markets, different economic climates and conditions, and different time periods, and are even sample dependent.
Data and sample selection
The companies chosen to comprise the sample for the study and the time period examined are identical to those selected for the study of Retief, Affleck-Graves & Hamman (1984). The sample consists of 63 companies quoted on the Johannesburg Stock Exchange (JSE), each of which had a June financial year-end for each year from 1973 to 1982. The companies are listed in Appendix 2. For additional details concerning the selection the reader is referred to Retief et al. (1984:207).
An analysis of the previous attempts to establish a relationship between market beta and accounting variables has shown that the choice of variables depends on the researcher and that choices were usually made in an ad hoc manner. Measures, however, can generally be divided into the following classes: profitability, leverage, liquidity, and efficiency.
Rather than choosing a multitude of ratios (e.g. 40 or 50) and increasing the chances of obtaining a spurious relationship, it was decided to choose only one or two variables from each of the major classes of accounting variables. Accordingly, the following eight variables were selected:
Financial leverage (F)
Operating leverage (OL)
Asset turnover (TATO)
Current ratio (CR)
EBIT to total assets (ROA)
Equity beta (BE)
Cash flow beta (BCF)
Standard deviation of cash flow (SCF)
The exact definition used for each of these ratios is presented in Table 1 and it is assumed that the definition used will not materially affect the results.
Five of these eight variables are traditional financial ratios and therefore will not be discussed further. Additional information concerning the applicability and relevance of these ratios can be found in a number of sources, such as Weston & Brigham (1981); Keown, Scott, Martin & Petty (1985); and Halloran & Lanser (1985). The three remaining variables are, however, not standard ratios and therefore warrant some additional comment. Firstly, equity beta was chosen to represent the class of accounting beta variables because it proved to be the best of the traditional accounting betas in the South African context (Retief et al., 1984). Secondly, cash flow beta was also included as it was argued (Retief, 1984:201) that this accounting beta might be more appropriate under conditions of high inflation. The method of estimating cash flow beta is presented in Appendix 1. Finally, the standard deviation of cash flow was incorporated to include some unsystematic component in the accounting variable measures.
Each of the first five ratios was calculated for each of the years 1973-1982. These ten values were then averaged to obtain an average value of the ratio for each company. The two accounting betas and the standard deviation of cash flow were estimated for each company using the ten years of available data, i.e. 1973-1982. Finally, the estimates for the market betas used in the subsequent correlation and regression analyses were estimated using monthly data over the entire period.
The results obtained for each variable were then averaged across the 63 companies comprising the sample. Some summary statistics for these variables are provided in Table 2.
Simple correlation analysis
For each of the accounting variables discussed in the previous section, the Pearson product-moment correlation coefficient with the market beta was calculated for (a) single shares; (b) portfolios consisting of three shares; and (c) portfolios consisting of seven shares (except in the case of SCF and BCF, where the portfolios consisted of six shares due to the reduction of the sample from 63 companies to 60 companies; cf. Appendix 1).
The portfolios were formed by grouping adjacent shares after ranking on market beta, and portfolio variables were calculated as the arithmetic average of the variables for all companies included in the portfolio.
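A sketch of this grouping procedure is given below (synthetic values; the relationship between leverage and beta is an arbitrary illustration): companies are ranked on market beta, adjacent companies are grouped, and each portfolio variable is the arithmetic average over the companies in the portfolio.

```python
# Minimal sketch of the portfolio-formation procedure: rank companies on
# market beta, group adjacent companies, and average each variable within
# a portfolio. Data are synthetic placeholders for the 63-company sample.
import numpy as np

rng = np.random.default_rng(2)
n_companies, size = 63, 3                       # three-share portfolios
beta = rng.normal(1.0, 0.4, n_companies)        # market betas
leverage = 0.3 + 0.1 * beta + rng.normal(0, 0.05, n_companies)  # an accounting variable

order = np.argsort(beta)                        # rank on market beta
groups = order[:n_companies - n_companies % size].reshape(-1, size)
port_beta = beta[groups].mean(axis=1)           # portfolio averages
port_lev = leverage[groups].mean(axis=1)
r = np.corrcoef(port_beta, port_lev)[0, 1]      # portfolio-level correlation
print(f"{len(groups)} portfolios, correlation = {r:.2f}")
```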
The results obtained are summarized in Table 3. The empirical results presented in Table 3 show a consistent pattern across portfolio sizes. Therefore, for example, financial leverage is always the most significant of the accounting variables, whereas the cash flow beta is always the second most significant variable. Also, operating leverage is consistently the least significant, followed by asset turnover and return on assets.
The results presented in Table 3 thus indicate that the financial ratios traditionally employed are significantly related to market beta, i.e. the market's measure of risk. In addition, the sign of the correlation is usually consistent with expectations. For example, financial leverage has a positive correlation with market risk, supporting the widely held belief that increasing financial leverage increases risk.
Similarly, the current ratio has a negative correlation with market risk indicating that, on average, the higher the current ratio, the lower the market risk and vice versa.Again, this is as expected.
However, despite the fact that the ratios are, in general, significantly correlated with market beta, it must be pointed out that the correlations are not particularly high in absolute terms. Even the financial leverage ratio has only a correlation of 0,56 with market beta in the single share case. This implies that only approximately 32% of the variability in market beta can be explained by financial leverage. Therefore, although the relationship is significant and useful, it is unlikely to be of great assistance to the individual company in attempting to assess its market beta.
Indeed, the results therefore indicate that for an investor analysing the riskiness of a company in isolation, leverage should clearly be an important consideration. But it is not the only factor that influences market beta, as other factors account for approximately 68% of the variability in the beta coefficient. On the other hand, for an investor making portfolio decisions the leverage of the company is a crucial factor in assessing risk. At the portfolio level, leverage explains as much as 96% of the variability in the market beta. In essence, what appears to happen is that the other risk factors that affect the riskiness of an individual share (e.g. business risk) are diversified away at the portfolio level. However, the leverage factor is largely unaffected by the diversification and thus becomes the dominant risk factor.
The regression approach
The results presented in the previous section indicated that, individually, the accounting variables examined are unlikely to enable the individual firm to estimate its market risk accurately. It is therefore necessary to examine whether these variables can be used collectively to provide a more accurate estimate of the company's market beta.
In order to examine this, stepwise regression with market beta as the dependent variable was used to determine which combination of the independent variables was most suitable for estimating market beta.The results for the single share case are summarized in Table 4.
An analysis of Table 4 reveals that financial leverage (F) alone explains 32,5% of the variation in Bm. The inclusion of SCF in addition to F increases the coefficient of determination (r²) by 11,12%. This represents a statistically significant increase. Likewise, the inclusion of BCF and the current ratio (CR) also significantly increases the coefficient of determination (at the 10% level of significance). However, the further inclusion of BE, ROA or OL does not significantly increase the coefficient of determination at the 10% level of significance.
Table 4. Summary of steps for the stepwise regression analysis: single share case

The above discussion implies that only the variables F, BCF, SCF and CR need to be considered in the estimation of Bm in the single-share case (from the set of eight variables examined). The regression equation derived using these four variables explains 50,16% of the variation in market beta.
This procedure was repeated for the three-share portfolio case. The stepwise table is shown in Table 5. An analysis of the results (along the same lines as previously) shows that only three variables, namely F, BCF and CR, should be included in the final regression equation. The variable F alone explains 62,13% of the variation in Bm. The inclusion of BCF increases the coefficient of determination by 15,12%, which once again represents a statistically significant increase.
Using only these three variables, a regression equation was obtained which explains 84,93% of the variation in Bm. An identical procedure was repeated for the six-share portfolio case. The stepwise regression results are summarized in Table 6. An analysis of this table shows that the variable F on its own explains 94,07% of the variation in Bm. The inclusion of OL in addition to F only increases the coefficient of determination by 2,3%, whereas the further inclusion of SCF is responsible for an increase of an additional 2,08%. Further inclusion of any other variables does not significantly increase the coefficient of determination at any acceptable level of significance.
However, even though both OL and SCF cause significant increases in the coefficient of determination, the increase is small in magnitude and hence it is not recommended that these two variables be included in the final regression equation. It is therefore recommended that only the F variable be included in the multi-share portfolio case. This yields the following regression equation (t value in parentheses):

Bm = -0,930 + 3,456F (11,26)

This equation explains 94,07% of the variability in Bm. In concluding this section, it is worth noting some overall trends in the results. Firstly, the correlation coefficients (r) improve in all cases as portfolios are formed, possibly indicating a reduction in the measurement error or the occurrence of non-random grouping.
Secondly, the financial leverage ratio (F) is the most significant variable in all the regression models. It not only explains the highest portion of the variability of the beta coefficient individually, but it also displays the highest t values throughout.
Thirdly, in all the cases cited above, the models indicate that the riskiness of a share, as perceived by the market, tends to be most sensitive to the following classes of accounting data: financial structure; cash flow; and liquidity.
Finally, the regression approach confirms the argument that other accounting variables need to be included in the individual share case if a suitable explanation of Bm is to be obtained. Of the variation in Bm, 32% was explained when only leverage was taken into account and 23% was explained when only BCF was taken into account. However, when other variables were included, this improved to 50%, a significant improvement.
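The stepwise procedure used throughout this section can be sketched as follows (synthetic data; the significance test on each r² increment, applied at the 10% level in the text, is omitted here for brevity): at each step the candidate variable that most increases r² is added.

```python
# Minimal sketch of forward stepwise regression: at each step, add the
# candidate variable that most increases r^2. Synthetic placeholder data.
import numpy as np

def r2(X, y):
    X1 = np.column_stack([np.ones(len(y)), X])
    resid = y - X1 @ np.linalg.lstsq(X1, y, rcond=None)[0]
    return 1 - resid.var() / y.var()

rng = np.random.default_rng(3)
n, names = 63, ["F", "OL", "TATO", "CR", "ROA", "BE", "BCF", "SCF"]
X = rng.normal(size=(n, len(names)))
y = 0.9 * X[:, 0] + 0.4 * X[:, 7] + rng.normal(0, 0.5, n)   # beta driven by F, SCF

chosen = []
for _ in range(4):
    gains = [(r2(X[:, chosen + [j]], y), j)
             for j in range(len(names)) if j not in chosen]
    best_r2, j = max(gains)
    chosen.append(j)
    print(f"added {names[j]}: cumulative r^2 = {best_r2:.3f}")
```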
Conclusion
In this article the relationship between market beta and eight accounting variables has been examined. The results obtained indicate that each of the eight accounting variables is individually significantly correlated with the market's assessment of the systematic risk inherent in the individual company. As such, the results indicate some support for the value of accounting information from an investor's point of view. However, it must be stressed that the evidence of a significant correlation only indicates the presence of a linear relationship between the two variables. It does not enable one to conclude that a causal relationship exists. The latter can only be established by means of a thorough theoretical study, which is not the aim of this article.
From the regression analyses it was found that for an individual company the eight accounting variables examined can provide a reasonable estimate of the market beta. Thus, managers of unlisted companies can estimate their cost of equity using the following four-step procedure:
(i) From their historic annual financial statements and their future target structure, estimate the leverage (F), the current ratio (CR), the standard deviation of their cash flow (SCF), and the cash flow beta (BCF).
(ii) Use these estimates in the four-variable regression equation derived in the previous section to obtain an estimate of their market beta, Bm.
(iii) Use this estimate of Bm to obtain an estimate of their cost of equity from the Capital Asset Pricing Model: Req = RF + Bm.E(Rm - RF).
(iv) This estimate of the cost of equity (Req) can be used in the weighted average cost of capital calculation to obtain their overall cost of capital (k0):

k0 = Σ (i = 1 to n) Wi.Ri

where Wi = the proportion of total funds provided by source i; and Ri = the estimate of the return required by the providers of the ith source of funds.

The results presented in this article indicate that the above procedure will result in a statistically significant estimate of Bm and hence of the weighted average cost of capital. However, it must be remembered that this regression equation was established using listed companies. This was necessary because a market estimate of Bm was required to determine the regression relationship. In using the Bm in the CAPM to obtain the return required by equity holders it is implicitly assumed that such equity is easily marketable. Because this is not valid for unlisted companies, it is possible that a premium should be paid for this lack of liquidity. Examination of the fixed interest markets indicates that a premium of between 1% and 3% is evident in the yield to maturity of non-liquid assets. Thus it is tentatively suggested that in stage (iii) above, a 2% premium be added to Req to allow for the lack of liquidity. Of course, this is merely a tentative recommendation and additional research is necessary to determine the exact premium, if any, which should be earned by unlisted companies.
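The four-step procedure can be condensed into a short routine. In the sketch below, the regression coefficients in step (ii) are hypothetical placeholders standing in for the estimated four-variable equation, the input values are invented, and the 2% liquidity premium follows the tentative suggestion above.

```python
# Minimal sketch of the four-step cost-of-capital procedure for an
# unlisted company. The regression coefficients in step (ii) are
# hypothetical placeholders, not the paper's estimated values.

def cost_of_capital(F, CR, S_cf, B_cf, rf, market_premium, weights_and_costs):
    # (ii) proxy market beta from accounting variables (placeholder coefficients)
    beta = -0.5 + 2.8 * F - 0.1 * CR + 0.4 * S_cf + 0.2 * B_cf
    # (iii) cost of equity from the CAPM, plus a tentative 2% liquidity premium
    r_eq = rf + beta * market_premium + 0.02
    # (iv) weighted average cost of capital; equity weight is the residual
    w_debt_costs = sum(w * r for w, r in weights_and_costs)
    w_equity = 1 - sum(w for w, _ in weights_and_costs)
    return w_debt_costs + w_equity * r_eq

k0 = cost_of_capital(F=0.45, CR=1.6, S_cf=0.3, B_cf=0.8,
                     rf=0.08, market_premium=0.07,
                     weights_and_costs=[(0.4, 0.10)])   # 40% debt at 10%
print(f"overall cost of capital = {k0:.1%}")
```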
Finally, it is interesting to note that the regression analyses indicate that as portfolios are formed, fewer of the accounting variables are necessary to produce a reasonable estimate of the market beta. Indeed, with portfolios of size six, the leverage ratio on its own enables management to estimate the market beta with a high degree of accuracy. On reflection, this is not a strange result. It merely reflects the benefits of diversification. Therefore, when six companies are combined, there is a diversification effect which reduces the sensitivity of the portfolio to changes in the individual companies' cash flows and current ratios. For example, one company may have unexpectedly low cash flow in a particular year but this may be compensated for by another company which has unexpectedly high cash flow in that year. The effects of cash flow variations are thus diversified away in the portfolio. However, the leverage factor, or degree of financial risk, is not as easily diversified. The portfolio will certainly reflect the average leverage of its constituent companies, but there is no compensation within an individual year. For this reason it is not surprising that the leverage factor remains a significant variable at the portfolio level. Indeed, the results presented indicate that as far as investors are concerned, as opposed to managers, leverage is the only significant accounting variable to consider, as they would be expected to hold diversified portfolios in an efficient market. This in turn provides support for the separation theorem (Sharpe, 1964), which indicates that the major decision facing an investor is the amount of assets he wishes to place at risk and the amount he holds in the risk-free asset. This is nothing more than saying that his major decision is the degree of leverage he personally wishes to have. He can obtain such leverage
Appendix 1 Cash flow beta
Traditionally the financial risk of a company has been associated with the company's ability to service its fixed charges, such as principal and interest repayments on debt, lease payments, and dividends on preferred shares. However, it has been suggested that in assessing the financial risk of a company the investor cannot rely on debt ratios alone but must also take cognizance of the payment schedule of the debt and the average interest rate. Therefore, in addition to the usage of debt ratios, it has been suggested that investors should also analyse the cash flow ability of the company to service the debt. Consequently, the greater and more stable the cash flows of the company, the smaller the risk of insolvency and consequently the less risky the company from the market's point of view. Moreover, it is generally accepted that under conditions of high inflation, cash flow becomes an important variable whilst traditional measures of earnings become less important. It was therefore decided to define a cash flow beta in a similar way to that in which other accounting betas have been defined (e.g. Hill & Stone, 1980).
To do this, it was necessary to define cash flow. Cognizance was taken of the debate in the literature concerning the appropriate definition of cash flow, but this study does not attempt to address the issue. Rather, a simplistic definition of cash flow was used, namely cash flow = earnings after taxation plus depreciation of fixed assets.
Irrespective of the definition used, it would be incorrect to simply calculate the absolute cash flow values for each company for each year and regress those values against a market index of cash flows in order to obtain a 'cash flow' beta coefficient. This follows because the beta concept is a return concept and not an absolute level concept. In addition, inflation causes a reduction in the purchasing power of money, and hence an 'increase' in the cash flow figure tends to occur each year. This would result in correlation results being biased in the sense that a positive correlation would emerge due to a common inflation effect. For this reason, the relative change in the value of cash flow from year (t - 1) to year (t) was used for the calculation of the cash flow beta. Thus the cash flow beta (BCF) was estimated using the following time series regression:

Rit = a + BCF.Rmt + uit

where Rit = the relative change in the value of cash flow for company i from year (t - 1) to year (t); Rmt = a market-wide index of the relative change in cash flow for the market from year (t - 1) to year (t); uit = the stochastic individualistic component of Rit; and a, BCF = the intercept and slope parameters respectively of the assumed linear relationship between Rit and Rmt.
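A sketch of this estimation follows (synthetic cash flow series): relative year-on-year changes are computed for both the company and the market-wide index, and the cash flow beta is the OLS slope of the former on the latter.

```python
# Minimal sketch of the cash flow beta estimation: regress the company's
# relative change in cash flow on a market-wide index of relative changes.
# The cash flow series below are synthetic placeholders.
import numpy as np

company_cf = np.array([100, 115, 130, 120, 150, 170, 160, 185, 210, 230], float)
market_cf = np.array([1000, 1080, 1150, 1100, 1250, 1380, 1320, 1450, 1600, 1700], float)

r_company = np.diff(company_cf) / company_cf[:-1]   # relative change, year (t-1) to t
r_market = np.diff(market_cf) / market_cf[:-1]

b_cf, a = np.polyfit(r_market, r_company, 1)        # OLS slope = cash flow beta
print(f"cash flow beta = {b_cf:.2f}")
```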
ROEi = (earnings after taxation - minority interest in income - preference dividends) ÷ (book value of common equity)
ROEm = a market index of accounting equity rates of return

Each of the above measures was calculated per company per year for 1973-1982. To establish the relationship with market beta, a single value for the time period studied was calculated as follows. For example, in the case of F (the financial leverage ratio), for company i in year t:

(Fi)t = [(fixed assets and all other non-current assets) + (current assets) - (equity)] ÷ (total assets)

then

Fi = Σ (t = 1 to N) (Fi)t / N for the total period

where N = the number of years in the time period studied.

The calculation of the relative change in cash flow (d(CF)) posed a problem due to the occurrence of negative values or values close to zero. For example, for company i:
Table 1. Definitions of ratios (variables) | 2019-09-11T08:12:21.459Z | 1986-09-30T00:00:00.000 | {
"year": 1986,
"sha1": "51ad84d3d423fb48b72005a7750df15e024de120",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.4102/sajbm.v17i3.1050",
"oa_status": "HYBRID",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "63ee77a328105bf1d7cbd9ecc305ad6afe13554b",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Economics"
]
} |
235187745 | pes2o/s2orc | v3-fos-license | Treatment of Bronchopleural and Alveolopleural Fistulas in Acute Respiratory Distress Syndrome With Extracorporeal Membrane Oxygenation, a Case Series and Literature Review
OBJECTIVES: To describe a ventilator and extracorporeal membrane oxygenation management strategy for patients with acute respiratory distress syndrome complicated by bronchopleural and alveolopleural fistula with air leaks. DESIGN, SETTING, AND PARTICIPANTS: Case series from 2019 to 2020. Single tertiary referral center—University of California, San Diego. Four patients with various etiologies of acute respiratory distress syndrome, including influenza, methicillin-resistant Staphylococcus aureus pneumonia, e-cigarette or vaping product use-associated lung injury, and coronavirus disease 2019, complicated by bronchopleural and alveolopleural fistula and chest tubes with air leaks. MEASUREMENTS AND MAIN RESULTS: Bronchopleural and alveolopleural fistula closure and survival to discharge. All four patients were placed on extracorporeal membrane oxygenation with ventilator settings even lower than Extracorporeal Life Support Organization guideline recommended ultraprotective lung ventilation. The patients bronchopleural and alveolopleural fistulas closed during extracorporeal membrane oxygenation and minimal ventilatory support. All four patients survived to discharge. CONCLUSIONS: In patients with acute respiratory distress syndrome and bronchopleural and alveolopleural fistula with persistent air leaks, the use of extracorporeal membrane oxygenation to allow for even lower ventilator settings than ultraprotective lung ventilation is safe and feasible to mediate bronchopleural and alveolopleural fistula healing.
Bronchopleural and alveolopleural fistulas (BAPFs) are exceedingly difficult to treat in patients with acute respiratory distress syndrome (ARDS) on ventilatory support. Approximately 10-30% of ARDS cases are complicated by a BAPF, with 2% having a persistent air leak (PAL) (1,2). Despite conventional ventilator strategies for ARDS (i.e., low tidal volume [TV] ventilation), increased transpulmonary pressures (TPPs) and high respiratory rates (RRs) may impede closure of a BAPF (2,3). We describe patients with ARDS and BAPF who were placed on extracorporeal membrane oxygenation (ECMO) to allow for low ventilator pressures that promote BAPF closure.
BAPFs are pathologic connections between either the bronchi or the alveoli and the pleural space. Differentiating between them can be difficult, and both can have a prolonged or persistent air leak, defined as an air leak lasting greater than 5 to 7 days (4). PALs are associated with longer hospital stays and increased mortality. Bronchopleural fistulas can occur in trauma, after radiation or microwave ablation, in pulmonary infections, or iatrogenically due to pulmonary resections or airway procedures (5). Alveolopleural fistula etiologies include spontaneous pneumothorax, pulmonary infections such as necrotizing pneumonias, malignancies, iatrogenic causes after thoracentesis or chest tube placement, and barotrauma from mechanical ventilation. There is considerable overlap in the etiologies of BAPFs, and differentiating the cause may be important for therapeutic options. Traditionally, bronchopleural fistulas are treated surgically, while alveolopleural fistulas are managed nonoperatively, that is, with tube thoracostomy. When positive pressure ventilation is required, the goal is to minimize airway pressures to facilitate BAPF closure (6). Other therapies for refractory BAPF have included surgical repair, endobronchial procedures (endobronchial valves, etc.), and pleurodesis (7). There is considerable variability in the management of BAPFs, depending on the etiology and local expertise. Conventional ventilator management in ARDS relies on low TV ventilation and plateau pressure goals, which are defined as 6 mL/kg of ideal body weight (IBW) and less than 30 cm H2O, respectively. However, this level of ventilatory support may impede BAPF healing due to high TPP. TPP is the distending pressure across the lung (the pressure difference from the airway opening to the pleural space) (8). In the setting of BAPFs, high TPP promotes PALs (9). Thus, decreasing the TPP may help facilitate closure of a BAPF.
ECMO is a type of mechanical life support for refractory cardiac and/or respiratory failure (10). This technology has been increasingly used for severe ARDS in the United States since the 2009 H1N1 influenza pandemic and again in 2020 due to the coronavirus disease 2019 (COVID-19) pandemic (11). There is controversy surrounding ventilator management while on ECMO for ARDS. While on ECMO, the Extracorporeal Life Support Organization (ELSO) recommends ultraprotective lung ventilation (UPLV), which is used at the majority of ECMO centers (12). UPLV is defined as positive end-expiratory pressure (PEEP) of 10-15 cm H2O, a driving pressure (DP) of 10 cm H2O, and RR of 5-10 breaths per minute (13). Depending on the respiratory system compliance, the lower DP results in lower TVs of ~2-4 mL/kg of IBW. Thus, the goal of UPLV supported by ECMO is to lower the TPP further than low TV ventilation, which will facilitate BAPF closure by decreasing stress across the lung.
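As a back-of-the-envelope illustration of why a DP of 10 cm H2O yields such low tidal volumes in severe ARDS, the sketch below (in Python; the compliance and height values are hypothetical) computes TV as respiratory system compliance times DP and expresses it per kilogram of IBW using the Devine formula.

```python
# Back-of-the-envelope sketch: tidal volume under UPLV, assuming
# TV = Crs * DP and the Devine ideal body weight formula. The compliance
# and height values are hypothetical illustrations.

def ibw_kg(height_cm, male=True):
    inches = height_cm / 2.54
    return (50 if male else 45.5) + 2.3 * (inches - 60)

crs = 20          # respiratory system compliance in mL/cm H2O (severe ARDS)
dp = 10           # driving pressure in cm H2O (UPLV)
tv = crs * dp     # tidal volume in mL

ibw = ibw_kg(175)
print(f"TV = {tv} mL = {tv / ibw:.1f} mL/kg IBW")   # ~2.8 mL/kg for a 175 cm male
```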
We describe our experience in using even lower than UPLV settings in four patients with ARDS and BAPF to allow for further decreases in TPP to promote BAPF closure.
PATIENTS AND METHODS
This is a retrospective case series from a single tertiary care academic hospital system. Patient records for study inclusion were identified in a previously existing ECMO database from January 1, 2017, to September 30, 2020. Our ECMO database contains data required by the ELSO and is maintained by a group of physician and nurse leaders as part of an institutional quality improvement database. The Institutional Review Board waived the need for informed consent. All patients over the age of 18 years with ARDS who had a BAPF prior to ECMO initiation were included. The primary outcomes were BAPF closure and survival to discharge. Excel 365 (Microsoft, Redmond, WA) was used for data collection and analysis.
RESULTS
These four patients' demographics, etiology of ARDS and BAPF, and ventilator settings can be seen in Table 1. The ventilator settings throughout the patient's hospitalization are found in Figure 1. Patient outcomes can be seen in Table 2.
Patient 1
A woman in her late 20s with a history of IV drug abuse presented with fevers and altered mental status and was found to have right internal jugular vein septic thrombophlebitis complicated by methicillin-resistant Staphylococcus aureus (MRSA) bacteremia, MRSA pneumonia, and ARDS. She was intubated for hypoxic respiratory failure and developed bilateral pneumothoraces, for which chest tubes were placed. She also had pneumomediastinum and subcutaneous emphysema with air leaks noted in both chest tubes. The patient was then transferred to our institution with severe ARDS (ratio of Pao2 to Fio2 of 96), a PAL noted from both chest tubes for the preceding 6 days, and anuric renal failure. Continuous renal replacement therapy was started using a right femoral dialysis catheter upon arrival. She was also placed on venovenous ECMO with a left femoral 21F drainage and left internal jugular 18F reinfusion cannula. Immediately post-ECMO, UPLV was initiated, which consisted of: RR from 30 to 12 breaths per minute; DP from 20 to 8 cm H2O, resulting in TV from 277 mL (4.8 mL/kg of IBW) to 87 mL (1.5 mL/kg of IBW); and PEEP from 16 to 14 cm H2O. Her PAL resolved after 3 days on ECMO; however, due to intermittent recurrence of the air leak, the PEEP was further decreased to 5 and then 0 cm H2O on ECMO days 12 and 17, respectively. Concurrently, DP was decreased to 5 and then 1 cm H2O on ECMO days 16 and 17, respectively. With these ventilator settings, which were lower than UPLV (i.e., essentially on T-piece), she required full ECMO support. Three days after these changes, even the intermittent air leak was no longer present. Ventilatory support was gradually increased without recurrent air leak; see Figure 1 for ventilator management on ECMO. She was weaned off ECMO on day 25 and liberated from the ventilator on day 27. Chest tubes were removed after 35 days, and she was discharged to a long-term acute care (LTAC) facility. Four months later, she returned to visit our ICU and was on room air with no dyspnea.
Patient 2
A man in his early 20s with a history of asthma who was actively vaping presented with acute dyspnea and developed respiratory failure requiring mechanical ventilation, with a course complicated by a right-sided pneumothorax requiring a chest tube that had a continuous air leak. An extensive workup for the etiology of his respiratory failure, including infectious, medication-induced, and rheumatologic causes, was negative, so he was diagnosed with e-cigarette, or vaping, product use-associated lung injury (EVALI) (14). Ventilator management on ECMO is shown in Figure 1. He tolerated being endotracheally intubated and was able to walk around the unit with assistance. After 7 days on ECMO, the patient was decannulated and successfully extubated the same day, after 8 days on the ventilator. The chest tube was removed after 11 days, and he was discharged home after a 12-day hospitalization.
Patient 3
An obese woman in her early 30s presented with hypoxic respiratory failure due to influenza with MRSA pneumonia superinfection. Her hospital course was complicated by a right-sided pneumothorax 3 days postintubation requiring a chest tube. Subsequently, a continuous air leak was noted for 3 days prior to transfer to our center, where she was placed on venovenous ECMO (right femoral 25F drainage and right internal jugular 21F reinfusion cannula) for severe ARDS (Pao2/Fio2 88) and barotrauma. Ventilator changes post-ECMO consisted of: RR from 30 to 19 breaths per minute; DP from 36 to 10 cm H2O, resulting in TV from 330 mL (5 mL/kg of IBW) to 78 mL (1.2 mL/kg of IBW); and PEEP maintained at 2 cm H2O. The continuous air leak immediately resolved with ECMO and lower than UPLV settings (specifically the PEEP). There was gradual improvement of pulmonary compliance without recurrent air leak (see Figure 1), and the chest tube was removed after 11 days. The patient was weaned off ECMO after a total of 12 days and extubated after 20 days, and she was discharged home after a 24-day hospitalization.
Patient 4
A man in his early 50s with hypertension and polycythemia presented with fevers and cough and was found to have COVID-19 complicated by ARDS requiring mechanical ventilation, with a course complicated by a pneumothorax requiring a chest tube with a PAL; he was placed on venovenous ECMO. With lower than UPLV settings (DP and PEEP) on ECMO, the patient's PAL, which had been present for 13 days, immediately resolved. Ventilatory support was gradually increased without recurrent air leak; see Figure 1. He was decannulated from ECMO after 16 days, and the chest tube was removed the same day. After 27 days on the ventilator, he was able to tolerate intermittent trach collar and was discharged to an LTAC after his 42-day hospitalization.
DISCUSSION
We present a case series and literature review of patients who have BAPF with ARDS. There are multiple observations: 1) ECMO should be considered in patients with ARDS complicated by BAPF and PALs; 2) a ventilation strategy with even lower PEEP and DP than UPLV (ELSO guidelines) may be used while on ECMO to further decrease TPP and RR to mediate closure of BAPFs with air leaks (patients 2 and 3) or PALs (patients 1 and 4); and 3) this strategy may apply to multiple etiologies of ARDS and BAPF, including EVALI, influenza, COVID-19, and MRSA pneumonia. ECMO may be considered in patients with severe ARDS and BAPF (15,16). ECMO support maintains oxygenation and ventilation, allowing clinicians to minimize ventilatory support, which, in addition to tube thoracostomy, can promote healing of BAPFs. While ECMO cannulation strategies are beyond the scope of this case series, we recommend a cannulation strategy that offers maximal flows with minimal recirculation. This strategy usually requires a two-catheter approach; three patients described in our series had a femoral drainage and internal jugular return cannula. Only case 2 had a single-site, dual-lumen ECMO cannula (e.g., Crescent [Medtronic, Minneapolis, MN] or Avalon Elite [Getinge Maquet, Rastatt, Germany]) (17,18). Single-site catheters may be limited due to lower ECMO flows and the need for transesophageal echocardiography or fluoroscopy for cannula placement.
In patients with ARDS and BAPF, ECMO may facilitate closure if used with the appropriate ventilator management that minimizes TPP (i.e., UPLV or lower). This strategy decreases the patient's mean airway pressure and RR. Many ECMO centers use UPLV, similar to the landmark ECMO to Rescue Lung Injury in Severe ARDS study, in an effort to minimize ventilator-induced lung injury (VILI) in ARDS patients (12,15,19). Furthermore, decreasing the TPP by reducing the DP may have a mortality benefit in patients with ARDS (9). UPLV can mediate BAPF closure by minimizing air leak that promotes pleural apposition. In three cases (patients 2-4), the air leaks resolved immediately with ventilator settings lower than UPLV as seen in Table 2 and Figure 1. The ventilator settings in these cases were not slowly titrated down but immediately changed post-ECMO cannulation, resulting in the resolution of the air leaks.
However, even with UPLV while on ECMO, some patients may continue to have a PAL. This occurred in case 1, who had an ongoing air leak while on UPLV. Therefore, we further minimized TPP with a DP of 1 cm H2O and PEEP of 0 cm H2O, which led to resolution of the air leak. After 3 days, we then challenged the lung with increasing PEEP to see if the air leak would recur. In these scenarios, the further decrease in the TPP likely promoted BAPF healing. Additionally, due to the efficiency of ECMO in removing carbon dioxide, the RR can be decreased to 5-10 breaths per minute to further facilitate BAPF closure. This strategy may also further minimize VILI through the decrease in cyclical lung strain with each ventilated breath (19). We acknowledge that some level of PEEP may be protective against VILI in patients with ARDS by preventing atelectrauma and thus superinfections (20,21). Although there is a paucity of data on long-term outcomes, we believe our strategy offers a pragmatic approach to BAPF treatment.
In some centers, after ECMO initiation, an early tracheostomy is performed, or the patient is extubated, so that they may be quickly weaned off sedation and mechanical ventilation. This strategy minimizes or eliminates the need for positive pressure ventilation, which decreases TPP. Our center does not routinely perform early tracheostomies for ARDS, and the decision is made on a case-by-case basis. For example, case 2 did not undergo tracheostomy as it was believed he might recover quickly, and he was still able to participate in physical therapy and other care even with an endotracheal tube in place. However, we note that TPP might still be high (even off positive pressure ventilation) depending on respiratory drive and the patient's negative intrathoracic pressure during inspiration. Those who are delirious or have ARDS with very poor lung compliance may have a persistently high RR/drive. For example, Crotti et al (22) showed that increases in ECMO gas sweep could effectively reduce RR and drive in ECMO patients with chronic obstructive pulmonary disease or bridge to transplant, but not uniformly in those with ARDS. Thus, we believe minimizing time on positive pressure ventilation (with or without tracheostomy) may help mediate BAPF closure in select patients if their respiratory patterns (as a surrogate for TPP) and BAPF are closely monitored.
To our knowledge, there are no prior reports utilizing our ventilator strategy in medical (i.e., nonsurgical) patients with ARDS and BAPF. There is limited literature on utilizing ECMO for patients with traumatic or surgical bronchopleural fistulas, which is summarized in Table 3 (23)(24)(25)(26)(27). Similar to our cohort, ventilator settings were minimized post-ECMO to promote bronchopleural fistula closure. However, these studies have three notable differences. First, our population consists of patients with ARDS, while these studies only included patients with surgical complications resulting in bronchopleural fistulas. Second, the patients in these surgical case series required interventions to repair their BAPFs. Finally, we used venovenous ECMO for all our patients, while some patients in these series were placed on extracorporeal carbon dioxide removal or venoarterial ECMO. The standard therapy for BAPF with air leaks may not apply in patients with ARDS; thus, we have recommended the algorithm in Figure 2. Therapies for PALs such as endobronchial valves, stents, and surgical repair may not be tolerated in patients with ARDS due to their high oxygenation and ventilation requirements. Endobronchial valve placement may not be feasible, as multiple lobes may be affected by BAPFs since ARDS causes diffuse lung injury. Furthermore, these patients may tolerate neither the sequential balloon occlusion necessary to determine optimal endobronchial valve location nor having a lung segment unable to participate in gas exchange post-valve placement (28). However, during ECMO support, these therapies may be better tolerated. Only in case 1 were endobronchial valves considered; however, we decided to pursue a trial of very low TPP before attempting placement of a valve. Cases 2-4 did not have endobronchial valves considered due to the quick resolution of the BAPF post-ECMO and implementation of UPLV or lower settings. Case 4 was also not considered a candidate due to concerns about staff exposure to COVID-19 during the procedure. Finally, the use of blood patches for BAPF in patients with ARDS may not be feasible or safe since these critically ill patients often have a low baseline hemoglobin and would poorly tolerate infectious complications.
There are many unanswered questions. Importantly, the optimal PEEP, DP, and resulting TPP in patients with ARDS on ECMO are currently unknown. Because TPP includes spontaneous respiratory efforts, it may be important to also reduce spontaneous respiratory drive with sedation or even neuromuscular blockade while providing full ECMO support. Substantial respiratory drive that increases TPP may be a limiting factor to early extubation (with or without tracheostomy) while on ECMO support. We further have little evidence to guide the duration of UPLV (or even lower ventilator settings, as in our cases) with low TPP, when to rechallenge the lung with higher TPP, and the long-term outcomes in these patients. Longer courses of UPLV increase the inherent risks associated with mechanical ventilation and ECMO support, such as ventilator-associated pneumonia and bleeding. Importantly, in our series, the decision to place patients on ECMO was based not solely on the presence of BAPF but also on the severity of ARDS. Again, the risks and benefits of this approach have not been delineated. Further rigorous research in this area is needed, both small physiologic studies carefully measuring TPP and larger multicenter studies of ventilator management comparing hard outcomes.
CONCLUSIONS
In patients with ARDS, ECMO and standard UPLV can be used to promote closure of BAPF. However, even with UPLV, there are select cases with refractory PALs in which further decreases in TPP (DP and PEEP) and RR could be considered. | 2021-05-26T05:19:20.954Z | 2021-05-01T00:00:00.000 | {
"year": 2021,
"sha1": "bbdf068dc409e527d404dcc4cac3194649e3bb6a",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1097/cce.0000000000000393",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "bbdf068dc409e527d404dcc4cac3194649e3bb6a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
235128303 | pes2o/s2orc | v3-fos-license | Histopathological findings and clinicopathologic correlation in COVID-19: a systematic review
The severe acute respiratory syndrome Coronavirus-2 (SARS-CoV-2) pandemic has had devastating effects on global health and the worldwide economy. Despite an initial reluctance to perform autopsies due to concerns for aerosolization of viral particles, a large number of autopsy studies published since May 2020 have shed light on the pathophysiology of Coronavirus disease 2019 (COVID-19). This review summarizes the histopathologic findings and clinicopathologic correlations from autopsies and biopsies performed in patients with COVID-19. PubMed and Medline (EBSCO and Ovid) were queried from June 4, 2020 to September 30, 2020, and histopathologic data from autopsy and biopsy studies were collected based on the 2009 Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. A total of 58 studies reporting 662 patients were included. Demographic data, comorbidities at presentation, histopathologic findings, and virus detection strategies by organ system were collected. Diffuse alveolar damage, thromboembolism, and nonspecific shock injury in multiple organs were the main findings in this review. The pathologic findings emerging from the autopsy and biopsy studies reviewed herein suggest that, in addition to a direct viral effect in some organs, a unifying pathogenic mechanism for COVID-19 is ARDS with its known and characteristic inflammatory response, cytokine release, fever, inflammation, and generalized endothelial disturbance. This study supports the notion that autopsy studies are of utmost importance to our understanding of disease features and treatment effects, increasing our knowledge of COVID-19 pathophysiology and contributing to more effective treatment strategies.
The quality analysis revealed that 18 (31%) articles included in this review were of high quality, 25 (43%) were of moderate quality, and 15 (26%) were of low quality (Supplementary Table 1).
The demographic findings and the pathologic characteristics are summarized in Tables 1 and 2, respectively.
Gastrointestinal tract
The gastrointestinal tract (GI) is a potential target of SARS-CoV-2 due to ACE2 receptor expression by the GI mucosa [10,76]. GI symptoms, including diarrhea, nausea, vomiting, anorexia, and abdominal pain [11,77] have been described in 3-61.1% of COVID-19 patients [11,77,78] either at illness onset or during hospitalization [77]. Subjects with GI symptoms have a longer interval from illness onset to hospital admission, and symptoms become more pronounced with disease progression [78]. About 3% of patients present only with GI symptoms and without respiratory complaints [78][79][80].
The lower GI (LGI) tract is more likely to be involved than the upper GI (UGI) tract; thus, we report the autopsy findings separately. Of note, detailed postmortem histological examination of the GI tract is often lacking in the literature, owing to more focused descriptions of lung histopathology [20] and the challenges of autolytic changes. Viral RNA detected in GI tissue by RT-PCR generally correlates with disease severity and is more reliable than RNA detected in the stool [77].
Intestinal ischemic damage has been described during endoscopic and surgical procedures and confirmed with histologic examination [84][85][86]. Other histological features included apoptotic bodies, hobnail endothelial changes, and bizarre nuclear shapes resulting from direct viral CPE, identified by IHC for the nucleocapsid protein of SARS-CoV-2 [84].
Pancreas
High plasma levels of amylase or lipase have been reported in 9 of 52 (17%) patients [93,94], 6 of whom (67%) also exhibited moderate hyperglycemia [93]. Many COVID-19 patients have diabetes [95], but whether this is associated with death remains controversial [96]. Patients with alterations of pancreatic enzymes have a higher incidence of GI symptoms, more severe illness on admission, lower levels of CD3+ and CD4 + T cells, higher liver enzymes, and higher erythrocyte sedimentation rate compared to subjects without altered values [93].
The pancreas has not been commonly investigated in COVID-19 postmortem examinations. Currently available data refer to 6 studies with 44 patients, including 34 full autopsies, 3 MIAs, and 7 in situ dissections [26,30,31,39,82,83], with most reports showing no significant abnormalities [26,30,31,82]. Degeneration of a few pancreatic islet cells without abnormalities in the exocrine pancreas [83] and asymptomatic focal pancreatitis in 5 of 11 patients (45%) [39] have been reported. However, severe pancreatitis has rarely been reported in clinical studies [97]. An additional autopsy study published during the review process of this manuscript reports pancreatitis in 2 of 8 (25%) patients, 1 frankly necrotic-hemorrhagic and another with microscopic acute inflammation but without macroscopic abnormality [98]. These findings may represent a direct viral effect [93], consistent with ACE2 receptor expression in the exocrine and endocrine pancreas [94], an indirect effect of respiratory failure, or a harmful immune response induced by SARS-CoV-2 infection [93].
Skin

Skin examination has been reported in 16 autopsies (3 full autopsies and 13 MIAs) from 4 studies [44,82,83,91]. Many of these reports describe normal cutaneous layers and appendages, and mild perivascular lymphocytic infiltrate and petechiae [83,91] with negative RT-PCR [82]. Dermatitis and a hypercoagulative status, possibly directly related to viral infection, were reported in 1 study. A perivascular mononuclear infiltrate in the superficial dermis in 11 of 13 cases [44,83], endothelial changes and fibrin microthrombi in small vessels of the dermis and hypodermis in 3 cases [83], and purpura in 1 case [44] were also described. These autopsy findings are consistent with the cutaneous manifestations described clinically and histopathologically in survivors.
Coagulation alterations
Pulmonary embolism (PE) is a cause of death in a subset of COVID-19 patients. PE was described in 27 of 115 patients; 39 of the 115, including those with PE, had deep venous thrombosis [29,30,38,39,42].
Pulmonary thrombosis in small and medium-size pulmonary artery branches [39], as well as multi-organ thrombosis [42], was identified in 18 of 18 autopsies, of which 8 had associated pulmonary infarction [39]. Of note, 14 patients had received anticoagulation, suggesting that pulmonary thrombi formed despite treatment [39,40]. Similarly, no significant association was found between anticoagulation therapy and presence of thrombi, with 46% (22 of 48) of anticoagulated patients having large thrombi and 88% (42 of 48) having arteriolar and capillary microthrombi in another study [50].
Multisystem inflammatory syndrome in children (MIS-C)
Postmortem examination of an 11-year-old child with MIS-C demonstrated heart failure as the primary determinant of the fatal outcome, with myocarditis, pericarditis, and endocarditis [123]. In this case, SARS-CoV-2 was detected in heart tissue by RT-PCR and, purportedly, by TEM. In the fatal case of a 17-year-old, the inflammatory infiltrate present within the heart was eosinophil-rich [90].
Discussion
The pathophysiologic mechanism of COVID-19 is still poorly understood. Autopsies have significantly contributed to our knowledge of the pathologic derangement occurring with COVID-19 and provided evidence for current treatment strategies. It is apparent from the growing number of autopsy studies that SARS-CoV-2 direct effects are primarily limited to the lung, while the ensuing systemic disease occurs through indirect rather than direct effects. This discussion will focus on the main systemic aspects of the disease and hypothesize a unifying pathogenic mechanism.
The spectrum of respiratory involvement in COVID-19 is broad, ranging from asymptomatic infection to flu-like symptoms to variably severe pneumonia, multi-organ failure, and death. In cases with a progressive course, acute respiratory distress syndrome (ARDS) develops in 31-41.8% of COVID-19 patients. Mortality among those who develop ARDS is 52.5-93% [18,124]. The clinical definition of ARDS is based on the Berlin criteria [125,126], and mild, moderate, and severe ARDS are associated with increasing mortality (27%, 32%, and 45%, respectively). Most patients meeting clinical criteria for ARDS have histologic features of DAD [127][128][129]. ARDS may be caused by infection/sepsis, shock, trauma, aspiration, transfusion, drug reaction, oxygen toxicity, vaping-associated pulmonary injury, and connective tissue disease, but the pathognomonic histologic lesion is the same: hyaline membranes occurring as a response to alveolar and endothelial injury, causing capillary endothelial damage and leakage of plasma into the alveolar space [130]. This exudate, admixed with cellular debris, lines the alveolar surface, impeding regular gas exchange. DAD progresses from an acute (exudative) phase characterized by hyaline membranes into an organizing/proliferative phase, lasting 1-3 weeks and characterized by interstitial proliferation of fibroblasts and myofibroblasts, type 2 pneumocyte hyperplasia, and squamous metaplasia. If the patient survives the acute and proliferative stages, there may be resolution, stabilization of the process, or progression to the fibrotic (chronic) phase with collagen deposition, architecture remodeling, and honeycomb lung. Residual functional impairment is the consequence of continued interstitial fibrosis and airspace remodeling. In typical DAD caused by various etiologies, microvascular thrombi are part of the spectrum of pathologic lesions, and extensive vascular remodeling may be seen [130,131]. A contribution of antibody-dependent enhancement (ADE) is being actively investigated in COVID-19. A role for ADE has been invoked in SARS-CoV and MERS infection, among other viral infections. Immune complexes resulting from ADE mechanisms could contribute to DAD [132].
In our study, 152 of 263 (58%) patients had pathologic evidence of interstitial pneumonia (morphologically defined by interstitial lymphocytic infiltrate), and 50 (19%) had acute pneumonia or bronchopneumonia. Aspiration or superimposed bacterial infection was likely the underlying etiology of acute pneumonia, based on the neutrophilic pattern of infiltration. Characteristic morphologic patterns of viral injury in the lung include interstitial inflammation, DAD, and necrotizing bronchitis/bronchiolitis [53]. In our study, 230 of the total 263 patients had DAD, and 152 had interstitial lymphocytic infiltrate/pneumonia, described as sparse in some cases. Previous studies have shown that bilateral pneumonia is both a cause and a mimicker of ARDS. In a pre-COVID-19 ARDS autopsy study, 58% of patients had pneumonia, while 42% had DAD only without pneumonia; 36% of cases of clinically suspected ARDS were histologically diagnosed as pneumonia, while 20% of cases clinically diagnosed as pneumonia had only DAD on autopsy [133]. Early ARDS studies show that sepsis with multi-organ failure is the cause of death in 84% of patients, while respiratory failure accounts for only 16% of fatalities [134][135][136][137]. A systemic inflammatory response with fever, elevated inflammatory markers (e.g., D-dimer, ferritin), and release of the proinflammatory cytokines tumor necrosis factor, interleukin (IL)-1, IL-6, and IL-8 exceeding the body's homeostatic mechanisms may produce DAD and a systemic response independently of the presence of pneumonia, similar to that seen in some severe cases of COVID-19.
Thromboembolism is another emerging feature of COVID-19. Ackermann et al. compared lung autopsy findings in COVID-19, influenza A (H1N1), and uninfected controls. Although both cohorts shared similar DAD findings, COVID-19 patients had severe endothelial injury with disrupted endothelial cell membranes and more arterial but less venous vascular thrombosis than influenza patients. COVID-19 and influenza patients had greater expression of lung ACE2 compared to uninfected controls and more ACE2-positive endothelial cells and endothelial injury [25]. Heterogeneous disease stages and treatments applied to the two cohorts make interpretation of these results complex [138]. The issue of thromboembolism versus thrombosis in COVID-19 has been debated in the recent literature [139][140][141]. Lax et al. distinguish thrombosis from thromboembolism based on subsegmental artery involvement, distribution of thrombotic material in multiple vessels, associated endothelialitis, subtotal or total filling of the occluded vessels, and distribution of thrombotic events in areas of hypostasis, as opposed to the randomly distributed pattern of thromboembolism [39,141]. Micro- and macrothrombosis of pulmonary arteries have been described with endothelialitis [51]. It is apparent that vascular thrombosis is a frequent phenomenon in COVID-19 autopsies. However, thrombosis secondary to local inflammation frequently accompanies DAD secondary to diverse etiologies [130,138,142,143]. Pulmonary infarcts are also seen with intubation in ARDS [140,144]. These findings suggest that pulmonary microthrombi may not be specific to COVID-19 but are associated with ARDS. Thus, a definitive pathogenetic distinction between coagulopathy, endotheliopathy, and vasculitis, and whether these are a result of direct viral effect or systemic inflammation, does not emerge from these autopsy studies. In autopsies from COVID-19 inpatients, untreated COVID-19 decedents in the community, and SARS-CoV-2-negative DAD controls, all but 1 untreated community COVID-19 patient with focal fibrinous pneumonia had DAD. COVID-19 inpatients and community patients had similar lung histopathologic findings at autopsy, suggesting that DAD is a viral and not an iatrogenic injury [145]. DAD in COVID-19 patients was histologically indistinguishable from DAD from other causes. Thrombosis was similar in SARS-CoV-2-positive patients and negative controls [145]. COVID-19, 2003 SARS, and H1N1 showed DAD in 88%, 98%, and 90% of patients, respectively, while 57% of COVID-19, 58% of SARS, and 24% of H1N1 influenza patients had pulmonary microthrombi [146], suggesting that these findings are not specific for COVID-19. Persistence of multisystemic thrombosis at autopsy in a subset of COVID-19 patients undergoing prophylactic inpatient anticoagulation suggests that microthrombosis may be secondary not only to a generalized endothelial disturbance but also to an inflammatory response with cytokine release, fever, and inflammation, which is known to be characteristic of ARDS [64,147,148]. Additional larger comparative studies are required to further clarify this issue, shed light on the use of prophylactic versus therapeutic anticoagulation and the associated risk of complications, and prevent the systemic inflammatory response before the ineluctable chain of events caused by the cytokine storm is initiated.
Nonspecific shock injury in multiple organs was the main finding in this review. This injury is described in the GI system and kidney, occasionally accompanied by vasculitis or thromboembolic events. A prospective study of 701 patients admitted to a COVID-19 tertiary hospital in Wuhan showed that acute kidney injury occurred in 5.1% of patients [149]. This is reflected by the nearly global finding of acute tubular injury in our study. However, in the same study, 43.9% also had proteinuria, and 26.7% had hematuria [149]. African ancestry is a known risk factor for kidney disease, resulting from G1 and G2 risk alleles in the APOL1 gene [150]. Collapsing glomerulopathy is strongly associated with APOL1 risk alleles [151]. Although the pathogenic role of APOL1 in kidney diseases is still unclear, collapsing glomerulopathy in the setting of APOL1 risk alleles homozygosity has been associated with HIV nephropathy, lupus nephritis, membranous glomerulopathy, interferon and pamidronate treatment [152][153][154][155][156]. Glomerular and inflammatory conditions, such as the COVID-19 cytokine storm, could result in a "second hit" injury superimposed on genetic susceptibility due to APOL1 risk variants [157,158].
Hepatic injury in COVID-19 may be multifactorial, related to direct viral CPE, hypoxic damage, cytokine storm, sepsis, or drug hepatotoxicity [89,[159][160][161]. Moreover, patients may have underlying chronic liver disease, such as cirrhosis, hepatitis, or cancer, which may increase the risk of SARS-CoV-2 infection with immunosuppression [89,159] and contribute to more severe liver damage, particularly with concurrent hypoxia [161]. Therefore, it is important to distinguish acute pathologic changes attributed to direct SARS-CoV-2 infection from chronic underlying diseases potentially predisposing to a fatal COVID-19 course and nonspecific secondary effects related to hypoxia, hypotension, or sepsis [39].
Acute myocardial injury, defined by elevated cardiac troponin levels, has been clinically described in 5-38% of COVID-19 patients [162,163]. Patients with comorbidities or cardiovascular disease are more likely to present with cardiac symptoms. Myocardial injury is associated with a greater need for mechanical ventilation and higher in-hospital mortality [164]. Although a clinically significant component of COVID-19, myocardial injury appears to be limited in autopsy studies. Myocarditis meeting histopathological Dallas criteria was identified in only 27 of 191 patients (14%) in our study. Similarly, limited myocardial injury not meeting Dallas criteria, e.g., individual myocyte necrosis, focal myocardial lymphocytic infiltrate, or "borderline myocarditis" (the latter two forms representing the same entity but often separately described in a few studies), was identified in 16 (8%), 8 (4%), and 5 (3%) patients, respectively. SARS-CoV-2 genome in heart tissue was detected in 60% of our reviewed cases, but IHC for the spike protein was negative in 1 tested patient. Therefore, systemic inflammation secondary to cytokine release, endothelial inflammation, and associated microvascular thrombosis leading to multi-organ failure may be the main pathogenetic mechanisms associated with COVID-19 myocardial injury [164]. The long-term effects of COVID-19 myocardial injury are yet to be established. However, in pre-COVID studies, mortality for biopsy-proven viral myocarditis ranged from 19% at 5 years to 39% at 10 years [165,166].
Two patterns of cutaneous COVID-19 manifestations have been identified. Exanthems, such as rash, urticarial eruptions, and chickenpox-like vesicles, tend to occur early in the disease course and sometimes precede other systemic manifestations [114,173]. Vascular lesions, such as chilblains, livedoid eruptions, and purpura, usually appear several days after the onset of general symptoms or even in their absence [114,173]. The first group of lesions could represent an early response to the initial viral replication or the cytokine storm. In contrast, the second could result from a delayed cell-mediated immunologic response to the virus or an unbalanced coagulation state tilted towards a prothrombotic microenvironment with microthrombi formation in the dermal vessels. The early cutaneous manifestations could aid in the prompt identification of asymptomatic patients, improve epidemiological tracking, and promote timely therapeutic management. The skin vascular lesions might represent a marker of systemic vessel injury. Therefore, these patients' prognosis could improve after administration of anticoagulant and anti-inflammatory therapy.
In children, COVID-19 symptomatology is typically milder [174]. However, in April 2020, several children with no prior medical history presenting with fever, cardiovascular shock, and hyperinflammation syndrome were reported [175][176][177][178][179][180]. In May 2020, the CDC issued an advisory to report cases meeting criteria for multisystem inflammatory syndrome in children (MIS-C) [181], defined by fever and severe illness requiring hospitalization with multisystem (≥2) organ involvement, laboratory evidence of inflammation, and recent infection or exposure to SARS-CoV-2 within four weeks prior to symptoms onset in individuals aged <21 years. MIS-C is rare, with an estimated incidence of 2 per 100,000 [182]. As of September 17, 2020, a total of 935 confirmed cases with 19 deaths were reported in the US.
There are clinical and pathophysiologic similarities between MIS-C and Kawasaki disease (KD), a febrile illness affecting young children characterized by vasculitis that can result in coronary artery aneurysms. In two series of MIS-C, 73-78% of children were previously healthy, with the most common comorbidities being obesity and asthma [183,184]. Similar to KD, MIS-C is a post-infectious phenomenon related to IgG-ADE rather than the result of acute viral infection [178,[185][186][187]. A predominantly gastrointestinal presentation (92%) [187] is frequent, while respiratory symptoms (70%) are often due to severe shock, and ARDS is not a prominent feature [184]. Acute cardiac decompensation [188] and cardiovascular system involvement were present in 80% of patients, with coronary artery aneurysms in 8% [183]. In contrast, MIS-C and KD differ in several aspects, including ethnicity, age of onset, and clinical and immunologic manifestations. While ~80% of children with KD are predominantly of Asian descent and younger than five years (peak incidence around 10 months of age), children with MIS-C are predominantly of Hispanic and Afro-Caribbean descent and have a median age of 8.3-11 years [183,184]. In KD, coronary artery abnormalities are more frequent than in MIS-C, occurring in 15-25% of patients and decreasing to 5% with prompt therapy, while they occur in only 1% of MIS-C cases. Left ventricular dysfunction and shock are also more likely in the latter. Immunologically, KD has an interleukin-17A-mediated hyperinflammation, which is absent in MIS-C, and higher levels of biomarkers associated with arteritis and coronary artery disease. Furthermore, MIS-C has fewer naïve CD4+ T cells and T follicular helper cells and increased central and effector memory subpopulations compared to KD [189]. MIS has also been reported in adults recently, although as a rare occurrence [190][191][192].
Putative SARS-CoV-2 virions or viral-like particles suggestive of SARS-CoV-2 have been purportedly identified by TEM in several recent reports. These highly contested reports [56,102,[193][194][195][196][197][198][199][200] often describe physiological structures and organelles, such as clathrin-coated vesicles, multivesicular bodies, rough endoplasmic reticulum, and unidentified subcellular structures. This issue highlights the challenging task of identifying SARS-CoV-2 virions by TEM due to unfamiliarity with coronavirus morphology and morphogenesis, artifacts of delayed postmortem fixation, lack of detection due to possible viral clearing, and the sensitivity of the methodology, as reviewed by Hopfer et al. and Bullock et al. [102,201].
Limitations
Limitations of this study include its focus on COVID-19 histopathologic injury rather than laboratory, imaging, and macroscopic findings. In addition, disparate reporting methodologies emerged from the analyzed studies, accounting for vastly heterogeneous data in the literature. We chose to include only peer-reviewed articles and exclude pre-prints. Thus, some articles may have been missed if the initial pre-print was subsequently published. Similarly, given the fast pace of recent publications emerging on this topic, articles published during the editing and review stages of this manuscript were not included.
Conclusions
In summary, SARS-CoV-2 viral protein has been identified in situ only in a small proportion of extrapulmonary organs in a subset of patients, supporting a limited extrapulmonary direct effect of the virus. The predominant pathologic findings emerging from this review suggest that a unifying mechanism underlying the systemic clinical manifestation of COVID-19 is the characteristic inflammatory response of ARDS with cytokine release, fever, inflammation, generalized endothelial disturbance, and secondary multisystemic shock-injury. Although very preliminary, an additional contributing factor could be a detrimental immunologic response, such as ADE, which is being actively investigated in COVID-19.
Autopsies have reaffirmed their value to public health, aiding our understanding of novel diseases. Though limitations due to postmortem autolytic changes may hinder some studies, the increasing autopsy literature supports a wealth of information contributing to our understanding of COVID-19.
Data availability
All data generated or analyzed during this study are included in this published article and its supplementary information files.
Author contributions GAG and SC performed study concept, design, and development of methodology; GAG, SC and MEK provided acquisition, analysis, and interpretation of data, writing, review, and revision; SEM, RE, JJ, and GS provided review and revision; AM and GE provided technical and material support. All authors read and approved the final paper.
Compliance with ethical standards
Conflict of interest GE is a microscopy and imaging specialist at Hunt Optics & Imaging, Inc, Pittsburgh, PA. All other authors have no relevant disclosures.
Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. | 2021-05-24T13:46:32.898Z | 2021-05-24T00:00:00.000 | {
"year": 2021,
"sha1": "0ea1a78921bf63bc90db2f4d6b8b30fd56255fce",
"oa_license": null,
"oa_url": "https://www.nature.com/articles/s41379-021-00814-w.pdf",
"oa_status": "BRONZE",
"pdf_src": "Springer",
"pdf_hash": "0ea1a78921bf63bc90db2f4d6b8b30fd56255fce",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
264530708 | pes2o/s2orc | v3-fos-license | Trans-scaphoid trans-lunate trans-triquetral volar perilunate dislocation: A case report
Introduction Translunate volar perilunate dislocations are extremely rare, with few documented cases. Only eight instances of volar translunate perilunate dislocation have been described in the literature. This report presents a case successfully treated with early reduction and internal fixation, leading to a very satisfying outcome at 9 months of follow-up. Case report A 20-year-old man presented with left wrist pain and swelling after a fall from a vehicle at 50 km/h, landing on an outstretched right hand. Radiographs and a CT scan identified scaphoid, lunate, and triquetral fractures, along with a volar perilunate dislocation. Surgical treatment was performed through a dorsal approach, including scaphoid and lunate fracture fixation, triquetral avulsion repair, and lunate stabilization with K-wires. The wrist was immobilized for 6 weeks, and intense physical therapy was started after K-wire removal. At 9 months of follow-up, positive results were seen clinically and radiologically. Discussion A perilunate fracture-dislocation includes dislocation of the carpus from the lunate. Johnson divided these injuries into lesser arc (pure ligamentous) and greater arc (fracture-related) injuries. Bain introduced the translunate arc concept in a case series of three patients, depicting a path through the lunate causing lunate fracture alongside perilunate injury. Treatment focuses on lunate reduction and fixation, combined with addressing greater and lesser arc injuries. Achieving successful lunate realignment and fixation is challenging. However, early diagnosis, prompt reduction, rigid fixation, and repair of both arc injuries can lead to optimal functional recovery.
Introduction
Perilunate dislocations are rare but severe injuries, with the majority being dorsal dislocations [1]. Lunate fracture is also a rare injury; it is seldom an isolated injury and is often associated with other carpal fractures and ligament injuries [2,3]. Translunate palmar perilunate dislocations are extremely rare, and only very few case reports are available in the literature. In a series of 157 cases of perilunate dislocation, there were no lunate fractures reported [1]. According to Anil K. Bhat et al. [4], only eight cases of volar translunate perilunate dislocation have been described in the literature. We present a case of this rare injury successfully treated by early reduction and internal fixation, which led to a good outcome at 9 months of follow-up.
Case report
A 20-year-old man presented to us with pain and swelling of the left wrist following a fall from a vehicle at 50 km/h onto an outstretched right hand. He presented to the hospital one day after the accident with an isolated right wrist injury. Clinical examination revealed a diffusely tender, swollen wrist with a palmar skin abrasion, reduced movement of the wrist, and a normal median nerve. Radiographs of the wrist showed a volar perilunate dislocation associated with fractures of the scaphoid, the lunate, and the triquetrum (Fig. 1). A computed tomography (CT) scan performed after a successful reduction with wrist flexion detected a comminuted fracture of the palmar pole of the lunate and an avulsion fracture of the triquetrum, in addition to the volar perilunate dislocation and the middle-third scaphoid fracture (Fig. 2). Surgical treatment was performed three days after the initial injury, using a dorsal approach through the interspace between the third and fourth compartments. A longitudinal incision was made in the dorsal capsule, followed by subperiosteal dissection to ensure sufficient exposure of the carpal bones. The scapholunate interosseous ligament was intact; after exposing the scaphoid fracture and the triquetral avulsion, the scaphoid fracture was reduced and fixed with a 3.5 mm Herbert screw, and the lunate fracture was stabilized with a 2.5 mm Herbert screw; the triquetral avulsion was then repaired using a mini anchor, followed by reduction and K-wire fixation to ensure proper lunate alignment (Fig. 3). The wrist was immobilized in a cast for a total of 6 weeks with gentle finger range-of-motion exercises as tolerated. At 6 weeks, the cast and K-wires were removed, and mobilization was started. The patient was kept under regular follow-up. He achieved dorsiflexion of 0 to 30 degrees and palmar flexion of 0 to 35 degrees at 3 months post-surgery. At 9 months of follow-up, the wrist was painless, and the clinical result was excellent after intense physical therapy. He returned to regular work with 85% of grip strength; he reached dorsiflexion of 0 to 80 degrees, palmar flexion of 0 to 70 degrees, radial deviation of 20 degrees, and ulnar deviation of 50 degrees. The Mayo wrist score was 85 (good) (Fig. 4). Radiology confirmed satisfactory alignment and fixation (Fig. 5).
Discussion
A perilunate fracture-dislocation includes dislocation of the carpus from the lunate. This injury involves various ligamentous and fracture patterns, which have been documented and described. Johnson divided injuries into lesser arc (pure ligamentous) and greater arc (with carpal fracture) injuries [3]. In greater arc injuries, the damage extends through a larger arc, resulting in fractures of the bones surrounding the lunate, such as the radial and ulnar styloid processes. Lesser arc injuries involve purely ligamentous damage to the articulations immediately surrounding the lunate. The combined, rare condition of translunate perilunate fracture-dislocation is not included in the instability patterns described by Johnson and Mayfield or in the classification of lunate fractures described by Teisen and Hjarbaek [5].
Most of the cases described in the literature were combinations of greater arc and translunate arc injury, as was our case. Bain [6] introduced the translunate arc concept in a case series of three patients. The arc is complementary to the greater and lesser arcs; it includes a path of injury through the lunate, producing a lunate fracture with associated perilunate injury (fracture, dislocation, or subluxation). Graham [7] introduced an inferior arc, in which the path of injury is through the radiocarpal joint (Fig. 6). The treatment principles involved in the management of translunate arc injuries integrate simply with Johnson's original classification system [8]. The management principles of a translunate arc injury should be directed toward the reduction and fixation of the lunate, with concurrent repair of greater arc injuries (reduction and fixation of fractures around the lunate followed by ligamentous repair) and lesser arc dysfunctions (direct repair of the stabilizing ligaments of the lunate).
Perilunate dislocations and subluxations of the wrist are rare injuries, and up to 25% of cases may be initially missed [9]. Palmar translunate perilunate dislocations are extremely rare and typically the result of high-energy trauma. Consequently, these injuries are more severe, more difficult to treat, and more likely to have a poor prognosis. Among previously documented cases, the functional results were poor in the majority, with diverse complications such as carpal instability and arthrosis leading to carpectomy or wrist arthrodesis. As for our patient, despite the short follow-up duration, the results were good.
Attaining successful lunate realignment and fixation might be challenging.However, early diagnosis, prompt reduction, rigid fixation, and repair of both lesser arc and greater arc perilunate injuries can result in optimal functional recovery.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Fig. 2. Translunate fracture with perilunate injury (dislocation with trans-scaphoid fracture and an avulsion of the triquetrum) after reduction. A: Axial CT scan of the right wrist demonstrating a scaphoid fracture and a triquetral avulsion fracture. B: Sagittal CT scan of the right wrist demonstrating a comminuted translunate fracture.
Fig. 3. Postoperative radiographs of the left wrist. The scaphoid and lunate fractures were fixed using Herbert screws, followed by reduction and K-wire fixation to ensure lunate alignment.
Fig. 4. Clinical assessment at 9 months follow-up, showing a good range of motion and grip strength.
Fig. 5. Radiological assessment showing fracture consolidation and good alignment of the carpus at A: 6 months follow-up; B: 9 months follow-up.
Fig. 6. Illustration showing the patterns of injury progression in cases of greater, lesser, translunate, and inferior arc injuries.
"year": 2023,
"sha1": "6b1adbe0c42c7bb8849fed1ec0585b796abf94ea",
"oa_license": "CCBYNCND",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "2f067750fe6df17038e9c0ffd4607656e6a5ca0d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
251182199 | pes2o/s2orc | v3-fos-license | Recognition Method of Massage Techniques Based on Attention Mechanism and Convolutional Long Short-Term Memory Neural Network
Identifying the massage techniques of the masseuse is a prerequisite for guiding robotic massage. It is difficult for current human action recognition algorithms to recognize multiple consecutive massage maps that form a time series. To solve this problem, this paper proposes a recognition method for massage techniques that combines a convolutional neural network, a long short-term memory neural network, and an attention mechanism. First, the pressure distribution massage maps are collected by a massage glove, and the data are augmented by a conditional variational auto-encoder. Then, the features of the massage map group in the spatial domain and the timing domain are extracted through the convolutional neural network and the long short-term memory neural network, respectively. The attention mechanism is introduced into the neural network, giving each massage map a different weight value to enhance the network's extraction of data features. Finally, a massage haptic dataset is collected by a massage data acquisition system. The experimental results show that a classification accuracy of 100% is achieved. The results demonstrate that the proposed method can identify sequential massage maps, mitigate network overfitting, and effectively enhance the network's generalization ability.
Introduction
The fast pace of life and high work pressure lead to common sub-health problems, such as backache and lack of vitality. Massage is a physical naturopathic therapy that relieves fatigue and pain and improves sub-health conditions [1]. At present, the demand for masseuses at home and abroad is strong, but the supply gap is large and the technique level is uneven. With the development of robot technologies, robotic massage will come to replace human massage services. To realize robotic massage, it is crucial to systematically understand the massage techniques of professional masseurs, explore the features of these techniques, and provide a reference for robots to reproduce them.

A tactile sensor is required to obtain massage technique information from a masseuse; it is a sensor used to imitate the tactile function of animals and humans. It can sense the force between the sensor and the contact object, as well as the shape and temperature of the detected object. According to the working principle, tactile sensors can be divided into piezoresistive, capacitive, piezoelectric, inductive, photoelectric, and other types; currently, the first three are the most commonly used. Tactile sensors have been widely applied in fields such as teaching [2,3], medical care [4], and virtual reality [5]. Reference [6] proposed a capacitive tactile sensor for collecting hand information when manipulating clothing, but the sensor has a small range and few touch-sensing units, so it cannot reflect detailed information about the operator's hand. Reference [7] proposed a capacitive tactile sensor whose electrodes are made using flexible circuit board technology; air is used as the sensor dielectric material, with a layer of silica gel serving as a spacer.
A tactile sensor is required to obtain massage technique information about a masseuse; it is a sensor used to imitate the tactile function of animals and humans. It can sense the force between the sensor and the contact object, the shape and temperature of the detected object, etc. According to the working principle, tactile sensors can be divided into piezoresistive, capacitive, piezoelectric, inductive, photoelectric, etc. Currently, the first three types of tactile sensors are commonly used. Tactile sensors have been widely used in many fields, such as teaching [2,3], medical care [4], virtual reality [5] and other fields. Literature [6] proposed a capacitive tactile sensor for collecting hand information when manipulating clothing, but the sensor has a small range and few touch-sensing units, so it can't reflect the detailed information of the operator's hand. In the literature [7], a capacitive tactile sensor is proposed. Its electrodes are made using flexible circuit board technology. Air is used as the sensor dielectric material, and a layer of silica gel spacer that compared with other methods, CNN is more sensitive to static gestures in complex environments, but recognition accuracy is low for gestures with rotation. Reference [26] proposed a system for human behavior recognition based on wearable smart devices, using a two-dimensional convolutional neural network CNN to recognize 9 human behaviors in a home environment, and achieved good recognition results. Reference [27] developed a tactile glove that can collect normal force. The volunteer wears the tactile glove and grabs the object to obtain a visual image of the human hand's normal force and uses the Resnet-18 structure network to recognize 6 kinds of grasping postures. A recognition accuracy of 89.4% is obtained. Reference [28] fuses the convolutional layer and the long-short-term memory unit to the long-term memory convolution layer. This layer and the standard long-short-term memory unit are used in series to extract the spatial and timing features of the acceleration sensor. The experimental results show that this method achieves higher accuracy with a shorter sliding window for data acquisition. However, the recognition of time series-related data in deep learning is mainly for speech recognition, and the recognition of images is mainly for single-frame images. A massage technique often corresponds to multiple frames of images, so it is necessary to design a network structure that can extract the timing features of sequential massage maps. Therefore, a 2D convolutional neural network and a long-short-term memory neural network are combined. The long-short-term memory neural network used for speech feature extraction is applied to extract the timing features of massage map groups. The channel attention mechanism is improved to the frame attention mechanism, and it is introduced between the convolutional neural network and the recurrent neural network. The convolution neural network is used to automatically extract the deep abstract spatial domain features of each frame of the massage maps, and the long-and short-term memory network is used to extract the timing features between frames in each group of tactile data. Finally, the sensor data are sent to the Soft Max classifier to realize the recognition of the massage techniques. In addition, using traditional methods to make a massage dataset takes a long period and a lot of effort, so it is necessary to use an algorithm to expand the dataset.
In view of the above problems, this paper conducts research on massage technique recognition based on flexible tactile sensors and deep learning. The tactile sensor integrated on the glove is used to collect massage information from the masseuse, the sensor data are preprocessed and expanded, and a neural network is constructed to classify massage maps with a time series. The contributions of this paper are as follows: (1) Design and manufacture a dot-matrix flexible tactile sensor that can be integrated on gloves, together with its data acquisition system, to collect detailed massage information from the masseuse's hand; (2) Realize key frame extraction of the massage map group based on the frame difference method and use the conditional variational auto-encoder to expand the key-frame massage data, thus completing the preprocessing of the massage data and the expansion of the dataset; (3) Combine the convolutional network and the recurrent neural network to establish a neural network structure that can be trained on sequential massage map groups. The channel attention mechanism is improved into a frame attention mechanism and introduced into the network to learn the weight of each frame of tactile data autonomously, enhancing the network's ability to extract features from the data. A massage technique identification experimental system is then built, identification tests are carried out, and massage technique features are extracted, providing a solid foundation for guiding robotic massage applications.
The remainder of this paper is organized as follows. Section 2 proposes the massage data acquisition system and the massage haptic dataset. The neural network structure for massage technique recognition is presented in Section 3. Section 4 describes the experiment and the results. The conclusions are presented in Section 5.
Tactile Sensor and Data Acquisition System
Circular copper sheets with a diameter of 5 mm, as well as row and column electrodes, are deposited on the polyimide substrate by the flexible circuit board process, and electrodes are drawn out with a flexible circuit board connector to make the upper and lower electrode layers of the flexible sensor. The conductive double-sided tapes are pasted on the Velostat material (developed by Custom Materials) as a piezoresistive material. Then, the piezoresistive material is cut out with a punch into a number of rounds with a diameter of 6 mm. They were pasted on the circular copper sheet of the lower electrode sheet. The upper electrode sheet and the lower electrode sheet are integrated with double-sided tape to fabricate a flexible tactile sensor with a sandwich structure. In this way, sensors with sensor units of 8 × 8 and 2 × 11 are fabricated and integrated into the palm of the fabric glove and the first knuckle region of the five fingers by sewing to make massage gloves. The physical map of the massage gloves is shown in Figure 1a.
The flexible tactile sensor integrated on the glove is connected to the multiplexer through the FPC connectors and the FPC cables, and then connected to the voltage divider circuit. The main control chip is the STM32, which controls the on-off of the multiplexer and combines the ADC function to scan the voltage of each sensing unit one by one and send it to the upper computer. Matlab visualizes the force of each sensing unit of the sensor. The force of the sensing unit from small to large corresponds to the color from cold to warm, and finally, the pressure distribution map of the tactile sensor is obtained.
Massage Haptic Dataset
After completing the design of the massage data acquisition system, it is necessary to first collect a pressure distribution map of the massage techniques. Second, the key frames of the massage map groups are extracted by the frame difference method, and the redundant frames are discarded. Finally, a neural network is built based on the principle of the conditional variational auto-encoder to expand the massage map groups to complete the production of the dataset.
Massage Data Collection
Three types of massage techniques are designed: clockwise rubbing against the human scapula with the palm, counterclockwise rubbing with the palm against the human scapula, pressing with the five fingers in turn on the human scapula, and rubbing the scapula back and forth once with the palm.
The ADC clock frequency of the lower computer of the data acquisition system is configured as 9 MHz, and the sampling period is 55 cycles. The lower computer is connected to the upper computer through a UART; the upper computer visualizes and saves the collected voltage data of the sensing units, and the sampling frequency of the visualized tactile frames is about 12 Hz.
Two volunteers were invited to collect massage data. One of the volunteers wore the designed massage glove on the right hand and massaged on the other volunteer's scapula according to the designed three types of massage techniques to collect data. Each type of massage technique was repeated 100 times, and 100 sets of data were collected for each type of massage technique. Each set of data contained 60 frames of visualized massage maps. The massage data collection process diagram is shown in Figure 3.
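For orientation, the corpus described above can be pictured with the array layout below. The snippet only illustrates the shapes implied by the protocol (3 techniques, 100 groups per technique, 60 frames per group, an 8 × 8 palm array and 2 × 11 finger units); packing the finger units into a single array is an assumption.

```python
import numpy as np

N_TECHNIQUES = 3        # the three designed massage techniques
GROUPS_PER_TECH = 100   # repetitions collected per technique
FRAMES_PER_GROUP = 60   # visualized massage maps per repetition
PALM_SHAPE = (8, 8)     # palm sensing units
FINGER_SHAPE = (2, 11)  # finger sensing units (assumed as one array)

def empty_corpus():
    """Allocate arrays matching the collection protocol in the text."""
    palm = np.zeros((N_TECHNIQUES, GROUPS_PER_TECH, FRAMES_PER_GROUP, *PALM_SHAPE))
    fingers = np.zeros((N_TECHNIQUES, GROUPS_PER_TECH, FRAMES_PER_GROUP, *FINGER_SHAPE))
    labels = np.repeat(np.arange(N_TECHNIQUES), GROUPS_PER_TECH)
    return palm, fingers, labels

palm, fingers, labels = empty_corpus()
print(palm.shape, fingers.shape, labels.shape)
# (3, 100, 60, 8, 8) (3, 100, 60, 2, 11) (300,)
```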
Massage Data Process
The massage data collection system is used to collect data for the above three massage techniques. Because the sensor sampling frequency is high, the initially collected sequences contain many redundant frames. Feeding them directly to the neural network would greatly increase the number of training parameters, which may cause overfitting and reduce recognition accuracy. Therefore, the frame difference method is first used to compute the pixel difference between every two adjacent frames of each massage map group, and the 20 frames with the largest adjacent-frame differences are extracted as key frames.
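A minimal sketch of the key-frame extraction described above, assuming each collected set is a (60, H, W) array of massage maps; the function name is ours, not from the paper's code.

```python
import numpy as np

def extract_key_frames(frames, n_key=20):
    """Keep the n_key frames with the largest pixel difference to their
    predecessor (frame difference method); redundant frames are discarded."""
    diffs = np.abs(np.diff(frames, axis=0)).sum(axis=(1, 2))  # 59 adjacent diffs
    idx = np.argsort(diffs)[-n_key:]      # frames following the biggest changes
    idx = np.sort(idx) + 1                # +1 because diff[i] belongs to frame i+1
    return frames[idx]

key = extract_key_frames(np.random.rand(60, 8, 8))
print(key.shape)  # (20, 8, 8)
```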
In deep learning, overfitting often occurs when the training data are not large enough and their features are learned over-repetitively. Overfitting results in poor generalization and poor prediction on new data. Since obtaining a large amount of new data is usually costly, the collected raw data are augmented algorithmically. The key frames of the massage haptic data can be augmented by a neural network built on the principle of a variational auto-encoder [29]; Figure 4 is its schematic diagram. The massage data samples {X1, ..., Xn} are described by X. Ideally, the distribution p(X) of X could be obtained from {X1, ..., Xn}, and all massage data samples, including {X1, ..., Xn}, could be drawn from p(X). Since this is impossible in practice, an additional latent variable z is introduced, and the posterior distribution p(Z|X) is assumed to be a standard normal distribution. For any given sample Xk, a distribution p(Z|Xk) is assumed to exist, and the distribution p(X) of the original massage tactile data is transformed into a distribution of x generated by z, p(x) = Σ_z p(X|Z) · p(Z), so that Zk can be sampled from the distribution of z and decoded back to Xk. The two parameters µ and σ² of the normal distribution p(Z|X) are obtained by fitting a neural network. The variational auto-encoder becomes a conditional variational auto-encoder by modifying the loss function of the network that fits the normal-distribution parameters so that the mean of the normal distribution is close to the given condition.
The key frames of the massage map group extracted by the frame difference method are sent to the conditional variational auto-encoder, and the original massage data can be greatly expanded to create new massage tactile data that are close to the original data by sampling and reverting the latent variables.
Each frame of each massage map group is taken as an original sample and sent to the variational auto-encoder network for training, and the network outputs new, expanded images. The original 100 sets of massage data for each massage technique are expanded into 900 sets. The original and expanded data of the first massage technique are shown in Figure 5.
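A minimal PyTorch sketch of the (conditional) variational auto-encoder used for the augmentation. Layer sizes, the flattened 8x8 map size, and the class count are illustrative assumptions; the paper does not spell out its exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CVAE(nn.Module):
    def __init__(self, x_dim=64, z_dim=8, n_cls=3):
        super().__init__()
        self.enc = nn.Linear(x_dim + n_cls, 128)
        self.mu, self.logvar = nn.Linear(128, z_dim), nn.Linear(128, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim + n_cls, 128), nn.ReLU(),
                                 nn.Linear(128, x_dim), nn.Sigmoid())

    def forward(self, x, c):                      # c: one-hot technique label
        h = F.relu(self.enc(torch.cat([x, c], dim=1)))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterize
        return self.dec(torch.cat([z, c], dim=1)), mu, logvar

def vae_loss(x_hat, x, mu, logvar):
    recon = F.mse_loss(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl      # KL term pushes p(z|x) toward the standard normal

# New samples: decode latent draws under a chosen condition (technique label).
model = CVAE()
c = F.one_hot(torch.zeros(16, dtype=torch.long), 3).float()
new_maps = model.dec(torch.cat([torch.randn(16, 8), c], dim=1))
```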
Neural Network Structure
Figure 6 is the network structure diagram, and Figure 7 is the algorithm flow chart of the network; k in Figure 7 denotes the training epoch. For the sequential massage maps collected by the massage glove, this paper combines a 2D convolutional neural network with a recurrent neural network to design a method that extracts not only the spatial-domain features of a single massage map but also the timing-domain features between maps.
First, the preprocessed massage data are used as the input of the neural network, and the number of training iterations is set. The 2D convolutional neural network extracts the spatial-domain features of the massage data, and the long short-term memory recurrent neural network extracts the time-domain features; a frame-attention mechanism is introduced between the two networks. The 2D convolutional network adopts the ResNet-152 model pre-trained on the ILSVRC-2012-CLS dataset, with some of its pre-trained weights frozen to reduce the computational cost of training. It reduces the dimensionality of the roughly 20-frame massage map group of each sample and encodes its features. The encoded massage map group is then arranged in chronological order, so the data dimension becomes (batch size, frames, CNN embed dim), and the encoded data are pooled globally. After the frame attention block, the frame dimension of the massage map group obtains weights in the range 0-1, with the data dimension still (batch size, frames, CNN embed dim). The data are input into the long short-term memory (LSTM) recurrent neural network in frame order to learn the timing-domain features, and a linear layer after the recurrent network reduces the data dimension to (batch size, N categories). After the softmax layer, the output is normalized to 0-1, and the cross-entropy loss between the recognition result and the true label is computed. The back-propagation algorithm updates the network weights after each training epoch; after 50 epochs, a neural network that can recognize massage techniques is obtained.
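A minimal PyTorch sketch of the described pipeline: a pretrained ResNet-152 encoder applied per frame, a frame-attention stage, an LSTM, and a linear classifier. The embedding and hidden sizes are illustrative assumptions; nn.Identity() stands in for the frame-attention block sketched later in this section so the example stays runnable on its own.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class MassageNet(nn.Module):
    def __init__(self, embed_dim=256, hidden=128, n_cls=3):
        super().__init__()
        cnn = models.resnet152(pretrained=True)   # ILSVRC-2012-CLS weights
        cnn.fc = nn.Sequential(nn.Linear(cnn.fc.in_features, 512),
                               nn.ReLU(), nn.Linear(512, embed_dim))
        self.cnn = cnn
        # Frame attention slots in here; see the frame-attention sketch below.
        self.attn = nn.Identity()
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_cls)

    def forward(self, x):                          # x: (batch, frames, C, H, W)
        b, t = x.shape[:2]
        feat = self.cnn(x.flatten(0, 1)).view(b, t, -1)  # per-frame encoding
        feat = self.attn(feat)                     # weight the frame dimension
        out, _ = self.lstm(feat)                   # timing-domain features
        return self.fc(out[:, -1])                 # logits; softmax is in the loss

logits = MassageNet()(torch.randn(2, 20, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 3])
```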
Convolutional Neural Network
The convolutional neural network can process multi-dimensional data, achieving dimensionality reduction and feature extraction by performing sliding convolution operations on the previous layer's data. It mainly consists of an input layer, convolution layers, pooling layers, ReLU layers, and fully connected layers. A convolutional layer can be seen as a set of filters with learnable parameters; each filter is spatially small, but its depth matches the input data. Filters are activated when the network sees certain visual features, such as edges in certain directions or spots of certain colors. The convolutional layer reduces the dimensionality of the data and extracts spatial-domain features. Usually, a pooling layer is inserted periodically between consecutive convolutional layers; it gradually reduces the spatial size of the data, which reduces the number of parameters and the computational resource consumption, and it effectively controls overfitting. Pooling can be divided into max pooling, average pooling, and L2 pooling. The ReLU layer introduces nonlinearity and improves the generalization ability of the network. Usually, several linear layers are added at the end of the convolutional network to reduce the data dimension to the number of classes.
The main advantages of convolutional networks are [30]: (1) sparse interaction: compared with fully connected matrix multiplication, each neuron is connected only to a small region of the previous layer, greatly reducing the required weight parameters and mitigating overfitting to a certain extent; (2) parameter sharing: the same convolution kernel shares the same parameters and processes all input data; (3) equivariant representation: small translations of the input data have little impact on the classification.
In this paper, the convolutional module adopts the ResNet-152 model pre-trained on the ILSVRC-2012-CLS dataset; the last linear layer of the original ResNet-152 is removed and two linear layers are added. The three dimensions (color channel, horizontal, and vertical pixels) are reduced to an embedding layer. The original input dimensions of the neural network are (batch size, frames, channels, image size x, image size y); after the 2D convolutional network extracts spatial features, the dimensions become (batch size, frames, CNN embed dim). Some weight parameters of the pre-trained ResNet-152 are frozen during training to reduce the computational cost.
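A minimal sketch of freezing most of the pretrained ResNet-152 weights to cut the training cost; exactly which layers the paper leaves trainable is not stated, so freezing the whole backbone here is an assumption on our part.

```python
import torch.nn as nn
import torchvision.models as models

cnn = models.resnet152(pretrained=True)     # ILSVRC-2012-CLS pretrained weights
for p in cnn.parameters():
    p.requires_grad = False                 # freeze the pretrained backbone
cnn.fc = nn.Sequential(                     # replace the last linear layer
    nn.Linear(cnn.fc.in_features, 512), nn.ReLU(), nn.Linear(512, 256))
# Only the new head (created after the freeze) receives gradient updates.
trainable = [p for p in cnn.parameters() if p.requires_grad]
```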
Attention Mechanism
A two-dimensional image has three dimensions: length, width, and channel. The channel attention module can automatically learn weights for the image channel dimension through network training. The channel attention model is shown in Figure 8, where X and X̃ are the input and output variables; C, H, and W denote the channel, height, and width dimensions; F_tr denotes an arbitrary given transformation, F_ex the excitation transformation, and F_scale the rescaling transformation. After the convolution operation, a bypass first performs a squeeze operation that compresses the length and width of the image into a single real number, equivalent to pooling with a global receptive field; the channel dimension remains unchanged. Second, an excitation operation on the squeezed image generates a weight for each channel through a learned parameter w, explicitly modeling the correlation between channels. Finally, the squeezed-and-excited bypass data are multiplied with the original convolved image data, assigning the weights to the channel dimension. For massage data, the channel attention mechanism is adapted into a frame attention mechanism that assigns weights to the frame dimension. Its structure is shown in Figure 9, where X and X̃ are the input and output variables and r is the dimensionality-reduction coefficient of the linear layer. The massage tactile data output by the 2D convolutional network has dimension (batch size, frames, CNN embed dim); it is transposed to (batch size, CNN embed dim, frames), reduced to (1, 1, frames) by a global pooling layer, and changed to (1, 1, frames/r) by a linear layer, with a ReLU activation introducing nonlinearity. After another linear layer the dimension is restored to (1, 1, frames), and a sigmoid layer normalizes the frame values to between 0 and 1. These values are multiplied with the original spatial features from the 2D convolutional network, giving the frame dimension weights between 0 and 1, and the dimension is finally restored to (batch size, frames, CNN embed dim) by transposition. With this frame attention mechanism the network independently learns the weights of the massage data at different moments, which enhances feature learning and alleviates overfitting.
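A minimal PyTorch sketch of the frame attention block described above: transpose, global pooling over the embedding dimension, linear(frames/r), ReLU, linear(frames), sigmoid, and rescaling of the frame dimension. The values of n_frames and r are illustrative.

```python
import torch
import torch.nn as nn

class FrameAttention(nn.Module):
    def __init__(self, n_frames=20, r=4):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(n_frames, n_frames // r), nn.ReLU(),
                                nn.Linear(n_frames // r, n_frames), nn.Sigmoid())

    def forward(self, x):                     # x: (batch, frames, embed_dim)
        s = x.transpose(1, 2)                 # (batch, embed_dim, frames)
        s = s.mean(dim=1)                     # global pool -> (batch, frames)
        w = self.fc(s)                        # frame weights in (0, 1)
        return x * w.unsqueeze(-1)            # rescale each frame's features

weighted = FrameAttention()(torch.randn(2, 20, 256))
```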
Recurrent Neural Network
The recurrent neural network captures timing-domain dependencies between sequence data by unrolling the computational graph in the time domain. During unrolling, the data at different moments pass through the same RNN computing unit, and these units share weights so that the network learns contextual relevance. However, because of this weight sharing, RNN units suffer from serious gradient explosion and gradient vanishing problems. To solve this, [31] proposed the gated long short-term memory (LSTM) network, which maintains a cell state and controls the forgetting, addition, and output of information through input gates, forget gates, and output gates. The derivative calculation, multiplicative in a traditional RNN, becomes additive, avoiding gradient vanishing, so the LSTM can process longer time series data.
As shown in Figure 10, assume the features extracted by the convolutional network for a group of massage techniques are X = (x_1, x_2, ..., x_T), x_i ∈ R^m, i = 1, 2, ..., T. The vectors in X are input into the LSTM network in sequence, and for the input x_t at time t the gate computations take the standard form (reconstructed here from the symbol definitions below): (1) input gate: i_t = σ(W_i · [h_{t−1}, x_t] + b_i), C̃_t = tanh(W_C · [h_{t−1}, x_t] + b_C); (2) forget gate: f_t = σ(W_f · [h_{t−1}, x_t] + b_f), C_t = f_t · C_{t−1} + i_t · C̃_t; (3) output gate: o_t = σ(W_o · [h_{t−1}, x_t] + b_o), h_t = o_t · tanh(C_t); where h_{t−1} represents the memory output of the gating unit at the previous moment, x_t the feature input at the current moment, i_t the input gate value, f_t the forget gate value, o_t the output gate value, and C_t the current cell state; W_i and W_C are the input-gate connection weights of the LSTM network with biases b_i and b_C; W_f and b_f are the connection weights and biases of the forget gate; W_o and b_o are those of the output gate; and σ(·) is the sigmoid activation function. The data whose frame dimension has been weighted, of dimension (batch size, frames, CNN embed dim), are input into the long short-term memory recurrent neural network to learn the timing-domain features, and the hidden output h_n of the last time step is connected to a linear layer that reduces the dimension to (batch size, N categories). The network uses the cross-entropy loss function and the Adam optimizer, and is trained to recognize massage techniques.
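A minimal sketch of the training loop described above: cross-entropy loss, Adam optimizer, 50 epochs. The DataLoader and MassageNet are assumed to be the ones sketched earlier; the names are illustrative.

```python
import torch
import torch.nn as nn

def train(model, loader, epochs=50, lr=1e-3, device="cpu"):
    opt = torch.optim.Adam((p for p in model.parameters() if p.requires_grad),
                           lr=lr)
    loss_fn = nn.CrossEntropyLoss()          # applies log-softmax internally
    model.to(device).train()
    for epoch in range(epochs):
        for maps, labels in loader:          # maps: (batch, frames, C, H, W)
            opt.zero_grad()
            loss = loss_fn(model(maps.to(device)), labels.to(device))
            loss.backward()                  # back-propagation
            opt.step()                       # update the weight parameters
```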
Experiment and Results
In the experiments of this paper, the dataset generated by the variational auto-encoder is divided into a training set and a test set at a ratio of 3:1. There are three types of massage techniques; each type contains 900 samples, and each sample contains 20 frames of massage maps in chronological order.
To verify the performance of the ResNet152+RNN network structure built in this paper, we also trained the massage dataset with two other structures, AlexNet+RNN [32] and 3DCNN [23]. The parameter settings of the three structures are listed in Table 1; their learning rate, batch size, dropout probability, training epochs, optimizer, and loss function are set to be the same. ResNet152+RNN and AlexNet+RNN differ only in the convolution module: the former adopts the deep ResNet152 structure with 151 convolution layers, while the latter adopts the AlexNet structure with 5 convolution layers and 3 max-pooling layers. The 3DCNN structure extracts timing information from tactile images by stacking consecutive frames as depth channels of the convolution kernel. The recognition accuracy of the test set versus training epoch for the three networks is shown in Figure 11. Figure 11. Test data training score curves of the three networks.
As shown in Figure 11, with the learning rate, batch size, and other parameters identical, the recognition accuracy of the ResNet152+RNN structure converges to 100%; that of the AlexNet+RNN structure stabilizes at 66% over 50 training epochs; and that of the 3DCNN structure oscillates around 70% over 50 training epochs. The confusion matrices of the three networks are shown in Figure 12. The ResNet152+RNN network identifies the three massage actions well. The AlexNet+RNN and 3DCNN networks recognize the third massage action accurately but recognize the first and second actions poorly, confusing the two during recognition; in particular, the first action is easily misidentified as the second. This is likely because the first and second massage actions are both rubbing actions differing only in direction, and the variational auto-encoder may introduce noise into the expanded dataset that makes the two actions harder to distinguish. The deeper ResNet152 module can better extract the deep features of the massage data as the convolutional layer, so we choose it as the convolutional module of our network. Because the self-made massage dataset has few categories and low complexity, additional noise is introduced when the variational auto-encoder expands the dataset, and the designed model has a complex structure and strong learning ability, the network easily mistakes errors in the training data for general laws of the data itself, causing overfitting and reducing the generalization ability of the model. Therefore, it is necessary either to set a dropout probability for the parameters during training to weaken the network's learning of data errors, or to introduce the attention mechanism into the model to assign weights to each frame of the massage map and enhance the network's extraction of data features.
Based on the ResNet152+RNN network structure, four variants are constructed: dropout probability 0.3 without the attention mechanism; dropout probability 0.3 with the attention mechanism; dropout probability 0 without the attention mechanism; and dropout probability 0 with the attention mechanism. The four structures are trained on the massage dataset, and the recognition accuracy of the test set versus training epoch is shown in Figure 13. When the dropout probability is 0.3 and the attention mechanism is not introduced, the test accuracy converges at the 17th epoch; compared with the other three models, its convergence is slow and overfitting appears at the 16th epoch. When the dropout probability is 0.3 and the attention mechanism is introduced, the test accuracy does not converge and overfitting is severe, indicating that dropout may prevent some attention parameters from being learned, so some features of the data itself are lost and the model structure worsens. When the dropout probability is 0 and the attention mechanism is not introduced, the test accuracy does not converge and oscillates between 95% and 100%, showing overfitting. When the dropout probability is 0 and the attention mechanism is introduced, the test accuracy converges at the 8th epoch, the convergence is rapid, and no overfitting occurs during training, indicating that the frame attention mechanism improves the model structure to some extent, enhances the extraction of data features, speeds up convergence, and effectively alleviates overfitting.
This paper uses three indicators to evaluate the four network structures: recognition accuracy, recall, and convergence epoch. Recognition accuracy is the proportion of correctly identified samples among all samples in the massage dataset; recall is the proportion of correctly identified samples of a given massage technique among all samples of that technique, and the lowest recall of the three action categories is reported; the convergence epoch is the epoch from which the recognition accuracy is stable and no longer decreases. High recognition accuracy and recall together with a short convergence epoch indicate good model performance. The models from the last three training epochs of each structure were used to recognize the entire dataset, and the average recognition accuracy and recall were taken. Table 2 compares the three indicators for the four structures. The structure with dropout probability 0 and the attention mechanism achieves the highest recognition accuracy and recall and the shortest convergence epoch. Compared with the structure with dropout 0.3 plus attention and the structure with dropout 0 without attention, its recognition accuracy is higher by 0.74% and 1.61%, and its recall by 2.29% and 4.82%, respectively. Compared with the structure with dropout 0.3 and no attention, it converges 12 epochs earlier. Therefore, introducing the attention mechanism effectively improves the network structure, reduces the epochs required for convergence, and alleviates overfitting, enhancing the generalization ability of the network.
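A minimal sketch of the three evaluation indicators: overall recognition accuracy, per-class recall (the minimum over the three techniques is reported), and the convergence epoch read off the accuracy curve. It assumes all three classes appear in the evaluated labels.

```python
import numpy as np

def accuracy(pred, true):
    return np.mean(pred == true)

def min_recall(pred, true, n_cls=3):
    # Recall per technique, then the lowest of the three categories.
    return min(np.mean(pred[true == c] == c) for c in range(n_cls))

def convergence_epoch(acc_curve, tol=1e-6):
    """First epoch after which the test accuracy never decreases."""
    acc = np.asarray(acc_curve)
    for e in range(len(acc)):
        if np.all(np.diff(acc[e:]) >= -tol):
            return e
    return len(acc) - 1
```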
Conclusions
This paper proposes a method combining a convolutional neural network, a long short-term memory network, and an attention mechanism to identify massage techniques. The massage maps of a masseur are collected with a self-made massage glove and its data acquisition system; the frame difference method and a conditional variational auto-encoder are used to process the data into a massage dataset. By combining the convolutional neural network and the recurrent neural network, the method extracts features of the massage maps in both the spatial and timing domains; the channel attention mechanism is adapted into a frame attention mechanism for tactile images and introduced into the network structure, so that the weight of each massage map in the frame dimension is trained and learned automatically, enhancing the extraction of data features. By varying the dropout probability of the model parameters during training and whether the attention mechanism is introduced, four structures based on ResNet152+RNN are constructed to learn the dataset. Experiments show that, compared with the other three structures, the structure with dropout probability 0 and the attention mechanism achieves the highest recognition accuracy and recall, reaching 100% on the self-made massage dataset, with a short convergence epoch of only 5 cycles and no overfitting. The proposed network structure learns the features of massage map groups in both the spatial and timing domains and effectively alleviates overfitting when the dataset complexity is low. We also compare ResNet152+RNN with the AlexNet+RNN and 3DCNN structures; the results show that ResNet152+RNN performs better on the massage action recognition task. Our method is well suited to identifying massage techniques and provides a solid foundation for guiding robotic massage applications.
In future work, regarding the sensor and its data acquisition system, the flexibility and robustness of the sensor could be improved at the level of the sensing principle, the speed of the data acquisition circuit increased, and the crosstalk during acquisition reduced; regarding the neural network, recognition could be extended from a single massage technique to segmenting and identifying individual techniques within a combination of massage techniques. | 2022-07-31T15:04:36.342Z | 2022-07-28T00:00:00.000 | {
"year": 2022,
"sha1": "20aee05b24283fbcc6e99610fd5d68967e677043",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8220/22/15/5632/pdf?version=1658987286",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0daab3757cfb5601e11b2b7c0eae996d1fe5545f",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
13919059 | pes2o/s2orc | v3-fos-license | Banach Center Publications, Volume ** Institute of Mathematics Polish Academy of Sciences Warszawa 19** Divergences in Formal Variational Calculus and Boundary Terms in Hamiltonian Formalism
It is shown how to extend the formal variational calculus in order to incorporate integrals of divergences into it. Such a generalization permits to study nontrivial boundary problems in field theory on the base of canonical formalism.
1. Introduction. The Hamiltonian formulation of classical mechanics [Arn] is based on geometrical constructions which use such notions as differential forms, vector fields and multivectors, and such operations as differential, interior product, Lie derivative, Schouten-Nijenhuis bracket. Most of these constructions were extended to field theory in the process of studying nonlinear integrable models during the last 20 years [Olv86]. This approach has been called the formal variational calculus [GD] because it ignores any terms arising as a result of integration by parts. This is fully justified in case of periodic boundary conditions or fast decay of fields at spatial infinity, but unfortunately, this method is not applicable in its initial form to many other problems interesting from physical point of view. For example, massless fields are slowly decaying at infinity and, as a result, some important characteristics of these fields are expressed just through surface integrals (or volume integrals of spatial divergences). They are necessary to form canonical generators of the global gauge transformations or asymptotic symmetries of the Riemannian metric. The great efforts were started at the end of fifties to understand the role of surface terms in the Hamiltonian of General Relativity [ADM]. Only after 15 years of study was the satisfactory explanation given [RT]. But even then not all questions were answered. For example, one might worry how to retain the surface terms, {H(ξ), H(η)} = H([ξ, η]), which are necessary to realize the Poincaré algebra in asymptotically flat space [RT], [Sol85],
when doing local calculations of the constraint algebra {H(x), H(y)} = g^{ab}(x) H_b(x) δ_{,a}(x, y) − g^{ab}(y) H_b(y) δ_{,a}(y, x), {H(x), H_a(y)} = −H(y) δ_{,a}(y, x), {H_a(x), H_b(y)} = H_b(x) δ_{,a}(x, y) − H_a(y) δ_{,b}(y, x). We will show in the following that all the main structures of the formal variational calculus can be extended to include nontrivial contributions of divergences through the introduction of a new grading and a new pairing compatible with it. Thus it proves possible to preserve the nice geometrical language in a more general case than before. In the end the field theory Poisson bracket is given by a new formula which differs from the standard one by surface terms. Simultaneously we get the answer to the mentioned problem of the disappearance of surface contributions in local calculations with the δ-function. The natural way to take the boundary terms into account is to introduce the characteristic function θ_Ω(x) of the integration domain Ω; relations expressing derivatives of θ_Ω through surface δ-functions then give the solution. In its turn this is connected with the observation [Sol92] that transformations of a certain type (for example, the transformation to Ashtekar's variables) in field theory are canonical only up to boundary contributions, because the standard Euler-Lagrange variational derivatives in general do not commute [And76], [And78].
We expect that boundary conditions should be treated in this formalism as a kind of constraint put on the initial data, i.e., they should be added to the Hamiltonian with some Lagrange multipliers and then checked for compatibility with the dynamics. The requirement of compatibility may lead to secondary boundary conditions or to fixing the Lagrange multipliers. But at present this subject has not been studied enough, and our consideration is preliminary and limited to one example: the nonlinear Schrödinger equation.
2. New Poisson bracket formula. Below we use the local coordinate language and, instead of a manifold with boundary, consider a domain Ω in R^n having a smooth boundary ∂Ω. We do not expect that a global formulation would meet serious difficulties.
Definition 1. An integral over a finite domain Ω of a function of the field variables φ_A(x), A = 1, ..., p, and their partial derivatives D_Jφ_A up to some finite order is called a local functional.
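In standard notation Definition 1 can be sketched as follows, with f and the finite order k of derivatives as above:

```latex
\[
  F[\phi] \;=\; \int_{\Omega} f\bigl(x,\ \phi_A(x),\ D_J\phi_A(x)\bigr)\, d^n x ,
  \qquad |J| \le k < \infty .
\]
```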
R e m a r k 1. In contrast to the standard definition we do not treat these local functionals as equivalent if they differ by a divergence term.
All the functions f and φ_A, as well as their variations, are supposed throughout the paper to be infinitely smooth, i.e. C∞(R^n). We use multi-index notation J = (j_1, ..., j_n).
The derivative operator D below denotes the full partial derivative, taking into account also the coordinate dependence of the fields φ_A(x). As the number of sums in some formulae of this paper is rather large, we write only the summation sign without displaying the indices; according to this rule, a sum over all repeated indices should be understood. In the cases where this is not so, we display the summation indices. We also do not show the limits of summation, because they are natural, i.e. outside them the summand is simply zero. Usually we omit d^n x in the integrals and show the arguments only when they could be confused. We denote by A the space of local functionals. It is important that this space includes functionals with integrands depending on derivatives of arbitrary order [And92]; otherwise the Poisson brackets could lead out of A. The following constructions give the general definition of the field theory Poisson bracket. The key idea of the new formula is the exploitation of full variations which are free on the boundary: the bracket is built from the variation of a local functional and a trace of differential operators, an important property of the trace being Tr(Â D B̂) = D Tr(Â B̂) = Tr(D Â B̂).
The individual structure of a Poisson bracket is given by the matrix I^{AB}. A more general treatment of it will be given in the next Section; here it is simply a constant antisymmetric matrix. Symmetrized covariant derivatives can also be used in the expression for the first variation of a local functional. We can replace partial derivatives by covariant ones in the trace calculation if the curvature is zero or if one operator is simply multiplication by a function. Then the covariance of the new formula under changes of independent variables is evident. In the general case a special consideration is necessary.
The new bracket differs from the standard one in the exact fulfilment of the Jacobi identity under arbitrary boundary conditions [Sol93]. At the same time its calculation is no more complicated than usual, since we need not integrate by parts to get the Euler-Lagrange derivatives.
We can also use another representation of the first variation [Olv86], in which the higher Eulerian operators E [KMGZ], [Ald] appear; the zero order operator is just the standard Euler-Lagrange variational derivative. The binomial coefficients for multi-indices are products of the ordinary ones, (K over J) = Π_i (k_i over j_i), where (k over j) = k!/(j!(k − j)!) for 0 ≤ j ≤ k and zero otherwise. Let us mention that if J is not contained in K, then all quantities carrying the multi-index (K − J) are zero. The sums over J and K above are in fact finite because a local functional can depend only on a finite number of derivatives according to Definition 1.
These operators possess a remarkable property used below.
3. Extension of the formal variational calculus. In dealing with the terms arising in the integration by parts it is convenient to represent integrals over a finite domain as integrals over the infinite space with the help of the characteristic function θ_Ω(x), equal to 1 for x ∈ Ω and to 0 otherwise. We can understand it also as a Heaviside function, θ_Ω(x) = θ(P_Ω), where the equation P_Ω(x) = 0 defines the boundary, so that derivatives of θ_Ω produce surface δ-functions,
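as in the following sketch, which uses the standard distributional chain rule; the precise form of the surface δ-term is our reconstruction, not quoted from the paper:

```latex
\[
  \theta_\Omega(x) \;=\;
  \begin{cases}
    1, & x \in \Omega,\\[2pt]
    0, & \text{otherwise,}
  \end{cases}
  \qquad
  D_a\,\theta\bigl(P_\Omega(x)\bigr)
  \;=\; \delta\bigl(P_\Omega(x)\bigr)\, D_a P_\Omega(x) .
\]
```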
Thus, for example, the full variation of a local functional can be expressed in a form whose coefficient could be called the full variational derivative. Such a representation corresponds to the situation opposite to the standard one: here the distributions are of finite support whereas the test functions δφ_A are arbitrary.
A grading in a linear space L is a decomposition of it into a direct sum of subspaces, with a special value of some function p (the grading function) assigned to all the elements of each subspace [Dorf]. Elements of each subspace are called homogeneous.
In our case the factor D_J θ_Ω is responsible for the grading, and the function p takes its values in the set of all positive multi-indices J = (j_1, ..., j_n). We can always return to the standard formal variational calculus by putting θ_Ω(x) ≡ 1.
A bilinear operation x, y → x • y defined on L is said to be compatible with the grading if the product of any homogeneous elements is also homogeneous and its grading is determined by the gradings of the factors.
3.1. Local functionals and evolutionary vector fields. Here we will call the expression given in Definition 1 the canonical form of a local functional. We formally extend that definition by allowing local functionals to be written with derivatives of θ_Ω as extra factors, where, in accordance with the previous definition, only a finite number of terms is allowed.
Here and below we simplify the notation for the derivatives of θ and drop the subscript Ω. Of course, any such functional can be transformed to the form used above through integration by parts, and the formal integration by parts over the infinite space R^n evidently changes the grading. It will become clear below that the general situation is, on one side, compatibility of all bilinear operations with the grading and, on the other side, their compatibility with the formal integration by parts. So, the basic objects (local functionals etc.) are defined as equivalence classes modulo formal divergences (i.e., divergences of expressions containing θ-factors), and the unique decomposition into homogeneous subspaces with a fixed grading function can be made only for representatives of these classes. But we will see that the pairing is defined in such a way as to avoid any ambiguity. We call expressions constructed from the characteristics ψ^J_A the evolutionary vector fields; their action on local functionals is defined so that it is compatible with the formal integration by parts, i.e. ψ Div(f) = Div(ψ f), just as in the standard formal variational calculus, a relation which is, of course, valid for the integrands. It is easy to check that the evolutionary vector field with suitably constructed coefficients can be considered as the commutator of the evolutionary vector fields ξ and η, with the Jacobi identity fulfilled for the commutator operation. Therefore the vector fields form a Lie algebra.
3.2. Differentials and functional forms. The differential of a local functional is simply its first variation (here and below δφ denotes the variation of the field). It can also be expressed through the Fréchet derivative or through the higher Eulerian operators. This differential is a special example of a functional 1-form; the coefficients of a general functional 1-form are, of course, not unique, since we can perform formal integration by parts.
Let us call the corresponding expression the canonical form of a functional 1-form. Analogously, we can define functional m-forms as integrals, or equivalence classes modulo formal divergences, of vertical forms.
Define the pairing of an evolutionary vector field and a 1-form through the trace of their coefficients.
The differential of m-form given as satisfies standard properties The Lie derivative of a functional form α along an evolutionary vector field ξ can be introduced by the standard formula 3.3.Graded differential operators and their adjoints. We call linear differential operators of the formÎ Let us call linear differential operatorÎ * adjoint toÎ if for arbitrary set of smooth For coefficients of the adjoint operator we can derive the expression It is easy to check that the relation Operators satisfying relationÎ * = −Î will be called skew-adjoint . With the help of them it is possible to express 2-forms (and also 2-vectors to be defined below) in the canonical form It is clear that we can consider these representations of functional forms as formal decompositions over the basis derived as result of the tensor product of δφ A , with the totally antisymmetric multilinear operatorŝ being coefficients of these decompositions.
3.4. Multi-vectors and Schouten-Nijenhuis bracket.
Let us introduce the basis δ/δφ_B(y) dual to the δφ_A by a formal relation, and construct from it, by means of the tensor product, a multi-basis.
Then, by using totally antisymmetric multilinear operators as coefficients, we can define functional m-vectors (or multi-vectors). Here a natural question arises: what is the relation between evolutionary vector fields and 1-vectors? Evidently, evolutionary vector fields lose their form when integrated by parts, whereas 1-vectors conserve it. It is possible to prove the following Proposition [Sol94].
Proposition 1. There is a one-to-one correspondence between evolutionary vector fields and functional 1-vectors. The coefficients ξ^J_A of a 1-vector in the canonical form are equal to the characteristic of the evolutionary vector field.
It is not difficult to show that we can define a pairing (interior product) of 1-forms and 1-vectors, and this pairing preserves the identification α(ξ) = θ_{(I+J)} Tr(α_I ξ_J).
When the 1-vector is in the canonical form this result coincides with Eq. (1). The interior product of a 1-vector and an m-form, or, analogously, of a 1-form and an m-vector, is defined in the same way.
Then we can also define the value of an m-form on m 1-vectors (or, analogously, of an m-vector on m 1-forms), where in the corresponding trace each entry of the multilinear operator α acts only on the one corresponding ξ, whereas each derivation of the operator ξ acts on the product of α and all the rest of the ξ's. It is possible to extend the differential onto m-vectors and analogously onto mixed objects. Evidently, d²ψ = 0.
With the help of the previous constructions we can define the Schouten-Nijenhuis bracket of two multi-vectors of orders p and q. The result of this operation is a (p + q − 1)-vector, and it is analogous to the Schouten-Nijenhuis bracket in tensor analysis [Nij]. Its use in the formal variational calculus is described in [Dorf], where, however, this bracket is defined only for operators. We can recommend [Olv84] as an interesting source on the Schouten-Nijenhuis bracket for functional multi-vectors. Our construction of this bracket guarantees compatibility with the equivalence modulo divergences: [Div(ξ), η]_SN = Div[ξ, η]_SN = [ξ, Div(η)]_SN.
Proposition 2. The Schouten-Nijenhuis bracket of functional 1-vectors up to a sign coincides with the commutator of corresponding evolutionary vector fields.
P r o o f. Let us take two 1-vectors in canonical form and compute their Schouten-Nijenhuis bracket; a direct calculation then yields the statement.
Proposition 3 (Olver's Lemma [Olv86]). The Schouten-Nijenhuis bracket of two bivectors can be expressed through the two differential operators Î, K̂ which are the coefficients of the bivectors in their canonical form.
P r o o f. Let us consider the Schouten-Nijenhuis bracket of the two bivectors and, without loss of generality, take them in the canonical form, where ξ_A = δ/δφ_A and the operators Î, K̂ are skew-adjoint. We then evaluate the bracket, integrate by parts in the second term, change the order of the multipliers under the wedge product, make the replacement M → M − Q, and reorganize the whole expression. Having in mind the definition (2) of the adjoint operator, we can represent the final result of the calculation in the required form, therefore supporting in this extended formulation the method proposed in [Olv86] for testing the Jacobi identity.
3.5. Poisson brackets and Hamiltonian vector fields. Let us call a bivector formed with the help of a graded skew-adjoint differential operator Î^{AB}, and whose Schouten-Nijenhuis bracket with itself vanishes, a Poisson bivector. The operator Î^{AB} is then called the Hamiltonian operator.
We may call the value of the Poisson bivector on the differentials of two functionals their Poisson bracket. Its explicit form can easily be obtained and depends on the explicit form of the differentials of the functionals, which can be changed by partial integration; of course, all the possible forms are equivalent. Taking the extreme cases, we obtain the expression through Fréchet derivatives or through higher Eulerian operators. Theorem 1. The Poisson bracket defined above satisfies Definition 1.
P r o o f. It follows from three facts: 1) from the previous formulas (3), (4) it is clear that {F, G} is a local functional; 2) the antisymmetry of {F, G} is a consequence of the skew-adjointness of Î^{AB}; 3) the equivalence of the Jacobi identity to the Poisson bivector property can be proved [Sol94].
The result of the interior product of the differential of a local functional H and the Poisson bivector (up to the sign) will be called the Hamiltonian vector field (or the Hamiltonian 1-vector). Construct the graded operator adjoint to θD according to Eq. (2), (θD)* = −θD − Dθ, and form the corresponding skew-adjoint operator. As an example, the nonlinear Schrödinger equation can be treated [LR] as generated by the Hamiltonian with density H = r′q′ + kr²q², H̄ = r̄′q̄′ + kr̄²q̄², and with the Poisson brackets {q(x), r̄(y)} = 2iδ(x, y).
To return to the standard form of this equation we should impose the reality conditions ψ = q = r̄, ψ̄ = r = q̄.
Let us calculate the full variational derivatives; analogous formulas hold for the barred variables.
The natural boundary condition arises if we set the δ-function contribution on the boundary to zero by taking q′ = r′ = q̄′ = r̄′ = 0. By considering the Poisson brackets of integrals of the total spatial derivatives of the canonical variables φ_A = (q, r) with the Hamiltonian, we expect to obtain dynamical equations on the boundary in functional form. Using the Newton-Leibnitz formula we get equations which differ from the standard (bulk) ones. For the given Hamiltonian we obtain the formal equations for the boundary values, assuming independent behaviour at the ends. It is remarkable that the boundary equations are different from the bulk ones despite the boundary condition. These equations can be easily integrated and give elementary oscillations at the ends, where the initial value of ψ determines both the amplitude and the frequency of the oscillator. In this case the dynamics of the boundary values is separated from the bulk dynamics. Of course, this situation is not general.
5. Discussion. There are not many publications on problems where divergences play a nontrivial role in the Hamiltonian formalism of field theory. After the classical paper by Regge and Teitelboim [RT] we can recommend the work by Jezierski and Kijowski [JK] (see also the book [KT]), where the main criterion is also the disappearance of surface terms in the first variation of the Hamiltonian. Such functionals are called admissible or differentiable, but to be convinced of the full consistency of the formalism it is necessary to check two points: 1) the space of admissible local functionals should be closed under the Poisson bracket; 2) the Jacobi identity should be fulfilled for arbitrary admissible functionals; and this check is not explicitly demonstrated in the cited works. For an infinite domain the first requirement was studied by Brown and Henneaux [BH]. In the finite-domain case an important contribution was made by Lewis, Marsden, Montgomery and Ratiu [LMMR], who showed that the standard bracket did not fulfil the Jacobi identity and proposed to modify it by adding some surface terms. Unfortunately, these authors did not consider the first requirement explicitly. We hope that our results could serve as a generalization of their ansatz and could be useful in dealing with interesting problems of field theory.
6. Acknowledgements. The author is most grateful to Professor J. Kijowski for the invitation and to the Stefan Banach Center for hospitality and support.
Discussions with M. Asorey, J. Jezierski, I. Kanatchikov, J. Kijowski, J. Louko, J. Nester and other participants of the Symposium on Differential Geometry and Mathematical Physics are gratefully acknowledged. | 2018-05-08T17:34:15.959Z | 0001-01-01T00:00:00.000 | {
"year": 1995,
"sha1": "00338f239fb34cb5b800f1fb1375f5f843a3d80c",
"oa_license": null,
"oa_url": "https://www.impan.pl/shop/publication/transaction/download/product/109761?download.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "00338f239fb34cb5b800f1fb1375f5f843a3d80c",
"s2fieldsofstudy": [
"Mathematics",
"Physics"
],
"extfieldsofstudy": [
"Mathematics",
"Physics"
]
} |
33884883 | pes2o/s2orc | v3-fos-license | High Resolution Optical Spectra of HBC 722 after Outburst
We report the results of our high resolution optical spectroscopic monitoring campaign ($\lambda$ = 3800 -- 8800 A, R = 30000 -- 45000) of the new FU Orionis-type object HBC 722. We observed HBC 722 with BOES on the 1.8-m telescope between 2010 November 26 and 2010 December 29, and FU Orionis itself on 2011 January 26. We detect a number of previously unreported high-resolution K I and Ca II lines beyond 7500 A. We resolve the H$\alpha$ and Ca II line profiles into three velocity components, which we attribute to both disk and outflow. The increased accretion during outburst can heat the disk to produce the relatively narrow absorption feature and launch outflows appearing as high velocity blue- and redshifted broad features.
INTRODUCTION
The standard star formation model predicts a constant accretion rate (Shu 1977; Terebey et al. 1984; Shu et al. 1987). However, recent studies based on surveys toward nearby low-mass star forming regions (Dunham et al. 2010, and references therein) suggest that the luminosities of young stellar objects are systematically low compared to the standard model. In addition, the discovery of Very Low Luminosity Objects (i.e., VeLLOs; Young et al. 2004; Bourke et al. 2006) and their associated strong outflows (Andre et al. 1999; Lee et al. 2010) raised questions about the steady accretion process. As a result, an alternative mechanism termed the episodic accretion process has been suggested to account for these observational phenomena (Lee 2007, and references therein). The episodic accretion process is characterized by two phases: burst and quiescent accretion. FU Orionis-type objects (hereafter, FUors) have been proposed as prominent examples of burst-accreting protostars, while VeLLOs have been proposed as objects in the quiescent phase of the episodic accretion process.
FUors are a class of low-mass pre-main sequence objects named after FU Orionis, which produced a 5 magnitude optical outburst in 1936 and has remained in its brightened state. As a consequence of eruptive accretion, these protostars exhibit large winds and outflows (Croswell, Hartmann, & Avrett 1987), which are inferred from P Cygni profiles of Hα and other lines. The spectral characteristics of FUors are broad blueshifted emission lines, IR excess, and near-IR CO overtone features, consequences of the energetic burst of accretion-driven viscous heating of the disk. Based on these characteristics, Hartmann & Kenyon (1996) and Reipurth & Aspin (2010) identified about a dozen FUors, although in many cases the initial outburst had not been observed. Very little pre-outburst data exist for FUors; few have been studied from the pre-burst phase to the burst phase, and only one (V1057 Cyg) has a pre-outburst optical spectrum. HBC 722, also known as LkHα 188-G4 and PTF10qpf, was recently identified as a FUor by Semkov et al. (2010) and Miller et al. (2011). Its brightness excursion is ∆V = 4.7 mag (Semkov et al. 2010); it reached its maximum brightness in September 2010 and has been slowly decreasing since then. HBC 722 (RA = 20h 58m 17.0s, Dec = +43° 53′ 42.9″, J2000) is located in an active star-forming region of the North America/Pelican Nebula, 520 pc away (Laugalys et al. 2006). Pre-outburst, HBC 722 was identified as an emission-line object of spectral type K7-M0 in the classical T Tauri phase, with an interstellar reddening of A_V = 3.4 mag (Cohen & Kuhi 1979). During the burst, the optical and NIR spectra of HBC 722 show consistency with a G-type giant/supergiant and an M-type giant/supergiant, respectively (Miller et al. 2011). The burst has not yet exhibited far-infrared feedback detectable with the instruments on board the Herschel Space Observatory (Green et al. 2011). HBC 722 is the best-characterized FUor-like object pre-outburst, and it provides the first opportunity to profile the burst phase of accretion across all wavelengths, allowing us to model the process in detail.
Here we report the high-resolution optical spectra of HBC 722, observed about two months after reaching the maximum brightness of its outburst.
OBSERVATIONS
We have carried out optical high-resolution spectroscopic observations of HBC 722 from November 26 till December 29, 2010 using the Bohyunsan Optical Echelle Spectrograph (BOES; Kim et al. 2002, 2007) attached to the 1.8 m telescope of the Bohyunsan Optical Astronomy Observatory (BOAO) in Korea. We have also observed FU Orionis itself for comparison on January 26, 2011. All spectra were obtained with BOES using either the 200 µm (R ∼ 45,000) or the 300 µm (R ∼ 30,000) fiber. The observed spectral regions cover the optical bands in the 3800-8800 Å range. The typical signal-to-noise ratio at 6700 Å is ∼ 15. The observation log is listed in Table 1.
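For orientation (an illustrative calculation, not taken from the paper), the velocity resolution implied by these resolving powers follows from delta_v = c / R; a minimal Python check:

    # Velocity resolution corresponding to a spectral resolving power R.
    C_KM_S = 299792.458  # speed of light, km/s

    for r in (45000, 30000):
        print(f"R = {r}: delta_v ~ {C_KM_S / r:.1f} km/s")

This gives about 6.7 and 10.0 km/s, comfortably finer than the ~50-200 km/s velocity structure analyzed below.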
The observed spectra were reduced with the IRAF echelle package to produce the spectra for each order of the echelle spectrum. The echelle aperture tracing was performed using the master flat image, a combination of all flat images. After aperture tracing, the flat, the comparison, and the object spectra were extracted from each image, with the same aperture reference as the master flat image. In the flatfielding process, the interference fringes and the pixel-to-pixel spectral variations were corrected. Wavelength calibration was performed with the ThAr lamp spectrum, and the object spectra were normalized in each aperture using the continuum task.
ANALYSIS
First we identified lines from our BOES spectra of HBC 722, using the spectra presented in Miller et al. (2011) as a template. At wavelengths greater than 5000 Å, the BOES spectra have a relatively high S/N ratio, even at wavelengths greater than 8000 Å. However, lines located at wavelengths shorter than 5000 Å were difficult to identify because of the low S/N ratio in this region. The line comparison between our spectra and those presented in Miller et al. (2011) is listed in Table 2. Figure 1 shows two K I lines near 7700 Å and the Ca II triplet lines near 8500 Å.
The spectra of FU Orionis are also plotted for comparison. Although these lines were detected with low-resolution spectroscopy (Miller et al. 2011), the high-resolution spectra presented here show clear blueshifted absorption and redshifted emission components; this P Cygni profile was previously reported only in Hα. We selected two lines with strong P Cygni profiles, Hα and Ca II 8498, and performed a least-χ² fit with Gaussian profiles in order to examine the time variations of the central velocity, FWHM, and peak intensity of each component. The fitting results are shown in Figures 2 and 3.
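The paper does not publish its fitting code; the following self-contained Python sketch illustrates such a least-χ² three-Gaussian decomposition with scipy. All numerical values (amplitudes, velocities, widths) are hypothetical placeholders rather than the measured parameters, and a synthetic spectrum stands in for the data:

    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian(v, amp, v0, fwhm):
        # Gaussian in velocity space; amp < 0 models absorption,
        # amp > 0 models emission on a normalized continuum.
        sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
        return amp * np.exp(-0.5 * ((v - v0) / sigma) ** 2)

    def profile(v, a1, v1, w1, a2, v2, w2, a3, v3, w3):
        # Continuum-normalized three-component line profile.
        return (1.0 + gaussian(v, a1, v1, w1)
                    + gaussian(v, a2, v2, w2)
                    + gaussian(v, a3, v3, w3))

    # Synthetic spectrum standing in for an observed Ca II 8498 profile.
    rng = np.random.default_rng(0)
    v = np.linspace(-300.0, 300.0, 400)          # velocity grid, km/s
    guess = (-0.3, -75.0, 60.0,                  # broad blue absorption
             -0.4, -23.0, 25.0,                  # narrow central absorption
              0.5,  30.0, 70.0)                  # broad red emission
    flux = profile(v, *guess) + rng.normal(0.0, 0.02, v.size)

    # Least-chi^2 fit; for simplicity the initial guesses equal the
    # values used to generate the synthetic data.
    popt, pcov = curve_fit(profile, v, flux, p0=guess)
    perr = np.sqrt(np.diag(pcov))                # 1-sigma uncertainties
    for name, val, err in zip(
            ("a1", "v1", "w1", "a2", "v2", "w2", "a3", "v3", "w3"),
            popt, perr):
        print(f"{name} = {val:7.2f} +/- {err:.2f}")

In practice the velocity grid and normalized flux would come from the reduced BOES orders, and the initial guesses from visual inspection of the profile.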
The Ca II 8498 lines can be decomposed into three components: a broad blue absorption feature, a broad red emission feature, and a relatively narrow central absorption feature. The broad blue absorption and red emission features are most likely associated with outflows, while the relatively narrow absorption feature may be associated with the disk. If the narrow absorption profile is really produced by the disk, we can determine the radial velocity of HBC 722 as the central velocity of the profile, which shows a small variation between −22 and −25 km s⁻¹. This idea is supported by the fact that the central velocities of the two outflow components differ by about ±50 km s⁻¹ from the central velocity of the narrow absorption profile, which is considered to be the disk component.
The Hα lines are also decomposed into three velocity components, as seen in Figure 3: a very broad but shallow blueshifted absorption feature, a relatively narrow blue absorption feature, and a very broad red emission feature. The very narrow emission feature superimposed on the broad emission one is a telluric emission line (Hanuschik 2003), which was included in the fitting of the Gaussian profiles. The relatively narrow absorption feature might be associated with the disk component, as suggested from the Ca II line. The central velocity of the very broad absorption feature is about −200 km s⁻¹, possibly the same fast outflow component as detected in the Hγ line by Miller et al. (2011). However, this very broad blue absorption feature was not detected on 23 December 2010, probably due to a very low S/N in this spectrum. The very broad blue absorption and red emission features must also be associated with outflows, indicative of different velocity components within the outflow. The broad red emission line is very bright compared to the blue absorption line; this large intensity difference suggests that the outflow spans a large area.
In Figure 4, we plot the time variation of each parameter of these fitted Gaussian profiles for the Hα and Ca II 8498 lines. The variation is not significant in most components, given the low S/N ratio; the only exception is the FWHM of the broad red emission feature of Hα, which varied considerably during our observations. In addition, the peak intensity of the broad red emission feature seems to correlate with the variation of its FWHM. Variation in the outflowing material is typically associated with a varying mass accretion rate (e.g. Kurosawa et al. 2006). Therefore, in order to understand the accretion process post-outburst, continued monitoring of these line profiles is required.
SUMMARY
We present a time series of BOES high-resolution optical spectra of HBC 722, a newly reported FUor-like object that underwent an outburst during the summer of 2010. The high-resolution spectra of two K I lines near 7700 Å and the Ca II triplet lines near 8500 Å were covered uniquely by our observations. P Cygni profiles were clearly detected in the Hα and Ca II lines, but only marginally in the K I lines. Using Gaussian fits to the observed line profiles, we show that the Ca II 8498 and the Hα 6563 lines trace the disk component as well as the outflow. The highest velocity component of the outflow (∼ −200 km s⁻¹) was detected in the Hα line, and the broad red emission feature of the Hα line varies the most. In the future, monitoring the variation of these spectral features will help our understanding of the kinematic structures and their time variations as HBC 722 returns to its pre-outburst state. | 2011-04-04T12:20:02.000Z | 2011-04-01T00:00:00.000 | {
"year": 2011,
"sha1": "3e78191757753a9884cb50e650b928149d27f98d",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "3e78191757753a9884cb50e650b928149d27f98d",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
230650822 | pes2o/s2orc | v3-fos-license | Re-Presenting Protestors as Thugs: The Politics of Labelling Dissenting Voices
The use of the word “thugs” has always precipitated the crisis that has existed longue durée in the history of America. The word carries diverse meanings in different spaces, histories, communities, and countries. When used as a stigmatizing label, it can define, classify, restrict and fix boundaries within a society. Through an assessment of political rhetoric, tweets, and media reports, this article evaluates the hegemonic power embedded in the word and its strategic use by world leaders for nefarious purposes in the post-truth era. It also explores the racial underpinnings of the word and the covert intentions behind its usage. This paper critically interrogates the social circumstances in which the word is used to suppress dissent. The role of post-truth media as intermediaries and purveyors of the real and the fake is analyzed. Labelling theory is applied to demonstrate how policy makers mark out a group in order to rationalize the discourse of state violence. The methods and the outcomes of stigmatizing labelling are illustrated, paying special attention to the role it plays in triggering social unrest. The essay argues that the polemics around the word “thug” enable the administrators to shift focus from the real issues, and thereby deny racial minorities their right to challenge government policies and actions.
Introduction
The import of the word "thug" is diverse in different spaces, histories, communities, and countries. The word was borrowed into the English vocabulary in the early part of the nineteenth century. The Europeans used it to signify India's murderous cult, committed to looting. The original meaning of the Indian word "thug" ("a deceiver" or "a scoundrel") underwent semantic changes during the British rule. The colonial underpinning of the word is coloured by imperial interests (Roy, 1999; Wagner, 2007; Woerkens, 2002). In America, the word entered the jargon of Hip-Hop culture and was popularized by Tupac Shakur in the latter part of the twentieth century through the phrase, "thug life". The usage has become part of the "complex narratives that reveal interdependent beliefs about love, loss, morality, and the individual will to survive and triumph" (Jeffries, 2010, pp. 95-96).
By labelling protestors as thugs, heads of state legitimize and normalize state-sanctioned brutality and exercise social control against Afro-Americans. Moreover, they break free from the accountability and responsibility of addressing one of the endemic social problems latent in American society: racial discrimination. "Such labelling is usually considered objective, efficient, routine and indispensable and, perhaps as a consequence, it continues wantonly, without contemplation of the politics involved and the potential adverse outcomes" (Moncrieff, 2007, p. 1).
The label "thug" has been employed in American history by political administrators to keep protestors in place. By a careful analysis of the contexts in which the word has been used, this article aims to explore how the term imposes and fixes criminal traits on a dissenting group. When used as a stigmatizing label, it can define, classify, restrict and fix boundaries within a society. Through an assessment of political rhetoric, tweets, and media reports, this article evaluates the hegemonic power embedded in the strategic use of the word and how it is used for nefarious purposes by world leaders in the post-truth era. It also explores the racial underpinnings of the word and the covert intentions behind its usage. This paper critically interrogates the social circumstances in which the word is used to suppress dissent. Labelling theory is used to demonstrate how policy makers mark out a group in order to rationalize the discourse of state violence. The methods and the outcomes of labelling are discussed in detail, paying special attention to the role it plays in triggering social unrest. The essay argues that the polemics around the word "thug" enable the administrators to shift focus from the real issues, and thereby deny racial minorities their right to challenge government policies and actions. To this end, two incidents from the recent history of America are analyzed to depict the debilitating effect of negative stereotyping: the institutional murder of Freddie Gray in Baltimore, and the homicide of George Floyd in Minneapolis.
Freddie Gray
On 12 April 2015, an African-American man named Freddie Gray, aged twenty-five, was arrested by the Baltimore City Police Department for carrying an illegal switchblade, which later proved to be a legal pocket knife. Gray, who was in good health at the time of arrest, suffered severe spine and neck injuries while in the police vehicle and passed away on 19 April 2015. Protests flared in the streets of Baltimore. The American president, Barack Obama, the Maryland governor, Larry Hogan, and Baltimore's mayor, Stephanie Rawlings-Blake, called the protestors thugs. Freddie Gray was also branded as a thug in social media, as he had earlier been arrested on drug charges and minor crimes (Ford, 2015; Simpson, 2015).
George Floyd
On 25 May 2020, George Floyd, 46, died after being arrested by police for allegedly using a counterfeit bill in Minneapolis, Minnesota. Footage of the arrest shows Floyd pinned to the ground and a white police officer, Derek Chauvin, kneeling on Floyd's neck. Transcripts of the police bodycam footage show Floyd repeating twenty times that he could not breathe. He died within 30 minutes. The four officers involved were fired from the job. The incident triggered widespread protests around the world, and social media campaigns such as "#Ican'tbreathe" gained momentum (Bayrasli, 2020; Muhammad, 2020). President Donald Trump, in a tweet, called the protesters "thugs" and said "when the looting starts, the shooting starts"; Twitter removed the tweet for "glorifying violence" (Donald J. Trump, 2020). The British prime minister responded to the protests in London by stating that "racist thuggery" would be answered by "the force of law" (Sawer et al., 2020). The Australian senator Pauline Hanson called George Floyd "a thug" (Folley, 2020).
The systemic violence of neocolonial states finds expression in stereotyping dissenting voices with violence. The word "thug" is used "to dismiss Black life as less valuable and perpetuates a negative and criminal connotation in forms of micro-insults and microinvalidations" (Smiley & Fakunle, 2016, p. 351). The physical, emotional, and psychological ramifications of such usage extend beyond historical and geographical boundaries. The impact of the process of labelling can be detected not only between the state and the people in a society, but also among people "through constructions of social othering and identity creation" (Wood, 2007, p. 20). The custodial murders of the two Afro-American men bring to the fore the politics of stigmatization that results from labelling. Ronald L. Akers (2012), in his book Criminological Theories, states that society's perception of an individual is reflected in labelling and that this societal process shapes the self-concepts and the character of a person. "When confronted with a label applied by those with power and authority, the individual has little power to resist or negotiate his or her identification with it" (Akers, 2012, pp. 101-102). Both Gray and Floyd were stamped as thugs immediately after their deaths, on the grounds that they had been arrested before for minor crimes. As the fact-checking website Snopes points out, "The question of past arrests often surfaces among people who want to rationalize police officers' actions when Black men are killed in custody" (Lee, 2020). Richard Reddick, the Associate Dean of Equity at the University of Texas, considers this part of a communication strategy to dehumanize the victim, so that the public need not feel sorry for the victim and the police can escape from responsibility. He observes that "the claims about Floyd were also a product of the era's highly polarized media environment, compounded by years of problematic storytelling by politicians and reporters that portrays Black men only as "criminal entities" instead of nuanced people" (Lee, 2020). In a case involving evident police brutality, researching the background of the victim can only lead to the standardization of state violence.
De-coding Thugs
Charles Hirschman (2004) concedes that, even though racism is rejected as unscientific, racial boundaries persist as significant social markers informing public opinion and the design of state policy (p. 400). There is a gamut of words to represent blackness, like "thug," "ghetto," "brute," "hood," "sketchy," and "shady," without explicitly sounding racially biased (Smiley & Fakunle, 2016, p. 354). When a Black American uses the word "thug" to address himself, it signifies authenticity, power, and being cool. American rappers use it creatively in their songs to communicate their genuine life. When Afro-Americans are labelled as "thugs" by the dominant whites, the meaning undergoes a paradigmatic shift. Black men with criminal inclinations are often branded as "thugs". The word is used in the public sphere without reproach as it is thought to be racially neutral (Boyd, 2007), and at the same time it facilitates racism that has become covert and implicit.
There is a disjuncture between how Afro-Americans perceive the label "thug" and how the policy makers of the state view them. Vulnerability, insecurity, forced exclusion and alienation suffered by the racial minorities are coded in the word from an Afro-American perspective. Apart from the accepted meaning of exerting violence, the word carries multifarious meanings for the political administrators, coloured by racial undertones, which include: unemployed problem-makers, looters, people indulging in arson, drug-peddlers and the ones in need of dire rehabilitation. Every time the word is repeated these meanings are re-produced and reiterated. Equating the black body with savages, criminals, and unmanageable ruffians facilitates white supremacy (Davis, 1998; Muhammad, 2010; Smiley & Fakunle, 2016). Thus, labelling "… inscribes in and enables the very construction of social reality" (Gupte & Mehta, 2007, p. 66). According to Ian Haney-López, author of Dog Whistle Politics, "racial code operates by appealing to deep-seated stereotypes of groups that are perceived as threatening. But they differ from naked racial terms in that they don't emphasize biology - so it's not references to brown skin or black skin" (Lopez, 2016). The word "thug" is used more to target a racially defined group than a specific behaviour, serving as a replacement for the n-word, and it has been repeatedly employed by politicians for oppressive shaming, which John Braithwaite (2006) warns can result in "thought control and stultification of human diversity" (p. 12).
Post-truth Media
Mediated and embodied politics of media has become one of the key features of the post-truth era. Representations by the media focus on the prevailing narratives and overlook a range of other potential and convincing interpretations. The CNN news segment OutFront, hosted by Erin Burnett, cited the use of the word "thug" by the US President, Barack Obama, and Baltimore's Mayor, Stephanie Rawlings-Blake, and asked the Baltimore City Council member Carl Stokes, "isn't it the right word?" The reply he gave was, "so calling them thugs? Just call them n***" (Burnett, 2015). "Stokes was calling attention to the use of coded language that is in some ways explicitly and other ways implicitly used as a substitute for personally mediated racism" (Smiley & Fakunle, 2016, p. 351). The process of labelling produces reductionist approaches that stigmatize, provoke and sustain discord in the society (Moncrieff, 2007, p. 3). Burnett associates "thugs" with deviant behaviour and is ignorant of the history of this offending word for black Americans. Such hegemonic articulations are informed and constructed by previous representations circulated in the mass media (Hobart, 2007, p. 134). Stokes, in turn, is conscious of the structural underpinnings of the word and reflects the sentiments of a black person when the word "thug" is used to signify their existence.
Media function as the intermediaries and purveyors of the real and the fake. The representations not only reflect the crisis but also "…actively (re)produce it, name the stakes and set the parameters of what is considered real-or not" (Overell & Nicholls, 2019, p. 7). Immediately after a fatal incident, media would resort to the collection of details that often trigger misconceptions about these individuals and would unreasonably label them as thugs. The first information that reaches the public would be difficult to alter. Such re-presentations would then focus on the protestors for hampering peace in the society. The same label that was given to the victim would be transferred to the people who raise their voice against this injustice. "True understanding of the power of racialized language, both overt and covert, should be the new standard of journalistic integrity" (Smiley & Fakunle, 2016, p. 365).
Conclusion
The protest of the state against the protestors is reflected in the label "thugs". Every instance of remonstration gives an opportunity for naturalizing and normalizing the clusters of meaning that designate the word. Language is the key to the art of governance in a liberal democracy. The word "thug" has replaced the n-word and has played a significant role in manufacturing the fear of "the other". Apart from this, race-coded words are used by the state for authoritarian social control. They can "trigger deep seated feelings of revulsion and give permission to vent frustration on targets lacking economic, social and political power" (Kitossa, 2018). Labelling is a process that is grounded in history and extends globally, and it is used by administrators in perpetuating and legitimizing systemic violence. Branding dissenting groups as "thugs" is a political strategy of state policing to isolate and detain protestors. However, it would be an impediment to the struggles for social reformation against the racist totalitarianism prevalent in western countries and would undermine the democratic values of justice and equality. Instead of shifting the blame from the perpetrator to the victim, the state has to be accountable for every action it initiates. | 2020-11-21T18:25:04.127Z | 2020-10-17T00:00:00.000 | {
"year": 2020,
"sha1": "d55b1aeda77ad1bc28eaafa096dbb01d11834d6f",
"oa_license": "CCBYNC",
"oa_url": "http://rupkatha.com/V12/n5/rioc1s2n4.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "d55b1aeda77ad1bc28eaafa096dbb01d11834d6f",
"s2fieldsofstudy": [
"Political Science"
],
"extfieldsofstudy": [
"Political Science"
]
} |
10076868 | pes2o/s2orc | v3-fos-license | Inhibition of hepatitis B virus gene expression and replication by endoribonuclease-prepared siRNA
Endoribonuclease-prepared siRNA (esiRNA) is an alternative tool to chemically synthesized siRNA for gene silencing. Since esiRNAs are directed against long target sequences, genetic variations in the target sequences have little influence on their effectiveness. The ability of esiRNAs to inhibit hepatitis B virus (HBV) gene expression and replication was tested. EsiRNAs targeting the coding region of the HBV surface antigen (HBsAg) and the nucleocapsid (HBcAg) specifically inhibited the expression of HBsAg and HBcAg when cotransfected with the respective expression plasmids. Both esiRNAs reduced the HBV transcripts and replication intermediates in transiently transfected cells and in cells with stably integrated HBV genomes. Compared with synthetic siRNA, the esiRNA targeting HBsAg was less effective than the selected synthetic siRNA in terms of the inhibition of HBV gene expression and replication. However, while the ability of synthetic siRNAs for specific gene silencing was strongly impaired by nucleotide substitutions within the target sequences, the efficiency of gene silencing by esiRNAs was not influenced by sequence variation. The transfection of esiRNA did not induce interferon-stimulated genes (ISGs) like STAT1 and ISG15, indicating the absence of off-target effects. In general, esiRNAs strongly inhibited HBV gene expression and replication and may have an advantage against HBV strains which are genetically variable.
Introduction
Hepatitis B virus (HBV) causes acute and chronic infection in humans. Although effective vaccines are available to prevent the transmission of HBV, HBV infection remains a global health problem due to the approximately 350 million chronically HBV-infected people worldwide. These individuals have a relatively high risk of developing end-stage liver diseases, such as liver cirrhosis and hepatocellular carcinoma (Seeger and Mason, 2000). To date, treatment regimens for chronic hepatitis B are costly and have limited effectiveness. Only about one-third of the patients treated with alpha-interferon show a sustained response. Nucleoside analogues do not eliminate the virus completely and may select resistant viral variants (Marcellin, 2002). Thus, it is urgent to develop new antiviral drugs against HBV.
RNA interference (RNAi) is a process whereby double-stranded RNA (dsRNA) induces a sequence-specific degradation of homologous messenger RNA (mRNA) (Hannon, 2002). This process is mediated by small interfering RNAs (siRNAs) 21-23 nucleotides in length. In the natural RNAi pathway, siRNAs are derived from the processing of long dsRNAs by the nuclease Dicer into discrete 21-mers. Using chemically synthesized or vector-expressed siRNAs, many clinically important viruses including human immunodeficiency virus (HIV), severe acute respiratory syndrome coronavirus, HBV, and hepatitis C virus (HCV) could be inhibited in vitro (Haasnoot et al., 2003; Li et al., 2005; Randall and Rice, 2004; Stevenson, 2003; Wu and Nandamuri, 2004). A number of recent studies have demonstrated that HBV gene expression and viral replication could be inhibited (Giladi et al., 2003; Guo et al., 2005; Hamasaki et al., 2003; Klein et al., 2003; Konishi et al., 2003; McCaffrey et al., 2003; Morrissey et al., 2005a,b; Shlomai and Shaul, 2003; Wu et al., 2005), or the virus even cleared from the liver of transgenic mice, by chemically synthesized or vector-expressed siRNAs (Uprichard et al., 2005). In these studies, siRNAs targeting different regions of the HBV genome were used.
The genetic variation of viral genomes may lead to escape from the silencing effect, as reported for poliovirus (Gitlin et al., 2005), HIV-1 (Boden et al., 2003; Das et al., 2004), and HCV (Wilson and Richardson, 2005). This problem could be overcome by targeting alternative sites on viral genomes, as shown for HCV (Wilson and Richardson, 2005). Due to the genetic variability of HBV, a treatment with siRNAs faces the problem of naturally occurring or treatment-induced nucleotide substitutions in the HBV genomes. Thus, gene silencing strategies targeting multiple sites are warranted. Yang et al. (2002) showed that Escherichia coli RNase III can digest dsRNA efficiently into short pieces with the same end structures as siRNAs. These endoribonuclease-prepared siRNAs (esiRNAs) are able to target multiple sites within an mRNA, and have been verified to silence target mRNA efficiently and specifically (Calegari et al., 2002; Kittler et al., 2004; D. Yang et al., 2004; H. Yang et al., 2004; Zhu and Jiang, 2005). In the present study, the ability of esiRNA to inhibit HBV gene expression and replication was investigated. The esiRNAs targeting the HBV S and C genes were prepared by in vitro transcription and RNase III digestion. The inhibition of HBV gene expression and replication by esiRNA was demonstrated in transient cotransfection experiments and in a cell line with a stably integrated HBV genome. The effect of the esiRNA targeting the HBV S gene was compared further with that of chemically synthesized siRNAs.
Preparation of esiRNAs targeting the coding region of HBV surface antigen (HBsAg) and nucleocapsid (HBcAg)
The coding regions for HBsAg (nt 129-842) and HBcAg (nt 1901-2348) were amplified from an HBV genome of subtype ayw (GenBank accession no. U95551) and cloned into the pCR2.1 vector (Invitrogen, Karlsruhe, DE). The primers S6C, S7D, HcNCO, and Hc-149s used for PCR amplification are listed in Table 1. Clones with the inserts in both orientations with regard to the T7 promoter were selected and sequenced to verify that the construction was correct. EsiRNAs targeting the coding regions of HBsAg and HBcAg (SesiRNA and CesiRNA) were prepared using the Silencer siRNA Cocktail Kit (RNase III) (Ambion, Darmstadt, DE) according to the manufacturer's instructions. Briefly, single-strand RNAs were transcribed from the plasmids using T7 polymerase and then annealed to form double-strand RNA. EsiRNAs were finally generated by digestion of the purified dsRNA with RNase III at 37 °C for 1 h, and verified by electrophoresis on a 4% agarose gel. HCV-C5, a plasmid containing the HCV core region (nt 631-900) (Lu et al., 1995), was used to prepare HCVesiRNA as a scrambled esiRNA control.
Cell culture and transfection
BHK cells, HepG2 cells, and HepG2.2.15 cells (provided by Prof. G. Acs; HBV serotype ayw, genotype D; GenBank accession no. U95551) (Sells et al., 1987) were maintained in Eagle's minimal essential medium or RPMI 1640 medium, respectively, supplemented with 10% fetal bovine serum, 100 U/ml penicillin, and 100 µg/ml streptomycin. Cells were seeded in an 8-well chamber slide or a 24-well plate at about 60% confluence. After 24 h, cells were transfected with Lipofectamine 2000 (Invitrogen, Karlsruhe, DE) following the manufacturer's instructions. 0.2 µg or 0.5 µg of plasmid DNA and 0.5 µl or 2 µl of Lipofectamine 2000 were placed in each well of an 8-well chamber slide or a 24-well plate, respectively.
Construction of an HBV infectious clone and expression plasmids encoding HBsAg, HBeAg, and HBcAg
A replication-competent HBV construct, pHY106 + wta, was generated by inserting a full-length wild type HBV genome from pSM2 (kindly provided by Prof. Hans Will; genotype D, subtype ayw, GenBank accession no. V01460) into the pHY106 vector (H. Yang et al., 2004). The pHY106 vector contains a cytomegalovirus (CMV) promoter upstream of a short, recombinant HBV sequence that allows the in-frame insertion of a full-length HBV genome following SapI digestion. The CMV promoter upstream of the precore initiation site allows an efficient transcription of the 3.5-kb pregenomic RNA following transfection of liver cell lines. Two vectors, pHBeex and pHBcex, were constructed for the expression of HBeAg and HBcAg, respectively. The regions of the HBV genome (nt 1814-2451) and (nt 1901-2451) were amplified with primers HpreC-EV1, HBc-EV2 and HBc-EV1, HBc-EV2, respectively (Table 1), digested with EcoRV and XhoI, and inserted into the pcDNA3 vector via the EcoRV and XhoI sites. The expression plasmids encoding HBsAg 1056Sp (genotype D, subtype ayw), HK188 (genotype C, subtype adr), and 91-4696 (genotype A, subtype adw) were described by Ireland et al. (2000).
HBsAg and HBeAg chemiluminescent microparticle immunoassay (CMIA)
Levels of HBsAg and HBeAg in cell supernatants were determined by using the Architect system and HBsAg and HBeAg CMIA kits (Abbott Laboratories, Wiesbaden-Delkenheim, DE) according to the manufacturer's instructions.
Immunofluorescence (IF) staining for hepatitis B core antigen (HBcAg)
Hepatitis B core antigen was visualized by IF staining with specific antibodies. Transfected cells were cultured for 24 h, then fixed with 50% methanol in phosphate-buffered saline (PBS) and stained with a polyclonal rabbit anti-HBc antibody (DAKO, Hamburg, DE). Goat anti-rabbit immunoglobulin G-fluorescein isothiocyanate (Sigma, Munich, DE) was used as the secondary antibody. Staining was visualized under a fluorescence microscope (Nikon, Tokyo, JP) with an excitation wavelength of 490 nm.
Isolation and analysis of viral RNA
Total RNA was extracted from transfected cells with TRIzol reagent (Invitrogen, Karlsruhe, DE) according to the manufacturer's instructions. Northern blot analysis was carried out by the agarose-formaldehyde method. Briefly, 5 µg of total RNA per sample were separated on a 1% agarose-formaldehyde gel and blotted onto a Hybond-N+ nylon membrane (Amersham, Buckinghamshire, GB). HBV transcripts were detected using a 32P-labeled full-length HBV probe. Hybridization signals were visualized and analyzed by a Phospho-Imager (Cyclone, Packard Instrument).
Purification and analysis of HBV DNA from intracellular core particles
HBV replicative intermediates were purified from intracellular core particles according to the method described by Sterneck et al. (1998) with minor modification. Briefly, cells were washed in ice-cold PBS and lysed in 0.4 ml of lysis buffer containing 50 mM Tris-HCl, pH 7.4, 1 mM EDTA, 1% NP-40 at 4 °C for 15 min. Nuclei were pelleted by centrifugation. The supernatant was adjusted to 10 mM MgCl2 and treated with 100 µg/ml DNase I (Roche, Mannheim, DE) at 37 °C for 30 min. The reaction was stopped by the addition of EDTA to a final concentration of 25 mM. Proteins were then digested with 0.5 mg/ml proteinase K (Qiagen, Düsseldorf, DE) and 1% sodium dodecyl sulfate at 55 °C for 2 h. HBV nucleocapsid-associated DNA was purified by phenol/chloroform (1:1) extraction followed by isopropanol precipitation after adding 15 µg of tRNA and 1/10 volume of 3 M sodium acetate, pH 5.2. The isolated HBV DNA was subjected to agarose gel electrophoresis, followed by denaturation and Southern blotting. HBV DNA was detected by hybridization with a 32P-labeled full-length HBV probe. Hybridization signals were visualized and analyzed by a Phospho-Imager (Cyclone, Packard Instrument).
RT-PCR and real-time PCR
Two micrograms of total RNA per sample were reverse transcribed using Moloney Murine Leukemia Virus Reverse Transcriptase (M-MLV RT) (Promega, Mannheim, DE) and Oligo(dT) (Invitrogen, Karlsruhe, DE) as primer. The cDNA fragments of STAT1 and ISG15 were amplified using the primers listed in Table 1. Cycle parameters were (i) 1 cycle: 94 °C, 4 min; (ii) 30 cycles: 94 °C, 30 s; 55 °C, 30 s; 72 °C, 30 s; and (iii) 1 cycle: 72 °C, 10 min. PCR products were subjected to agarose gel electrophoresis and visualized by ethidium bromide staining. Quantitative PCR was carried out using Platinum SYBR Green qPCR SuperMix UDG (Invitrogen, Karlsruhe, DE) in a Roche LightCycler V.3. The PCR was performed with the following cycling parameters over 45 cycles: 95 °C for 5 s, 58 °C for 10 s, and 72 °C for 10 s. The specificity of the PCR products was verified by melting curve analysis and agarose gel electrophoresis.
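The authors do not state their quantification scheme for the SYBR Green data; purely as an illustration, one common approach is the 2^(-ΔΔCt) method with a housekeeping reference gene. The gene choice and Ct values below are hypothetical:

    def relative_expression(ct_target_treated, ct_ref_treated,
                            ct_target_control, ct_ref_control):
        # Fold change of a target transcript by the 2^(-ddCt) method,
        # assuming ~100% amplification efficiency for both assays.
        d_ct_treated = ct_target_treated - ct_ref_treated
        d_ct_control = ct_target_control - ct_ref_control
        return 2.0 ** (-(d_ct_treated - d_ct_control))

    # Hypothetical Ct values: an ISG target vs. a housekeeping
    # reference, esiRNA-treated vs. untreated HepG2.2.15 cells.
    fold = relative_expression(26.0, 18.0, 26.2, 18.1)
    print(f"fold change = {fold:.2f}")  # ~1.1: no induction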
Inhibition of the HBV gene expression by specific esiRNAs
To examine the inhibitory effect of esiRNAs on HBV gene expression, the expression plasmids encoding HBsAg and HBeAg were cotransfected with esiRNA into HepG2 cells. The expression levels of HBsAg and HBeAg in cell culture supernatants were determined 72 h later by CMIA. Cotransfection with SesiRNA and CesiRNA at a concentration of 100 nM reduced the expression levels of HBsAg and HBeAg to 20% and 10%, respectively, of the control without transfection of esiRNA (Fig. 1a and b). The inhibition of HBcAg expression by CesiRNA was verified by IF staining of cells transfected with the HBcAg expression plasmid with and without CesiRNA (Fig. 1c). The HBcAg expression in transfected cells was strongly reduced by CesiRNA.
Inhibition of HBV replication in cell culture by specific esiRNAs
To determine whether esiRNAs are able to inhibit HBV replication, HepG2 cells were transfected with pHY106 + wta in the presence of SesiRNA or CesiRNA. The expression of HBsAg and HBeAg in cell culture supernatants was reduced significantly by the esiRNAs (Fig. 2a). Decreases of 76.0% and 68.1% in the HBsAg levels in supernatants were measured on cotransfection with SesiRNA and CesiRNA, respectively. Likewise, SesiRNA and CesiRNA suppressed the expression levels of HBeAg to 20% and 22% of the control, respectively (Fig. 2a). The amount of HBV replicative intermediates in HepG2 cells was reduced by both esiRNAs in a dose-dependent manner (Fig. 2b). Only 10% of the HBV replication intermediates were detected in HepG2 cells treated with 100 nM of SesiRNA or CesiRNA. Northern blot analysis showed that the HBV transcripts were reduced to a level of about 40% of the control (Fig. 2c).
SesiRNA and CesiRNA were then examined in HepG2.2.15 cells carrying a stably integrated HBV dimer. This cell line produces HBV RNA and replication intermediates at a stable level. HepG2.2.15 cells were transfected with 50 nM or 100 nM of SesiRNA or CesiRNA and incubated further for 72 h. The treatment with esiRNAs did not reduce the HBsAg concentrations in culture supernatants (Fig. 3a). The HBeAg concentrations in culture supernatants were not changed by transfection with SesiRNA; CesiRNA was effective in suppressing the HBeAg level to about 50%. The HBV replication intermediates were reduced to 29% and 35% of the control by SesiRNA and CesiRNA at a concentration of 100 nM, respectively (Fig. 3b). Similarly, the level of HBV transcripts decreased to 33% and 49% of the control. Thus, in stably transfected cells, a single transfection with esiRNA had only a limited silencing effect on HBV transcripts.
Comparison of esiRNAs with synthetic siRNAs
A number of specific siRNAs have been shown to knock down the corresponding HBV transcripts efficiently in vitro and in vivo. Therefore, the abilities of esiRNA and synthetic siRNAs to inhibit HBV gene expression and viral replication were compared. Three synthetic siRNAs, siHBs1-3, targeting the coding region of HBsAg inhibited HBsAg expression with different effectiveness (Fig. 4a). However, the silencing of HBsAg expression by synthetic siRNAs was strongly impaired by nucleotide variations (Fig. 4b). A G to A substitution at position 17 of the target sequence reduced the ability of siHBs1 to inhibit the expression of HBsAg. Similarly, two nucleotide substitutions in the target sequence of siHBs2 abolished the silencing effect completely. In contrast, these mutations did not affect the silencing effect of SesiRNA (data not shown, see below).
The abilities of siHBs1 and SesiRNA to inhibit HBV gene expression and replication were compared in HepG2 cells by cotransfection with pHY106 + wta and 100 nM of SesiRNA or 12.5 nM of siHBs1. The production of HBsAg and HBeAg was suppressed strongly by both SesiRNA and siHBs1 (Fig. 4c). The level of HBV replicative intermediates was reduced to 20% of the control by SesiRNA vs. 2% by siHBs1 (Fig. 4d). Further, siHBs1 led to a significant decrease of the amount of HBV transcripts, to 10% of the control (Fig. 4e). Therefore, within the range of concentrations used here, well-selected synthetic siRNAs may be more efficient for gene silencing than esiRNAs.
It is suggested that esiRNAs are directed to multiple sites of a target sequence. Therefore, the ability of SesiRNA to inhibit the expression of heterologous HBsAg sequences was examined. Three HBsAg expression plasmids encoding different HBsAg geno/subtypes were cotransfected with SesiRNA. The determination of the HBsAg expression levels in supernatants showed that cotransfection with SesiRNA reduced the expression of HBsAg equally, regardless of the subtype (Fig. 5).
EsiRNA did not induce the expression of interferon-stimulated genes (ISGs)
Large dsRNA with a length over 35 bp might induce an interferon response in cells. Thus, it was necessary to examine whether the use of esiRNAs induces ISGs in HepG2.2.15 cells, in order to exclude off-target effects. RT-PCR analysis was carried out to monitor the induction of STAT1 and ISG15 in cells. The expression of STAT1 and ISG15 in HepG2.2.15 cells was increased after a 24-h treatment with 100 units of IFN-α. The transfection of HepG2.2.15 cells with SesiRNA or CesiRNA did not increase the expression of STAT1 and ISG15 (Fig. 6). Thus, esiRNA was not able to activate the interferon response in hepatoma cells.
Discussion
In this study, the ability of specific esiRNAs to silence HBV gene expression and replication was examined in detail. The esiRNAs directed against the coding sequences of HBsAg and HBcAg were capable of inhibiting HBV gene expression and viral replication both in a transient cotransfection system and in HepG2.2.15 cells. The inhibitory effect was specific and dose-dependent. Theoretically, the esiRNA targeting HBsAg should be more effective than its counterpart targeting HBcAg: since the HBV S region is shared by the major viral transcripts, all these RNA species are suppressed by SesiRNA. However, CesiRNA was as effective as, or even more effective than, SesiRNA. CesiRNA likely inhibited HBsAg expression indirectly, through the reduction of HBV replication or by other unknown mechanisms. Interestingly, both SesiRNA and CesiRNA effectively inhibited HBV replication in HepG2.2.15 cells, while HBeAg expression was reduced only by CesiRNA. One explanation is that only a small amount of HBsAg transcripts is needed for the synthesis of HBsAg.
These data demonstrate that it is advantageous to use esiRNAs against genetically heterogeneous target sequences. The ability to use siRNAs targeted simultaneously at different regions of a viral genome may increase the efficiency of the treatment and, in addition, will prevent the appearance of resistant mutants. Eight genotypes, A-H, of HBV can be distinguished. Each genotype differs from the others by more than 8% at the nucleotide level. Phylogenetic analysis has shown that the eight genotypes can be subdivided further into genotypical subtypes (Schaefer, 2005). It is difficult to identify a highly effective siRNA targeting a 19-21 nt sequence conserved in all these genotypes and genotypical subtypes. EsiRNAs generated from long dsRNA are mixtures of siRNAs with diverse specificities, and are thus able to target the whole mRNA sequence and retain the ability for gene silencing. The results of the current study showed that the esiRNA generated from HBV subtype ayw is applicable to the different HBsAg subtypes tested. In contrast, a single nucleotide mutation in the target sequence can abrogate the silencing effect of chemically synthesized siRNAs. Similarly, Weinberg et al. (2007) used a vector to produce a Dicer substrate that could generate multiple siRNAs. This approach would have the advantage of limiting escape and of targeting a range of sequences found in different viral genotypes or quasispecies.
siRNA-specific features such as low G/C content, a bias towards low internal stability at the sense strand 3′-terminus, lack of inverted repeats, and sense strand base preference are likely to contribute to an efficient RNAi process (Reynolds et al., 2004). According to these criteria, there would be only a limited number of optional sequences for RNAi targeted on a given mRNA. Processing of long dsRNA generates a variety of small dsRNA molecules with different potencies for RNA interference. Thus, it is understandable that esiRNAs would be less effective in comparison with a defined, optimized synthetic siRNA within the range of concentrations used. However, synthetic siRNAs may have different abilities for gene silencing, and some of them may thus be less efficient than esiRNAs (Xuan et al., 2006).
Recently, several in vivo studies based on the use of cationic liposomes, polyplexes, and chemically modified siRNAs showed improved effects of siRNA (Aigner, 2006; Morrissey et al., 2005a,b). Using a liver-specific apo A-I-mediated siRNA delivery method, Kim et al. (2007) showed that administration of synthetic siRNAs significantly reduced HBV protein expression, with the advantages of effectiveness at low doses and a long-term effect. This unique approach to siRNA delivery creates a foundation for the development of a new class of promising therapeutic methods against hepatitis viruses. The in vivo use of esiRNA needs to be investigated further using a liver-specific siRNA delivery method. | 2018-04-03T00:11:01.691Z | 2008-04-18T00:00:00.000 | {
"year": 2008,
"sha1": "82f50173f3b5c6a7cc1e1a4cc731a4149001f354",
"oa_license": null,
"oa_url": "https://doi.org/10.1016/j.jviromet.2008.02.008",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "6ab348c96ffbf636e820649e7cc23be69a3d43ae",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
9386465 | pes2o/s2orc | v3-fos-license | Bioactive constituents and medicinal importance of genus Alnus
INTRODUCTION
Betulaceae, or the Birch family, includes six genera of deciduous nut-bearing trees and shrubs, including the birches, alders, hazels, hornbeams, and hop-hornbeams, numbering about 130 species. These are mostly natives of the temperate Northern Hemisphere, with a few species reaching the Southern Hemisphere in the Andes in South America. Alnus (alders) is an important genus belonging to Betulaceae which comprises 30 species worldwide. [1,2] Almost all plants of this genus have been traditionally used as folk medicine in the Ayurveda, Unani, and Chinese medical systems.
OBJECTIVES OF THE REVIEW
Alnus is one of the genera having potential medicinal value. The plants of this genus have been found active against many life-threatening disorders like hepatitis, HIV-1 viral replication, and cancer. The aim of the present review is to delineate the various plants with their chemical constituents and biological activities. Various traditional uses of some common species have also been summarized. This information can attract the attention of scientists and herbalists to this genus, and consequently this database might play a major role in future research.
TRADITIONAL USES OF ALNUS SPECIES
The members of the genus Alnus are well known for their traditional medicinal values. These have been used for the treatment of various diseases including cancer, and as an alterative, astringent, cathartic, emetic, febrifuge, galactogogue, hemostatic, parasiticide, skin tonic, vermifuge, etc. Alnus japonica is a popular folk medicine in Korea for cancer and hepatitis. [3] The bark of Alnus glutinosa is alterative, astringent, cathartic, febrifuge, tonic, and useful in mouth and throat inflammations; the vinegar extract of the inner bark of the plant produces a useful wash to treat lice and a range of skin problems such as scabies and scabs. [4-6] The leaf, roots, and bark of A. nepalensis are used in dysentery, stomach ache, and diarrhea in the Indian system of medicine (Ayurveda). [7] A decoction of the root of A. nepalensis is prescribed to treat diarrhea, and a paste from the leaves is applied on cuts and wounds as a hemostatic. [8] The mixture of leaves of Alnus jorullensis and branches of Polylepis racemosa R. et P. is used to treat inflammation of the uterus, uterine cancer, and rheumatism. [9] The bark of Alnus hirsuta is used in Korean and Chinese traditional medicine as a remedy for fever, hemorrhage, alcoholism, and diarrhea. [10,11] The decoction of A. glutinosa bark is used to treat swelling, inflammation, and rheumatism. [12] It has also been used as an astringent, bitter, emetic, and hemostatic, and for the treatment of sore throat and pharyngitis. [13,14] Contemporary indigenous healers used the bark of Alnus rubra for various medicinal teas. [15,16]
CHEMICAL CONSTITUENTS OF GENUS ALNUS
The plants of the genus Alnus contain various types of plant secondary metabolites including terpenoids, flavonoids, diarylheptanoids, phenols, steroids, tannins, and many others. The plants and their chemical constituents are summarized below, whereas the chemical structures of various compounds isolated from different parts of the genus Alnus are drawn in Figures 1 to 8.
CONCLUSION
The genus Alnus is widespread all over the world, and many species of this genus have been used as traditional herbal medicines.
Figure 1: Chemical structures of compounds isolated from genus Alnus
Figure 2: Chemical structures of compounds isolated from genus Alnus
Figure 3: Chemical structures of compounds isolated from genus Alnus | 2018-04-03T05:12:36.954Z | 2011-07-01T00:00:00.000 | {
"year": 2011,
"sha1": "84f4659f11a0ea0c26709c879171e2a0345dcd90",
"oa_license": "CCBYNCSA",
"oa_url": "https://europepmc.org/articles/pmc3263052",
"oa_status": "GREEN",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "59f6f6435026fccaf6c82236689d2b0673d75c89",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
149196830 | pes2o/s2orc | v3-fos-license | Pedagogic frailty: A concept analysis
This paper adopts the approach of a map-enhanced concept analysis of pedagogic frailty, with the intention of increasing clarity of purpose of the model and promoting more explicit discussion of how the term could be used positively within the educational research literature. Examples given here show that commonly used expressions such as 'teaching excellence' and 'research-led teaching' contain so much variation in meaning as to be misleading in their use. The maps offered show different perspectives on aspects of pedagogic frailty, such as those that may be perceived by an external examiner to a programme. The recurrence of frailty at varying levels of resolution, and at different times within an evolving Higher Education context, means that the management of frailty and resilience should be embedded as a constant, dynamic activity within an institution, rather than a single-shot intervention.
Introduction
Since the introduction of the term 'pedagogic frailty' (Kinchin, 2015), the model has been explored by academics in practice across a range of disciplines as a tool to promote reflection upon teaching (e.g. Kinchin et al., 2016; Kinchin & Francis, 2017; Kinchin & Wiley, 2017). The model has shown promise as a tool to initiate dialogue about the interacting elements of the academic context that influence teaching. In addition, pedagogic frailty has been explored by a range of international academics from a variety of theoretical and research perspectives. This has helped to gauge how the pedagogic frailty model can interact with, and possibly integrate, other perspectives on teaching and learning in higher education. As examination of the concept and the model has developed rapidly, it seems appropriate now to collate these observations, and preliminary data from on-going studies (Kinchin & Winstone, 2018), to offer a more refined analysis of the concept and to facilitate and maintain its continuing development to support the application of pedagogic frailty to professional development and the enhancement of university teaching.
This conceptual paper employs the overall method of concept analysis as developed by Walker and Avant (2014) as a tool in the development of this work. Concept analysis has been shown to be a valuable approach, particularly in the nursing education literature (e.g. Baldwin, 2008). There the approach has been used to explore various complex concepts in clinical practice (e.g. Bookey-Bassett, Markle-Reid, Mckey, & Akhtar-Danesh, 2017; Chabeli, Malesela, & Nolte, 2017; Garside & Nhemachena, 2013; Liu, Avant, Aungsuroch, Zhang, & Jiang, 2014; Phillips-Salimi, Haase, & Kooken, 2012), as well as concepts in clinical education that have a wider application to higher education theory, such as critical thinking (Von Colln-Appling & Giuliano, 2017), and, importantly here, concepts that are of direct relevance to the elements of pedagogic frailty. This includes a concept analysis of stress (Goodnite, 2014), a concept that has a central role in pedagogic frailty. A recent synthesis of concept analyses is offered by Fitzpatrick and McCarthy (2016).
Frailty and resilience
Complementary to the idea of pedagogic frailty is the concept of resilience (e.g. Winstone, 2017). The importance of resilience as a factor in professional practice is reflected in the way it has been subjected to concept analysis by a number of authors (e.g. Garcia-Dia, DiNapoli, Garcia-Ona, Jakubowski, & O'Flaherty, 2013; Gillespie, Chaboyer, & Wallis, 2007; Hicks & Conner, 2014; Windle, 2011), and this has helped to refine the term and its use in specific professional contexts so that it may be used as a tool in theory development. In their meta-analysis of the concept analyses of resilience, Caldeira and Timmins (2016) conclude that resilience is a fundamental concept that is closely related to health and wellbeing. Bhamra, Dani, and Burnard (2011) have stated that one area that requires greater attention for advancing resilience research is the relationship between human and organizational resilience. This relationship is key to appreciating the implications of pedagogic frailty for a university. Additionally, whilst there is a growing literature on academic resilience among students (e.g. Morales, 2008; Milne, Creedy, & West, 2016), the literature on academic resilience among university teachers is conspicuous by its absence.
The dynamic relationship between resilience and frailty is one that needs to be managed carefully within an institution and requires appropriate processes of system maintenance that support the alignment of professional values across academic staff, academic developers and academic managers. Frailty and resilience are therefore two sides of the same coin and need to be considered together.
Whilst resilience is a concept that is important in the clinical context, it also has resonance with other disciplines such as ecology (Mori, 2016), linguistics (Goldin-Meadow, 2014) and economics (Bellini, Grillo, Lazzeri, & Pasquinelli, 2017). Whilst this familiarity with the term increases the possibility that colleagues from various disciplines will be able to find a route into engagement with the term, it also increases the possibility that colleagues will develop idiosyncratic, discipline-specific views of the concept as it applies to teaching. It has been suggested that repurposing disciplinary concepts to help engage with the scholarship of teaching may provide a helpful mechanism to support reflection on teaching (Kinchin & Francis, 2017).
I have used the analogy of 'clinical frailty' in the development of the pedagogic frailty model, and just as in the clinical context, where frailty has been shown to be a predictor of negative health outcomes (Vermeiren et al., 2016), I argue that pedagogic frailty is a predictor of negative teacher-development outcomes (Kinchin et al., 2016). Though the term frailty has long been used in the clinical environment, and there is tacit agreement on how it is used within this professional context, it does not have an agreed definition within the clinical literature (Conroy & Elliott, 2017). As an emergent concept driven by analogy with the clinical literature, the more recently coined term pedagogic frailty is less well established in its disciplinary literature. This concept analysis is presented with the intention of increasing clarity of purpose and promoting more explicit (rather than tacit) agreement on how the term is used within the educational research literature. This is not with the intention of trying to close down dialogue and debate, which are essential for the further evolution of the model; rather, it is to provide a clearer starting point for on-going discussion. Without such clarification, it is possible that the term pedagogic frailty will attract a range of meanings (and a range of associated practices) that might result in researchers talking past each other rather than talking to each other.
Concept mapping and higher order thinking
The concept of pedagogic frailty came into view as part of a wider knowledge structures perspective on teaching and learning at university, facilitated by the application of concept mapping (Kinchin, 2016a). The visualisation of the pedagogic frailty model was therefore dependent upon the use of concept maps, and it was essential that the concept maps that guided the evolution of this model were of the highest possible quality in order to yield rich and informative data. The quality of concept maps has been a focus for discussion in the literature and Cañas, Novak, and Reiska (2015) considered the qualities that contributed to the drawing of 'excellent' maps, rather than maps that are simply 'correct' or 'good'.
Within the literature on concept mapping there has been a tendency among some researchers to reduce the rich complexity of a concept map to a simple numerical score, typically for ease of comparison and/or as a way of measuring the effects of certain classroom interventions. I argue here that the higher numerical value that such scoring systems give to larger maps (those that include greater numbers of concepts) is not necessarily indicative of the higher order thinking skills (HOTS) that are associated with meaningful learning, but rather indicates the accumulation of information that is required for rote learning and factual recall, i.e. lower order thinking skills (LOTS). When developing expertise in concept mapping, it seems that the ability to edit a map, to decide which information to exclude, and to choose which technical terms to apply in linking phrases to increase the explanatory power of the map are more indicative of HOTS; these abilities draw on the synthesising, evaluating and knowledge-creating skills found in Bloom's Taxonomy. This would explain the observation that expert maps are often smaller than those constructed by disciplinary novices. This is of significance to studies of pedagogic frailty that require participants to produce succinct, excellent maps to act as prompts for their professional narrative.
Within the current work on pedagogic frailty (e.g. Kinchin et al., 2016; Kinchin & Francis, 2017; Kinchin & Wiley, 2017) those who have been interviewed are subject experts but novice mappers. The point of this work was not to develop the interviewees' concept mapping skills, but to produce concise, explanatory concept maps that would represent their perceptions of the dimensions within the frailty model. If the interviewees had been left to produce maps on their own, experience has shown that it is likely they would have produced extensive maps (to include everything that might be of interest) and would have used simple linking phrases to join the concepts together. However, by employing map-mediated interviews (Kandiko & Kinchin, 2012; 2013b), where the interviewer is an experienced concept mapper, the process is able to guide the interviewee to produce better quality concept maps. This is done not by suggesting content to add, but by interrogating the map to ask the mapper if they could produce a link with greater explanatory power, and by letting them know that it was acceptable, for example, not to include all the prompting concept labels. Some of the mappers also needed confirmation that it was acceptable to stop when the map had expressed everything they felt was important. In this way, the interviews yielded excellent maps, analogous to collecting a rich interview transcript. The concept maps were intended to be concise, clear, explanatory, and balanced so that they could act as effective prompts for the interviewee to use in framing their developing narrative about their teaching. This dependence on an 'expert mapper' represents a potential bottleneck to the wider dissemination of the process, one that has already been recognised (see Aguiar & Correia, 2017).
It has to be remembered that the map has the function of prompting dialogue and its production is not a central aim of the process of exploring frailty and resilience. The map is the artefact that colleagues will use as a prompt or a frame for their own professional narrative about their teaching. As the participant may be constructing his/her narrative over a period of months after the initial interview, it is crucial that the map has high explanatory power and is not cluttered by a lot of unnecessary material that may obscure the main ideas.
It might be assumed that smaller concept maps take less time to construct than larger maps; this has not been found to be the case. The map-mediated interviews undertaken to chart the elements of pedagogic frailty have typically taken about two hours each. During the interview, the interviewee is often able to identify the concepts they want to include within the map relatively quickly, but it then takes time to arrange and link the concepts in a way that satisfies the interviewee.
This extra time spent on seeking clarity and increasing explanatory power of the maps has not always been explicitly included in published research protocols where subjects have been left to develop their own maps without dialogue or feedback. In such cases, we feel it is likely that mappers never reach the part of the map development curve described by Cañas, Reiska, and Novak (2016), where the content of the map is being refined and edited and the map is being reduced in size. So while extensive maps may include lots of content, this may be indicative of LOTS (Lower Order Thinking Skills). Those maps that have been subject to revision and refinement may be more likely to represent the underpinning HOTS (Higher Order Thinking Skills).
The mapping of academic perceptions of the dimensions of frailty in the manner described by Kinchin et al. (2016) and Kinchin and Francis (2017) is not intended to trace the outcomes against a pre-determined fixed route with which to judge colleagues, but rather to act experimentally in the manner supported by Deleuze and Guattari (2004, p. 13) when they suggest 'the map is open and connectable in all of its dimensions: it is detachable, reversible, susceptible to constant modification. It can be … reworked by an individual, group or social formation'. Indeed, it will be seen that the development of academic reflections upon frailty and resilience will map a path that is entangled, nonlinear and iterative as the academic travels in 'irregular ways through the landscapes of their experience', and 'bring those landscapes into relation with each other' (Taylor & Harris-Evans, 2016, p. 3). As such the act of scoring colleagues' concept maps adds nothing positive to the process as each participant will have unique starting points and be heading for unique destinations. I, therefore, suggest that scoring concept maps is inappropriate within studies of pedagogical frailty as it would confer false relative values to the views of participants.
The concept analysis
The general format of a concept analysis, as specified by Walker and Avant (2014), is applied here to the structure of this paper. It has been developed from the standard concept analysis by including concept maps of key dimensions to illustrate the connectivity of concepts and emphasise possible relationships between dimensions of the model. These concept maps offer foci for further discussion. The application of concept mapping (Novak, 2010) to enhance concept analysis methodology has been explored by All and Huycke (2007). Indeed, a number of the concept analyses presented by Fitzpatrick and McCarthy (2016) employ concept maps or similar graphics to summarise the connections between antecedents, attributes and consequences of the concept. The use of visual tools to highlight the dynamic relationships between the attributes of the concept resonates with the origins of pedagogic frailty as part of a wider knowledge structures perspective on teaching and learning (Kinchin, 2016a), and emphasises that the units of analysis within the pedagogic frailty model are the connections between elements that define the concepts (Kinchin, 2016b).
A concept analysis requires a 'determination of the concept' in question. The overall concept of pedagogic frailty has been taken from a clinical analogy as has been stated clearly by Kinchin et al. (2016) and Kinchin and Winstone (2017). The concept has been defined in terms of the quality of interactions between elements of the model (regulative discourse; discipline and pedagogy; research-teaching nexus; locus of control) and the observable outcomes relate to conservative approaches to pedagogy and teacher burn-out (e.g. Bailey, 2014;Howard & Johnston, 2004).
The intended use of the concept is to enable dialogue about teaching so that academics might be able to purposefully reflect on their teaching within a framework that will also allow them to engage in dialogue with colleagues from other disciplines. The defining attributes of the model, as explored by individual academics can be considered on various levels:
The content of each dimension. Which concepts they include in their maps and which, if any, is seen as the dominant concept. And importantly, which concepts are omitted.
The structure of each dimension. If concept maps are strongly linear they tend to be indicative of routine expertise, whereas highly integrated networks are more likely to indicate a level of adaptive expertise (Salmon & Kelly, 2015), and more likely to connect with the content of the other dimensions.
The consistency across dimensions (i.e. whether there is internal conflict within an individual profile, where propositions within one dimension seem to contradict or be in conflict with propositions in other dimensions).
The level of language that is used, particularly in the linking phrases included in a map.
However, even when an individual academic possesses a profile that exhibits appropriate content, integrated structure, strong consistency and explanatory language, the important aspect is how that profile fits within the network of other profiles. If everyone else in the department holds a conflicting sense of the teaching discourse, the research-teaching nexus and the level of regulation, then there is potential for frailty. This may indicate the need to find a balance between 'agency' (where an individual has a strong self-identity and the ability to direct their own professional activity) and 'frailty' (where that individual's views conflict with other views in the institution, including peers or centralised management).
In trying to identify model cases, the exemplars offered in Table 1 may help the discussion. Profiles A and B (in Table 1) each exhibit internal consistency (i.e. there is little tension apparent between the four dimensions). Therefore, each of these individuals may exhibit agency if situated in an appropriate context, in that they find themselves "empowered to the extent that they understand the choices they want to make, advocate their own rights, take control of their own destiny and demonstrate the competency necessary for acting in their own best interest" (O'Hair et al., 2003, p. 198). However, problems would surface where these contrasting profiles are held by academics working alongside each other within the same department, or where these profiles were each dominant in two departments within the same institution. In such instances there is potential for conflict between these profiles across all the dimensions; this might be seen as a model case of pedagogic frailty (Walker & Avant, 2014). Differences between A and B are apparent in RD (efficiency vs. innovation), D+P (authenticity of teaching approaches), RTN (the degree of integration or separation of research and teaching) and LOC (the proximity of and engagement with the locus of control). Where profiles A and B are in direct conflict with each other, this might represent an extreme case of frailty. In practice, however, individual profiles are much more variable and idiosyncratic than those portrayed in Table 1. Where everyone within a department was identified as profile A or B, there would be no indications of frailty, and the tendency would be towards resilience. The process of concept analysis requires the identification of additional cases that might be seen as borderline, related, contrary or even illegitimate (Walker & Avant, 2014). Borderline cases are those that contain most, but not all, of the defining attributes of the concept. This may occur where colleagues with different roles in the university are engaged in only some of the aspects that might be described across the dimensions of the model. So a researcher in a biochemistry department may have only limited interaction with an administrator in the politics department; their interactions are not likely to influence the levels of frailty or resilience across the campus.
Contrary or illegitimate cases of pedagogic frailty might be considered where the term is used inappropriately or out of context. For example, if the term 'frailty' were applied to a teacher who is struggling to cope because they lack the basic skills of classroom management, this does not lie within the scope of pedagogic frailty because, as stated, the term is not to be used to describe an individual and his or her level of competence; the term frailty refers to the links within the wider system. However, if a department includes novice teachers who are struggling with the practicalities of classroom teaching, and they are ignored and unsupported by more experienced colleagues who regard them as 'expendable departmental assets' who cover the teaching and so allow the experienced staff to get on with their research, then there is significant potential for pedagogic frailty. Conversely, if a department has robust systems of academic development and peer support already in place, then the inclusion of novice teachers within the team may actually have the opposite effect and increase the level of organisational resilience, as the act of mentoring may increase the level of reflective practice among all staff members, novice and expert alike.
An additional illegitimate case may arise where employees who have no impact upon classroom teaching practice are included in the assessment of an institutional profile. For example, research assistants employed on short term contracts to work on particular research projects, and who have little interaction with the activities on campus (possibly working at remote research stations) and paid for by external funding might have little influence on undergraduate teaching. Their views, as non-participants, might therefore be seen to be of little direct relevance to the student experience.
The following examples show how each of the dimensions of the pedagogic frailty model may be mapped by academics, and illustrate cases showing the linkages that are possible with other dimensions and that have the potential to increase the tendency towards pedagogic frailty or resilience.
The regulative discourse
The dominance of discussions on short-term aspects of the Instructional Discourse (e.g. the mechanics of teaching that considers timetabling, staffing, budgets, feedback and assessment practices) means that the underpinning aspects of the Regulative Discourse are often presumed to be in alignment within an institution. Clearly, colleagues do not have time to re-assert their teaching philosophy or their beliefs about teaching every time there is a meeting, but if these underpinning aspects are never explored, never shared and never made explicit then the gap will be filled by assumptions that may or may not be correct.
The concept maps of regulative discourse within the pedagogic frailty model that have been published so far have concentrated on individual academics and their personal perspectives. The map in Fig. 1 was produced by an external examiner, someone who has an overview of a programme without being directly involved in the teaching. This perspective might provide a wider view of the ways in which the academic discourse is conducted in a particular context.
The map in Fig. 1 indicates a clear split between the regulative discourse and the instructional discourse. The focus on the instructional discourse is directed by the institution rather than by the department in which the programme is being taught, with paperwork demanding comments about concrete activities within the programme. This is with the intention of assuring quality of the programme, relative to other programmes in the institution and relative to similar programmes in other institutions. The focus on the instructional discourse makes this process easier, as the elements of this discourse are more tangible and 'assessable'. The assumptions about the regulative discourse are carried over into the selection of external examiners, usually from institutions that are considered to be similar, a tacit acknowledgement that there would be overlap in the regulative discourses of these institutions.
Fig. 1. A concept map of the role of the external examiner in the context of Bernstein's regulative and instructional discourses (adapted from Kinchin, Kingsbury, & Buhmann, 2017).
This, it is anticipated, would be a typical case, given that the procedures for external examination in UK universities are similar from one institution to the next.
Indeed, for an institution to step outside the norm for such activities would present problems of equity across the higher education sector. As such, the conservative maintenance of the status quo seems to be one of the aims of the process.
The problem with this approach is that it does nothing to encourage an examination or reassessment of the Regulative Discourse within the institution. In terms of frailty/resilience, the maintenance of the system will be seen as helping to maintain a level of resilience across the sector, even though this might also be seen to be promoting a routinization of expertise among individuals within the system, rather than a more questioning level of adaptive expertise. The numerous links between the Instructional Discourse and the students (whose voice carries considerable power in the UK system) also provide elements that students can comment on from their experience as learners. This is another factor that may help to keep the focus on the Instructional Discourse rather than the Regulative Discourse.
Pedagogy and discipline
A deep appreciation of the discipline is required of a university teacher in order to be able not just to teach the subject from the textbook, but also to embody the discipline (e.g. Hay, Weller, & Ashton, 2015). This understanding of the subject and its structure allows the teacher to arrange the content in such a way that it can enhance student learning. An example is shown in Fig. 2.
Fig. 2. A concept map to show the role of an integrating disciplinary concept (psychological literacy) in making links between a discipline and its pedagogy (adapted from …).
If we consider the chain within the map in Fig. 2 in isolation (without links to the superordinate concept, 'psychological literacy'), we are left with a linear structure (often problematic in itself) that ends with the proposition 'teaching raises concerns'. This makes the teaching of this subject matter problematic, for students and teachers alike. The inclusion of the integrating concept 'psychological literacy' fundamentally changes the image. Now we can see why engagement with sensitive issues is necessary and why the concerns that are raised are an integral part of the subject.
If some teachers within a department lack the level of understanding of their discipline that others possess, and so cannot identify integrating concepts in their teaching, the consequence will be different perceptions of what has to be taught, different perceptions of why it might be difficult for students, and probably different perceptions of how best to teach it. Exploration of the discipline is therefore often a good starting point for teachers who wish to investigate the scholarship of teaching (Kinchin, 2017).
Research-teaching nexus
The research-teaching nexus has been discussed widely in the literature, and has been seen to be an important area to focus on when considering pedagogic frailty (see Hosein, 2017). Terms such as research-led teaching, research-informed teaching and research-rich teaching are to be found within the literature and are terms that are often used by universities on their web sites to describe their own teaching philosophies. The differences between these terms and the ways in which they are used can mask underlying differences in what is meant, even within a single academic department (Kandiko & Kinchin, 2013a). The concept maps in Fig. 3 emphasize the need to analyze what colleagues really mean when they use the term research-led teaching.
Structurally, the two maps in Fig. 3 are almost identical. Only on close examination of the content of the maps do we start to realise that these two academics have completely different conceptions of the research-teaching nexus. The top map shows its author to consider research as a product that generates content to be taught to, and passively consumed by, students. In contrast, the author of the lower map considers research to be a process in which students can be actively engaged. In pedagogic terms, these two views of the research-teaching nexus are worlds apart, with the students getting very different experiences.
Locus of control
The locus of control refers to the site where the rules and regulations that affect teaching practices are developed. It is clear that whilst some academics consider regulation to provide a straitjacket that restricts their ability to decide how and what to teach, others see regulation as liberating: "if someone else has to worry about the rules and regulations, I am free to concentrate on my subject". Similarly, while some academics want to be involved in decision-making, helping to direct the institution, others want to be as far removed as possible from 'bureaucrats and bean-counters'. The danger of removing oneself from the decision-making bodies of the university is that the university may move in a direction that does not fit with an individual academic's aspirations. When institutional goals and individual goals are at odds with each other, there is potential for pedagogic frailty. Where institutional goals and individual goals are aligned through shared values, there is greater potential for resilience.
In part, this can be related to the perceived distance between decision-making bodies of the university and the individual academic. Where decision-making is undertaken centrally there may be less opportunity for the individual to influence the outcome. The individual is likely to be closest to the point of decision-making when leadership is 'distributed' among the experts that are found across the university, rather than centralised within a team of leadership experts. The various tensions that are associated with the degree of centralisation of academic leadership are summarised in Fig. 4.
Whether academic leadership is centralised or distributed there are still various regulatory bodies that have to be considered when devising structures to support teaching. This is often complicated when in addition to the university management, professional bodies also influence the ways in which programmes are structured and teaching and assessment are organised. Some academics feel that they have two masters so that autonomy or independence is even more of a balancing act. The idea of 'pedagogic independence' has been described as illusory by Brookfield (2017) who goes on to describe how: I am alone while never being alone. By this I mean that I am physically alone in the classroom in the sense that I am usually teaching solo, either face-to-face or online. Yet my actions are always embedded in a web of networks that shape my decisions. So my room is symbolically stacked with holographic images of the multiple stakeholders whose agendas and priorities influence very directly the micro-decisions I constantly make as a teacher.
This was recently articulated to me by a student whose teaching in the clinical sciences has to meet the standards dictated by the Nursing and Midwifery Council (NMC), the Health and Care Professions Council (HCPC), the Royal College of Midwives (RCM), the Higher Education Academy (HEA) and the Quality Assurance Agency (QAA). Clearly, some colleagues have to navigate a path between the concerns of multiple masters.
Antecedents
The antecedents for pedagogic frailty arise from the tensions resulting from the conflicting agendas of the numerous stakeholders (individuals and agencies) who are engaged with higher education. These tensions arise from various fundamental questions about the role of the university and, for example, whether it is:
A critical commentator on society and a site for innovation for a broad 'social good', or an agent in the economic and political machinery of government.
A preparation for employment or a place to enjoy learning.
A place for research or a place for teaching.
For these and other questions there is no single response that would be accepted universally. Whilst these questions are expressed here as oppositional binaries, I acknowledge that they are not simple either/or questions and do not typically generate yes/no responses. They are complex issues that often require compromise. Professional role differences and disciplinary differences among staff will increase the diversity of responses that academics give, while the global nature of higher education and academic research mean that no single, short-term national perspective may be seen to provide 'the solution'. These tensions will be seen by some academics as providing excitement, dynamism and challenge to their role. Others will see them as causing problems and upsetting the status quo. This fluctuating environment and the various perceptions of its components provides the elements that contribute to pedagogic frailty.
Consequences
It is important to note that pedagogic frailty is not something that can be 'cracked' once by an institution and then ignored. The environment in which academics function is dynamic. Elements of higher education are constantly evolving and so the academic (and the institution) has to parallel this evolution within their own professional development. In addition, it is clear that higher education is a global industry and many academics are likely to move across international borders in the course of their careers. The movement of academics and the continuous change experienced by universities means that pedagogic frailty is likely to be a recurring theme within an individual's career (Lygo-Baker, 2017). Frailty is therefore not something to be overcome as much as something to be managed over time.
In conclusion
The commonplace use of terms such as 'teaching excellence' and 'research-led teaching' is misleading, as these terms suggest a uniformity of purpose and understanding across the higher education sector that is not justified (e.g. Charles, 2017). Probing beneath these terms to see how ideas interact and how concepts are interconnected reveals an array of understandings that may be conflicting and contradictory (e.g. Hosein, 2017). The result is that different authors are using the same terms to mean different things. As the term pedagogic frailty is still a new addition to the higher education lexicon, it is appropriate at this time to attempt to clarify what is meant, to avoid the misuse of the term and the confusion this can generate. Although I am sure that colleagues will develop their own ideas of pedagogic frailty and will cultivate new methods to consider its effects, I hope that this concept analysis will reduce the likelihood of the term being used in conflicting and contradictory ways that might hinder its application to the development of teaching quality.
Fourier-based interpolation bias prediction in digital image correlation
Abstract: Based on the Fourier method, this paper deduces analytic formulae for interpolation bias in digital image correlation, explains the well-known sinusoidal-shaped curves of interpolation bias, and introduces the concept of interpolation bias kernel, which characterizes the frequency response of the interpolation bias and thus provides a measure of the subset matching quality of the interpolation algorithm. The interpolation bias kernel attributes the interpolation bias to aliasing effect of interpolation and indicates that high-frequency components are the major source of interpolation bias. Based on our theoretical results, a simple and effective interpolation bias prediction approach, which exploits the speckle spectrum and the interpolation transfer function, is proposed. Significant acceleration is attained, the effect of subset size is analyzed, and both numerical simulations and experimental results are found to agree with theoretical predictions. During the experiment, a novel experimental translation technique was developed that implements subpixel translation of a captured image through integer pixel translation on a computer screen. Owing to this remarkable technique, the influences of mechanical error and out-of-plane motion are eliminated, and complete interpolation bias curves as accurate as 0.01 pixel are attained by subpixel translation experiments.
Introduction
Digital image correlation has evolved into a reliable and flexible full-field optical metrology [1][2][3], and this technique has found numerous and ever-increasing applications in scientific and engineering fields [4][5][6][7][8][9]. The combination of digital image correlation with other techniques such as microscopy, holography, and fringe projection has further extended the scope of its application and highlighted its potential [10][11][12].
Digital image correlation retrieves full-field deformation by matching subsets in reference and target images. The displacement and strain errors of digital image correlation were first reported by Bruck et al. [13]. A detailed evaluation presented by Sutton et al. indicated that the displacement measurement error of digital image correlation is approximately 0.01 pixel [14]. This metrological performance is influenced by many factors [15], such as image noise [16], shape function [17], correlation criterion [18], optimization algorithm [19], and interpolation accuracy [20]. Among these issues, interpolation plays an essential role. In the 1980s and 1990s, a sinusoidal-shaped systematic error in digital image correlation was observed, periodic with a period of 1 pixel and with an amplitude possibly exceeding 0.06 pixel [21,22]. Schreier et al. attributed this periodic error to imperfect interpolation [20]. To achieve subpixel accuracies, the grayscale must be evaluated at non-integer locations in digital image correlation. Non-ideal interpolation leads to this systematic error, which is called interpolation bias. As a result of the sinc function's infinite support and slow decay, ideal interpolation cannot be performed in practice [23], and thus interpolation bias is inevitable. Schreier et al. also pointed out that interpolation bias can be evaluated by correlating subpixel-shifted images [20]. Since then, diverse approaches have been presented to reduce interpolation bias. These approaches can be classified into three categories: more sophisticated interpolation algorithms, low-pass image filtering, and stochastic integration [24]. High-order interpolation algorithms generally produce better results but require long execution times, and therefore it is preferable to optimize the interpolation basis [1]. Luu et al. introduced an inverse gradient weighting version of the BSpline interpolation algorithm, thereby enhancing accuracy [25]. Pre-filtering the speckle image with a low-pass filter has also been suggested [20,26,27]. The low-pass filter can suppress interpolation bias dramatically, but the consequent blur will reduce the spatial resolution of digital image correlation. In the digital image processing community, Rohde et al. proposed the use of an integral form of the correlation function instead of the summation form; a stochastic integral was employed to approximate the continuous integral of the correlation function, thus enhancing accuracy [24].
Despite the large number of approaches that have been suggested to decrease interpolation bias, the explicit dependence of this bias upon the speckle pattern and the interpolation algorithm is still unknown, and the sinusoidal-shaped curves remain unexplained. In addition, reducing interpolation bias by speckle pattern optimization demands knowledge of interpolation bias [28,29]. However, quantitative discussions are rare in the literature because interpolation is difficult to tackle mathematically and correlation is inherently nonlinear. In Wang's pioneering work [30], the interpolation bias for linear and cubic interpolation was predicted using sample values. However, his theory is inapplicable to high-accuracy interpolation algorithms such as BSpline [20] and OMOMS [25], which have an infinite convolution kernel. Moreover, because his formulae include the difference between interpolated and real grayscales, they are hard to employ in theoretical analyses such as interpolation quality assessment and speckle pattern optimization.
Another challenge is the purely experimental measurement of interpolation bias. Subpixel translation experiments have seldom been reported because the interpolation bias can easily be overwhelmed by other experimental uncertainties such as stage error, out-of-plane motion, and image noise. Recently, Mazzoleni et al. reported the best result obtained to date, but nevertheless they failed to attain the complete curve due to accidental and variable vibrations of the camera and its non-isolated support [26]. Measuring interpolation bias accurately by purely experimental techniques is still a challenge.
At present, numerical methods are prevalently utilized to investigate interpolation bias. The conventional approach is to correlate a series of subpixel-shifted speckle images generated by synthetic speckle [19], FFT [20] or binning methods [31], which is inconvenient and time-consuming. If the exact dependence of interpolation bias upon the speckle pattern and the interpolation algorithm can be determined analytically, the interpolation bias can be predicted readily, considerable physical insight into the problem will be gained, and studies of speckle pattern assessment and interpolation algorithm optimization will benefit.
Similar problems arise in image processing [24], computer vision [32], remote sensing [33], and flight time estimation [34]. Despite distinct backgrounds and distinct correlation criteria, imperfect interpolation inevitably introduces systematic error; this occurs in various research fields and is not confined to digital image correlation.
Previous studies on interpolation bias generally utilized spatial methods. Considering the sinusoidal nature of the interpolation bias, a frequency method is probably preferable. Facilitated by the Fourier method, this paper provides a thorough discussion of interpolation bias.
This paper is organized as follows. Section 2 introduces the principles of digital image correlation and convolution-based interpolation, after which the analytical formulae for 1-D interpolation bias are deduced through Fourier analysis and confirmed by numerical experiment. Section 3 extends the theoretical analysis to high-dimensional space. Exploiting the speckle spectrum and interpolation transfer functions, a novel interpolation bias prediction approach is proposed. Subset size effects are analyzed, and theoretical predictions are confirmed by numerical simulations and actual experiments. In the course of the experiments, a novel subpixel translation technique was developed. Section 4 draws conclusions from this work.
Principle of digital image correlation
The principle of digital image correlation is to match subsets in reference and target images using a correlation criterion. Correlation criteria can be loosely divided into two classes: cross-correlation and sum of squared differences [2]. This work uses the sum of squared differences criterion.
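As a concrete illustration of this matching principle, the sketch below performs a brute-force integer-pixel SSD search in Python. This is a minimal illustration, not the authors' implementation: the function names, subset geometry, and exhaustive search strategy are my own, and subpixel refinement (which requires the interpolation discussed next) is omitted.

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences between two grayscale subsets."""
    d = a.astype(float) - b.astype(float)
    return np.sum(d * d)

def integer_search(ref, tgt, center, half_size, search_radius=10):
    """Brute-force integer-pixel match of a square subset.

    Assumes the subset and the whole search window lie inside both images.
    Returns the integer displacement (u, v) minimising the SSD criterion.
    """
    cy, cx = center
    sub = ref[cy - half_size:cy + half_size + 1, cx - half_size:cx + half_size + 1]
    best_val, best_uv = np.inf, (0, 0)
    for v in range(-search_radius, search_radius + 1):
        for u in range(-search_radius, search_radius + 1):
            cand = tgt[cy + v - half_size:cy + v + half_size + 1,
                       cx + u - half_size:cx + u + half_size + 1]
            val = ssd(sub, cand)
            if val < best_val:
                best_val, best_uv = val, (u, v)
    return best_uv
```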
Principle of convolution-based interpolation
Interpolation is the process of constructing a continuous function r(x) = Σ_k c_k ϕ(x − k) from discrete samples, where the interpolation coefficients c_k are chosen so that the original and reconstructed functions have identical values at the sample points. This construction, Eq. (1), is therefore equivalent to a discrete convolution of the coefficient sequence with the basis ϕ(x). The relationship between r(x) and ϕ(x) is discussed in [35]. The Keys, BSpline and OMOMS interpolation algorithms are all convolution-based interpolation methods [23,36,37].
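To make the convolution-based scheme concrete, here is a sketch of 1-D Keys cubic interpolation in Python. For the Keys basis the coefficients c_k coincide with the samples themselves (no prefiltering is needed, unlike BSpline or OMOMS); the kernel formula with a = −0.5 is the classic Keys variant, and the boundary handling is a deliberately crude placeholder.

```python
import numpy as np

def keys_kernel(x, a=-0.5):
    """Keys cubic convolution kernel phi(x), supported on [-2, 2]."""
    x = np.abs(np.asarray(x, dtype=float))
    out = np.zeros_like(x)
    m1 = x < 1
    m2 = (x >= 1) & (x < 2)
    out[m1] = (a + 2) * x[m1]**3 - (a + 3) * x[m1]**2 + 1
    out[m2] = a * x[m2]**3 - 5 * a * x[m2]**2 + 8 * a * x[m2] - 4 * a
    return out

def keys_interp(samples, x):
    """Evaluate r(x) = sum_k c_k * phi(x - k); for Keys, c_k are the samples."""
    k0 = int(np.floor(x))
    ks = np.arange(k0 - 1, k0 + 3)           # the four nodes where phi(x - k) != 0
    w = keys_kernel(x - ks)
    ks = np.clip(ks, 0, len(samples) - 1)    # crude edge handling for the sketch
    return float(np.dot(np.asarray(samples, dtype=float)[ks], w))
```

For BSpline or OMOMS interpolation, an additional prefiltering step would be required to obtain the coefficients c_k from the samples; the evaluation loop itself is unchanged.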
1-D interpolation bias analysis
It is illuminating to analyze the 1-D situation first; Figure 1 illustrates the sampling and interpolation process [35]. Denote a Fourier transform pair by f(x) ↔ f(ν). In the frequency domain, sampling corresponds to multiplication by the comb function, which replicates the spectrum at integer frequencies; for a more detailed mathematical derivation of the spectrum g(ν) of the interpolated image, the reader is referred to [38]. Applying the Poisson summation formula again, and substituting Eq. (6) and Eq. (8) into Eq. (5), gives Eq. (9). The calculated result is u = u_0 + u_e. Recognizing that u_e is generally small, a first-order approximation yields the explicit dependence of the interpolation bias u_e upon the reference spectrum f(ν) and the interpolation transfer function ϕ(ν); the resulting expression, Eq. (12), is referred to as the full estimate of interpolation bias in this work.

Because the full estimate of the interpolation bias is complex and cumbersome, it is difficult to derive a physical meaning from it; moreover, it is computationally expensive, and therefore simplification is desirable. Recognizing that the expression is a periodic function of the subpixel displacement with period 1 and contains a direct component, and that ϕ(ν) is an approximation of an ideal low-pass filter, Eq. (12) can be approximated by retaining only the terms with k = m = n = 0. The result is again a periodic function with period 1 and can therefore be expanded into a Fourier series. However, this function is still too complex, and hence a better strategy is to simplify it under specific conditions. Digital systems demand a sufficient sampling frequency to meet the requirements of the sampling theorem, so the band-limited hypothesis is a reasonable assumption; and to eliminate phase distortion, the interpolation basis is invariably an even function, so that the transfer function ϕ(ν) is real in practice. Utilizing the band-limited hypothesis and the fact that ϕ(ν) is real, Eq. (12) simplifies (setting k = n = 0) to Eq. (14), which is referred to as the band-limited approximation of the interpolation bias.
The denominator of Eq. (14) is associated, by Parseval's theorem, with the squared sum of the gray gradient, while the numerator incorporates two parts. Retaining only the fundamental frequency reduces the bias to a sinusoid of period 1 in the subpixel displacement, u_e ≈ C sin(2π u_0), where C is a constant determined exclusively by the reference function f(ν) and the interpolation algorithm ϕ(ν). Equation (15) will be referred to hereafter as the sinusoidal approximation of the interpolation bias. The frequency-weighting function within Eq. (15) is exclusively determined by the interpolation algorithm; it has been called the interpolation bias kernel, E_ib(ν), by the author. This interpolation bias kernel plays a central role in this work and is of significant importance because it characterizes the bias response at specific frequencies, thus providing a measure of the subset matching quality of various interpolation algorithms. Figure 2 illustrates the interpolation basis [Fig. 2(a)], the transfer function, and the interpolation bias kernel of several algorithms; the kernels show that interpolation accuracy generally follows the order OMOMS > BSpline > Keys. Higher-order BSpline algorithms show less bias response in the low-frequency region and therefore yield better results. These phenomena have been noted before [1,20,25], but this work provides a quantitative explanation and reveals their inherent nature. The preceding discussion indicates that the interpolation bias kernel quantifies whether an interpolation algorithm is appropriate for digital image correlation.
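The transfer function can be examined numerically. The sketch below approximates ϕ(ν) = ∫ ϕ(x) e^(−2πiνx) dx for the Keys kernel (reusing keys_kernel from the sketch above) by a Riemann sum; the step size and frequency grid are arbitrary choices of this illustration, not values from the paper. An ideal interpolator would give |ϕ(ν)| = 1 below the Nyquist frequency (0.5) and 0 above it, and the residual response above 0.5 is what feeds the aliasing terms captured by the interpolation bias kernel.

```python
import numpy as np

def transfer_function(kernel, support, freqs, dx=1e-3):
    """Riemann-sum approximation of phi_hat(nu) = integral phi(x) exp(-2j*pi*nu*x) dx."""
    x = np.arange(-support, support, dx)
    phi = kernel(x)
    return np.array([np.sum(phi * np.exp(-2j * np.pi * nu * x)) * dx for nu in freqs])

freqs = np.linspace(0.0, 2.0, 201)
H = transfer_function(keys_kernel, support=2.0, freqs=freqs)
i = int(np.argmin(np.abs(freqs - 0.75)))
print("Keys |phi_hat| at nu = 0.75:", abs(H[i]))   # nonzero: residual aliasing response
```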
Imperfect interpolation involves both blurring and aliasing effects. The theoretical formulae can predict interpolation bias using the reference function and the interpolation transfer function, thus avoiding the generation and correlation of a series of subpixel-shifted speckle images. This theory can be employed to optimize the interpolation algorithm further. OMOMS is optimum in the asymptotic sense [37], but this does not imply that it is the best choice for digital image correlation; moreover, different interpolation methods can be optimized for different types of speckle patterns. The theory can explain why a low-pass filter can decrease interpolation bias and can be used to direct the selection of filter size. Furthermore, it can be used for speckle pattern assessment and design.
1-D Simulation
To demonstrate the validity of the analytical results, numerical experiments were carried out. 1-D speckle patterns were generated by Zhou's algorithm [39]. The reference function is a superposition of Gaussian speckles, f(x) = Σ_k I_k exp[−(x − x_k)²/r²], where N is the total count of Gaussian speckles, x_k is the center coordinate of speckle k, r is the Gaussian speckle radius, and I_k is the intensity level of speckle k. Its spectrum consists of two parts: a deterministic Gaussian envelope and a random sum over the speckle positions. The intensity level was fixed as I_k = 1; speckles were generated in the interval (−50, 50) with a speckle density of 65%, and speckle patterns with radii 1.5, 2.0, and 3.0 were produced. Figures 3(a1)-3(c1) illustrate the reference function and sample values. Exact sampled values were used to remove quantization error. The speckle centers were subjected to a translation of 0.05 unit each time, and 20 shifted speckle patterns were obtained. Correlations were implemented with a zero-order shape function by the forward Gauss-Newton method [40]; the choice of a zero-order shape function is not restrictive, because a higher-order shape function does not induce additional systematic error [41]. The convergence criterion was that the successive iterative increment be less than 1 × 10^−10. The subset was chosen as [−60, 60] and consisted of a total of 121 points. Keys, cubic BSpline, and cubic OMOMS interpolations were used; the theoretical predictions incorporated the full estimate [Eq. (12)] and the sinusoidal approximation [Eq. (15)]. The infinite sum was approximated by a discrete sum from −10 to 10, and the numerical integral was implemented using the trapezoidal method with step 1 × 10^−6.

Figures 3(a2)-3(c2) show the digital image correlation error, the full estimate, and the sinusoidal approximation of the interpolation bias for Keys, cubic BSpline, and cubic OMOMS. Because the interpolation bias of Keys was much larger than that of the others, the subfigures show results only for BSpline and OMOMS. It is evident that for the Keys method the full estimate shows excellent agreement with the digital image correlation error, whereas the sinusoidal approximation loses details of the interpolation bias curves but shares the same order of magnitude. For the cubic BSpline and OMOMS methods, the interpolation bias curves show excellent agreement with all theoretical results, demonstrating that the proposed method is effective. Obviously, decreasing r induces an increase in u_e. For these representative speckle patterns, the interpolation bias follows the order Keys >> BSpline > OMOMS. These phenomena can be explained by the interpolation bias kernel E_ib(ν). Because speckles with sharper edges tend to larger interpolation bias [20], and the denominator of the constant C in Eq. (15) is associated with the sum of squared grayscale gradients, the numerator of C dominates its behavior. Hence the interpolation bias is determined mainly by the integral of the product of the power spectrum and the interpolation bias kernel. For these patterns, the vast majority of the energy is concentrated in the low-frequency region; because OMOMS has a better bias response at lower frequencies, the performance of OMOMS is superior to that of BSpline. Nevertheless, this need not hold when high frequencies dominate the energy. This 1-D simulation has established the validity of the theoretical results presented earlier.
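The 1-D speckle model used in this simulation is easy to reproduce. The following sketch builds a Gaussian-speckle reference function and exact samples of a shifted copy; note that a fixed speckle count stands in for the paper's 65% density, and the random seed, names, and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def speckle_1d(n_speckles=65, lo=-50.0, hi=50.0, r=2.0):
    """1-D Gaussian speckle: f(x) = sum_k I_k exp(-(x - x_k)^2 / r^2), with I_k = 1."""
    centers = rng.uniform(lo, hi, n_speckles)
    def f(x, shift=0.0):
        x = np.asarray(x, dtype=float)[..., None]
        return np.exp(-((x - (centers + shift)) ** 2) / r ** 2).sum(axis=-1)
    return f

f = speckle_1d(r=2.0)
grid = np.arange(-60, 61)       # 121 sample points, matching the subset [-60, 60]
ref = f(grid)                   # exact samples of the reference pattern
tgt = f(grid, shift=0.05)       # pattern translated by 0.05 unit
```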
Interpolation bias prediction in high-dimensional space
Digital image correlation and digital volume correlation, which are both of practical importance, correspond respectively to two and three dimensions, and therefore it is essential to extend the preceding 1-D discussion to higher-dimensional space. In higher dimensions, coordinates, displacements, and frequencies are vectors rather than scalars. For N-dimensional space, suppose that the reference function is f(x) with Fourier transform f(ν), the real displacement is u_0, the interpolation bias is u_e, the interpolation basis is ϕ(x), and its interpolation transfer function is ϕ(ν). Taking an approach similar to the 1-D case and substituting variations for derivatives yields the condition satisfied by the interpolation bias u_e. If the fundamental frequency is considered exclusively, the result is Eq. (22), the high-dimensional analogue of Eq. (15), in which E_ib(ν_x, ν_y) is the interpolation bias kernel in two dimensions and the aliasing effect along the y-direction is neglected.
Numerical verification
The preceding theoretical results can be employed to predict the interpolation bias. Nevertheless, experimental speckle images are captured by digital cameras, and therefore their original spectrum is unknown. To approximate the original spectrum, it is proposed to utilize the discrete Fourier transform of the subsets. Clearly, the accuracy of the spectrum approximation depends on the choice of subset size. This section analyzes the effect of subset size and compares this novel approach with the conventional one. An experimental speckle image of a concrete cylinder was captured with a resolution of 2448 × 2048. A series of 20 subpixel-shifted images were produced numerically from the original image using the FFT method; successive images correspond to a shift of 0.05 pixel. Square subsets with side lengths of 51, 101, 251, and 501 pixels were chosen, as illustrated in Fig. 5(b). The speckle pattern of each subset is shown in Figs. 6(a1)-6(d1), magnified to identical size for clarity. The corresponding autocorrelation functions are shown in Fig. 5(a); they indicate that the fluctuation of larger subsets is less obvious and that the autocorrelation width is largely independent of subset size. Digital image correlation was implemented by the Gauss-Newton method with a zero-order shape function. The Keys and cubic BSpline interpolation methods were used, and interpolation bias curves were obtained through correlation. The sinusoidal approximation [Eq. (22)] was used to predict the interpolation bias, with the original spectrum approximated by the discrete Fourier transform. Figures 6(a2)-6(d2) show the discrete Fourier transform of each subset. The continuous integral in Eq. (22) is approximated by the numerical integration function trapz in MATLAB, as shown in Code File 1 (Ref. [42]).

Figure 5(c) implies that the interpolation bias remains roughly invariant as the subset size varies. The predictions using Eq. (22) are larger than the correlation results, but the discrepancy decreases as the subset size increases. The observation that theoretical predictions for larger subsets are more accurate can be explained as follows: first, it can be inferred from Figs. 6(a2)-6(d2) that discrete Fourier transforms of larger subsets approximate the original spectrum better; second, a larger subset size leads to a smaller numerical integration step and more integral points, so that the numerical integral is more accurate. Because the interpolation bias remains roughly invariant as subset size varies, larger subsets are recommended for theoretical prediction. However, when small subsets are chosen, safe (conservative) predictions are obtained.

Recently, the use of numerically designed speckle patterns, consisting of randomly positioned circles, was proposed [5]. Numerically designed speckle is more suitable for large fields of view, and a Gaussian pre-filter can decrease both interpolation bias and random error [26]. Numerically designed speckle patterns were therefore subjected to further verification. Speckle patterns were generated, printed, and affixed onto the surface of a rigid plate. The plate was placed first near to, then far from the camera to capture speckles with different radii. An IDS camera with resolution 2048 × 2048 was used. Figures 7(a1)-7(b1) show the captured images, in which the frames indicate the correlation subsets; close-ups can be found in Figs. 8(a1)-8(b1). The speckle density is roughly 65%. The speckle radius is about ten pixels for the coarse patterns and four pixels for the fine patterns.
The autocorrelation functions of the subsets are shown in Figs. 8(a2)-8(b2). They suggest that the distance between the crest and the first trough is approximately equal to the speckle diameter, meaning that finer speckles tend to produce a sharper autocorrelation function. Comparing the autocorrelation functions of the numerically designed speckles [Figs. 8(a2)-8(b2)] with those of traditional spray-painted random patterns [Fig. 5(a)], the fluctuations of the numerically designed speckles are more significant. The discrete Fourier transforms of the subsets are illustrated in Figs. 7(a2)-7(b2) and indicate that the spectra of numerically designed speckle patterns show a ring-like shape, which can be explained by the Fourier transform of a circle function, quite unlike traditional speckles; traditional speckles are highly random patterns without well-defined particles. It is also apparent that the energy in coarse patterns is concentrated at low frequencies, whereas fine patterns have more energy in the high-frequency domain. The interpolation bias of BSpline is much smaller than that of Keys [see Fig. 7(b3)], and the interpolation bias increases as the high-frequency components of the speckle patterns increase.
This work has mainly been concerned with rigid motions. If the deformations are not rigid motions, a number of digital image correlation techniques use displacement gradients to describe the kinematics of the subset [15]. The existence of displacement gradients will lead to a coupling between the interpolation errors and the displacement fields. These coupling effects need further research.
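As a sketch of the computational pipeline used in this section (subset spectrum via the discrete Fourier transform, weighting by the bias response, and trapezoidal integration, performed in the paper with MATLAB's trapz in Code File 1, Ref. [42]), the following Python fragment shows the surrounding numerics. The function E_ib passed in is a placeholder for the two-dimensional interpolation bias kernel of Eq. (22), which must be derived from the chosen interpolation algorithm; only the plumbing is illustrated here.

```python
import numpy as np

def predict_bias_integral(subset, E_ib):
    """Integrate the subset power spectrum weighted by a bias-response kernel."""
    F = np.fft.fftshift(np.fft.fft2(subset.astype(float)))
    P = np.abs(F) ** 2                              # power spectrum of the subset
    ny, nx = subset.shape
    nu_x = np.fft.fftshift(np.fft.fftfreq(nx))      # cycles/pixel, in [-0.5, 0.5)
    nu_y = np.fft.fftshift(np.fft.fftfreq(ny))
    W = E_ib(nu_x[None, :], nu_y[:, None])          # kernel sampled on the grid
    inner = np.trapz(P * W, nu_x, axis=1)           # integrate over nu_x ...
    return np.trapz(inner, nu_y)                    # ... then over nu_y
```

Consistent with the subset-size analysis above, a larger subset yields a finer frequency grid (a smaller integration step), which is one reason its predictions are more accurate.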
Subpixel translation experiment
This subsection describes the subpixel translation experiment which was conducted to measure interpolation bias experimentally. To eliminate mechanical error and out-of-plane motion, a novel subpixel translation technique was developed, implementing subpixel translation of the captured image through integer-pixel translation on a computer screen. A Microsoft Surface Pro 3 was used to display the speckle patterns. The PPI of the Surface Pro 3 screen is 216, so one screen pixel corresponds to 117.6 μm. A JoinHope camera with 640 × 480 resolution was used; its sensor pixel size is 7.1 μm, and the focal length of the lens was 12 mm. The experimental setup is shown in Fig. 9(b). The experimental procedure was the following. First, the camera and the computer were placed on a vibration-isolated table, and the distance between the camera and the computer screen was adjusted so that 1 pixel on the screen corresponded to roughly 0.1 pixel in the image. Second, a laser pen was positioned just above the camera; to confirm that the laser direction was parallel to the camera optical axis, the laser pen was adjusted until the corresponding laser point was in the middle of the image. A CD was then affixed to the screen, and the screen was adjusted so that the incident light coincided with the reflected light, which guaranteed that the screen was perpendicular to the optical axis; the laser pen and CD could then be removed. Third, speckle patterns were displayed on the screen and translated by 1 screen pixel each time [see Fig. 9(a)]. To decrease the influence of image noise, 100 frames were captured and averaged at each position. Fine, medium, and coarse patterns were captured, as illustrated in Figs. 10(a1)-10(c1); the corresponding speckle diameters were approximately 4, 6, and 8 pixels.
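The stated numbers allow a back-of-envelope check of the working distance. Under the thin-lens approximation m = f/(d − f), which is an assumption of this sketch (the paper sets the distance by direct adjustment), the distance at which one screen pixel (117.6 μm) maps to about 0.1 image pixel (0.71 μm on the sensor) comes out near two metres:

```python
# Back-of-envelope working-distance estimate (thin-lens approximation; illustrative only)
screen_pitch_um = 25400 / 216        # 216 PPI -> ~117.6 um per screen pixel
sensor_pitch_um = 7.1                # camera pixel size
f_mm = 12.0                          # lens focal length
target_shift_px = 0.1                # desired image shift per 1-screen-pixel step

m = target_shift_px * sensor_pitch_um / screen_pitch_um   # required magnification
d_mm = f_mm * (1.0 / m + 1.0)                             # from m = f / (d - f)
print(f"magnification ~ {m:.4f}, working distance ~ {d_mm / 1000:.2f} m")
```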
The Keys and cubic BSpline interpolation algorithms were used to retrieve the speckle displacement. Translation curves for the different speckle radii were measured, as shown in Figs. 10(a2)-10(c2), and the interpolation bias was evaluated by a linear fit. Figures 10(a3)-10(c3) illustrate the digital image correlation results and the sinusoidal approximations for the different speckle patterns. Because the environmental noise is relatively small compared to the bias of Keys (which can exceed 0.01 pixel), the interpolation bias of the Keys algorithm showed good agreement with the theoretical predictions. For BSpline, the interpolation bias is small and therefore susceptible to noise; nevertheless, the theoretical predictions for BSpline were of the same order of magnitude as the experimental results. These results also indicate that fine patterns result in large interpolation bias, which is consistent with the preceding discussion. For instance, the bias of Keys interpolation was about 0.02 pixel for fine patterns and decreased as the speckle radius increased: for medium patterns the bias was 0.015 pixel, and for coarse patterns it was even smaller. In summary, the experimental results in this research have shown good agreement with the theoretical predictions, indicating that the proposed method represents a reliable prediction approach.
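A minimal sketch of how the bias can be separated from a measured translation curve is given below. The one-period sinusoidal model for the residual is an assumption motivated by the sinusoidal shape of the bias curves; it is not the paper's exact fitting code.

```python
import numpy as np

def interpolation_bias(true_shift, measured):
    """Separate interpolation bias from a measured translation curve.

    Fits a straight line to measured-vs-true displacement; the residual
    is the (roughly sinusoidal) interpolation bias, and its amplitude
    is estimated from a one-cycle-per-pixel sine/cosine fit.
    """
    a, b = np.polyfit(true_shift, measured, 1)
    residual = measured - (a * true_shift + b)
    s = np.sin(2 * np.pi * true_shift)
    c = np.cos(2 * np.pi * true_shift)
    A = np.linalg.lstsq(np.stack([s, c], axis=1), residual, rcond=None)[0]
    return residual, np.hypot(*A)   # bias curve and its amplitude
```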
Conclusions
The emphasis throughout this paper has been on the analysis of interpolation bias in digital image correlation. An interpolation-bias prediction algorithm has been presented and verified by numerical simulation and actual experiments. The following conclusions can be drawn:
1) The explicit dependence of interpolation bias upon the speckle pattern and the interpolation algorithm has been characterized, analytic formulae for interpolation bias have been deduced, and the well-known sinusoidal-shaped curves of interpolation bias have been explained.
2) The concept of the interpolation bias kernel has been introduced. An interpolation bias kernel characterizes the bias response at specific speckle frequencies, thus providing a measure of the subset matching quality of the interpolation algorithm. The properties of the interpolation bias kernel indicate that high-frequency components are the major source of interpolation bias. Further investigation attributed the interpolation bias to the aliasing effect of interpolation.
3) A simple and effective interpolation bias prediction approach has been proposed. This approach exploits the speckle spectrum and the interpolation transfer function to predict interpolation bias. Significant acceleration has been accomplished compared to traditional methods; the effect of subset size has been analyzed; and both numerical simulations and experimental results show good agreement with the theoretical predictions.
4) A novel experimental translation technique has been developed, which implements subpixel translation of a captured image by integer pixel translation on a computer screen. Owing to this technique, the influences of mechanical error and out-of-plane motion are eliminated, and a complete interpolation bias curve with an accuracy of 0.01 pixel can be attained by experimental subpixel translation. The primary motivation for this work is to standardize speckle patterns. This work can be applied not only to interpolation bias prediction, interpolation algorithm optimization, and speckle pattern optimization, but also to image processing and computer vision. | 2018-04-03T05:58:21.355Z | 2015-07-27T00:00:00.000 | {
"year": 2015,
"sha1": "07df9171772ecec4aa3c2d5f6e56326851e51b14",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1364/oe.23.019242",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "d9b5a5a7befd1a491dc513d3f6b84ebbd2669b7b",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
214611710 | pes2o/s2orc | v3-fos-license | Generating Natural Language Adversarial Examples on a Large Scale with Generative Models
Today text classification models are widely used. However, these classifiers are found to be easily fooled by adversarial examples. Fortunately, standard attacking methods generate adversarial texts in a pair-wise way; that is, an adversarial text can only be created from a real-world text by replacing a few words. In many applications, such texts are limited in number, so their corresponding adversarial examples are often not diverse enough and sometimes hard to read; they can thus be easily detected by humans and cannot create chaos at a large scale. In this paper, we propose an end-to-end solution to efficiently generate adversarial texts from scratch using generative models, which are not restricted to perturbing the given texts. We call it unrestricted adversarial text generation. Specifically, we train a conditional variational autoencoder (VAE) with an additional adversarial loss to guide the generation of adversarial examples. Moreover, to improve the validity of the adversarial texts, we utilize discriminators and the training framework of generative adversarial networks (GANs) to make the adversarial texts consistent with real data. Experimental results on sentiment analysis demonstrate the scalability and efficiency of our method: it can attack text classification models with a higher success rate than existing methods while providing acceptable quality for humans.
Introduction
Today machine learning classifiers are widely used to provide key services such as information filtering and sentiment analysis. However, researchers have recently found that these ML classifiers, even deep learning classifiers, are vulnerable to adversarial attacks. They demonstrate that image classifiers [10] and now even text classifiers [26] can be fooled easily by adversarial examples deliberately crafted by attacking algorithms. These algorithms generate adversarial examples in a pair-wise way: given one input x ∈ X, they aim to generate one corresponding adversarial example x′ by adding small imperceptible perturbations to x. The adversarial examples must maintain the semantics of the original inputs; that is, x′ must still be classified as the same class as x by humans. On the other hand, adversarial training has been shown to be a useful defense method against adversarial examples [31,10]: trained on a mixture of adversarial and clean examples, classifiers can become resistant to adversarial examples.
In the area of natural language processing (NLP), existing methods are pair-wise and thus depend heavily on the input data x. If attackers want to generate adversarial texts that should be classified as a chosen class with pair-wise methods, they must first collect texts labeled as the chosen class, then transform these labeled texts into the corresponding adversarial examples by replacing a few words. As the amount of labeled data is always small, the number of generated adversarial examples is limited. These adversarial examples are often not diverse enough and sometimes hard to read, and thus can be easily detected by humans. Moreover, in practice, if attackers aim to attack a public opinion monitoring system, they must collect a large number of high-quality labeled samples to generate a vast amount of adversarial examples; otherwise, they can hardly create an impact on the targeted system. Therefore, pair-wise methods only demonstrate the feasibility of the attack but cannot create chaos on a large scale.

Figure 1. An illustration of adversarial text generation. (a) Given one negative text which is also classified as negative by a ML model, traditional methods replace a few words (yellow background) in the original text to get one paired adversarial text, which is still negative for humans, but the model prediction changes to positive. (b) Our unrestricted method does not need input texts. We only assign a ground-truth class (negative); then our method can generate large-scale adversarial texts, which are negative for humans but classified as positive by the ML model.
In this paper, we propose an unrestricted end-to-end solution to efficiently generate adversarial texts, where adversarial examples can be generated from scratch without real-world texts and are still meaningful for humans. We argue that adversarial examples do not need to be generated by perturbing existing inputs. For example, we can generate a movie review that does not stem from any example in the dataset at hand; if the movie review is thought to be a positive review by humans but classified as a negative review by the targeted model, the movie review is also an adversarial example. Adversarial examples generated in this way break the limit on input numbers, so we can obtain adversarial examples at a large scale. On the other hand, the proposed method can also be used to create more adversarial examples for defense: training with more adversarial examples often means more robustness for these key services.
The proposed method leverages a conditional variational autoencoder (VAE) as the generator, which can generate texts of a desired class. To guide the generator toward texts that mislead the targeted model, we access the targeted model in a white-box setting and use an adversarial loss that pushes the targeted model to make a wrong prediction. In order to make the generated texts consistent with human cognition, we use discriminators and the training framework of generative adversarial networks (GANs) to make the generated texts similar to real data of the desired class. After the whole model is trained, we can sample from the latent space of the VAE and generate unlimited adversarial examples without accessing the targeted model. The model can also transform a given input into an adversarial one.
We evaluate the performance of our attack method on a sentiment analysis task. Experiments show the scalability of the generation: the adversarial examples generated from scratch achieve a high attack success rate and have acceptable quality. As the model can generate texts with only feed-forward passes in parallel, the generation speed is quite fast compared with other methods. Additional ablation studies verify the effectiveness of the discriminators, and data augmentation experiments demonstrate that our method can generate large-scale adversarial examples with higher quality than other methods. When the existing data at hand are limited, our method is superior to pair-wise generation.
In summary, the major contributions of this paper are as follows:
• To generate adversarial examples, we incorporate an adversarial loss to guide the vanilla VAE's generation process.
• We adopt one discriminator for each class of data. When training, we train the discriminators and the conditional VAE in a min-max game like GANs, which makes the generated texts more consistent with real data of the desired class.
• We conduct attack experiments on a sentiment analysis task. Experimental results show that our method is scalable and achieves a higher attack success rate at a higher speed than recent baselines. The quality of the generated texts is also acceptable. Further ablation studies and data augmentation experiments verify our intuitions and demonstrate the superiority of scalable text adversarial example generation.
Related Work
There have been extensive studies on adversarial machine learning, especially on deep neural models [31,10,16,28,1]. Much work focuses on image classification tasks [31,10,5,11,33]. [31] solves the attack problem as an optimization problem with box-constrained L-BFGS. [10] proposes the fast gradient sign method (FGSM), which perturbs images with noise computed from the gradients with respect to the inputs. In NLP, perturbing texts is more difficult than perturbing images, because words in sentences are discrete, so gradient-based attacks cannot be performed directly as in the continuous image space. Most methods adapt the pair-wise methods of image attacks to text attacks: they perturb texts by replacing a few words. [24,9,6] calculate gradients with respect to the word vectors and perturb the word embedding vectors with these gradients. They then find the word vector nearest to each perturbed vector; in this way, the perturbed vector can be mapped to a discrete word that replaces the original one. These methods are gradient-based replacement methods. Other attacks on texts can be summarized as gradient-free replacement methods, which replace words in texts with typos or synonyms. [16] proposes to edit words with tricks such as insertion, deletion, and replacement, choosing the words to replace by calculating the word frequency and the highest gradient magnitude. [15] proposes five automatic word replacement methods and uses the magnitude of the gradients of the word embedding vectors to choose the most important words to replace. [26] is based on a synonym substitution strategy; the authors introduce a new word replacement order determined by both the word saliency and the classification probability. However, these replacement methods still generate adversarial texts in a pair-wise way, which restricts the adversarial texts to variants of given real-world texts. Besides, the substituted words sometimes change the text's meaning. Thus, existing adversarial text generation methods only demonstrate the feasibility of the attack but cannot create chaos on a large scale.
In order to tackle the above problems, we propose an unrestricted end-to-end solution to generate diverse adversarial texts on a large scale with no need for given texts.
Methodology
In this section, we propose a novel method to generate adversarial texts for a text classification model on a large scale. Though trained with labeled data in a pair-wise way, after it is trained, our model can generate an unlimited number of adversarial examples without any input data. Moreover, like traditional pair-wise generation methods, our model can also transform a given text into an adversarial one. Unlike existing methods, our model generates adversarial texts without querying the attacked model, so the generation procedure is quite fast.

Overview

Figure 2 illustrates the overall architecture of our model. The model has three components: a generator G, discriminators D, and a targeted model f. G and D form a generative adversarial network (GAN). When training, we feed an original input x to the generator G, which transforms x into an adversarial output x′. The procedure can be defined as follows:

x′ = G(x).   (1)

G aims to generate x′ to reconstruct x. Then, we feed the generated x′ to the targeted model f, and f will classify x′ as a certain class, which we hope is a wrong label. Thus we have the following equation:

f(x′) = y′ ≠ y_t, y′ ∈ Y,   (2)

where y_t = f(x) and Y is the label space of the targeted classification model.

Figure 3. The generator G. When training, we need input texts to train G. After G is trained, we only need to sample z from the latent space and use the decoder to generate adversarial texts unrestrictedly, without original texts.
In order to keep x′ classified as the same class as x by humans, we add one discriminator for each class y ∈ Y. With the help of the min-max training strategy of the GAN framework, each class y's discriminator makes x′ close to the distribution of real class-y data; thus x′ is made compatible with human cognition.
We now proceed by introducing these components in further detail.
Generator
In this subsection, we describe the generator G for text generation. We use the variational autoencoder (VAE) [14,27] as the generator. The VAE is a generative model based on a regularized version of the standard autoencoder; the model supposes that the latent variable z is sampled from a prior distribution.
As shown in Figure 2, the VAE is composed of the encoder q_θ(z|x) and the decoder p_τ(x|z), where τ denotes the parameters of p and θ the parameters of q. q_θ is a neural network: its input is a text x, and its output is a latent code z. q_θ encodes x into a latent representation space Z, which is of lower dimension than the input space. p_τ is another neural network: its input is the code z, and it outputs an adversarial text x′ that approximates the probability distribution of the input data x.
In our model, we adopt the gated recurrent unit (GRU) [7] as both the encoder and the decoder. As shown in Figure 3, the input x is a sentence of words; we formulate the input for the neural networks as follows: for a word at position i in a sentence, we first transform it into a word vector v_i by looking up a word embedding table. The word embedding table is randomly initialized and updated during model training. Then the word embedding vectors are fed into the GRU encoder. In the i-th GRU cell, a hidden state h_i is emitted.
We use h_N to denote the last GRU cell's hidden state, where N is the length of the encoder input. In order to get the latent code z, we feed h_N into two linear layers to get µ and σ, respectively. Following the Gaussian reparameterization trick [14], we sample a random sample ε from a standard Gaussian (µ = 0⃗, σ = 1⃗) and compute z as:

z = µ + σ ⊙ ε.   (3)

Computed in this way, z is guaranteed to be sampled from the Gaussian distribution N(µ, σ²). Then, we can decode z to generate an adversarial text x′. Before feeding z to the decoder, we adopt a condition embedding c_k to guide the decoder to generate a text x′ of a certain class y_k, which can be chosen arbitrarily. Suppose that in a text classification task there are |Y| classes. Specifically, we randomly initialize a class embedding table as a matrix C ∈ R^{|Y|×d} and look up C to get the corresponding embedding c_k of class y_k. Then, we feed [z, c_k] into a linear layer to get another vector representation. This vector encodes the information of the input text and the desired class.
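A minimal PyTorch sketch of this reparameterization and class-conditioning step is given below; the layer sizes and module names are hypothetical, as the paper does not specify them.

```python
import torch
import torch.nn as nn

class CondLatent(nn.Module):
    """Reparameterized latent code conditioned on a class embedding."""
    def __init__(self, hidden=256, latent=64, n_classes=2):
        super().__init__()
        self.to_mu = nn.Linear(hidden, latent)
        self.to_logvar = nn.Linear(hidden, latent)
        self.cls_emb = nn.Embedding(n_classes, latent)
        self.to_state = nn.Linear(2 * latent, hidden)

    def forward(self, h_N, y_k):
        mu, logvar = self.to_mu(h_N), self.to_logvar(h_N)
        eps = torch.randn_like(mu)               # eps ~ N(0, I)
        z = mu + torch.exp(0.5 * logvar) * eps   # reparameterization trick
        zc = torch.cat([z, self.cls_emb(y_k)], dim=-1)  # [z, c_k]
        return self.to_state(zc), mu, logvar     # decoder initial state
```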
The decoder GRU uses this vector as the initial state to generate the output text. Each GRU cell generates one word; the computation process is similar to that of the GRU encoder, except for the output layer of each cell. The output O_i of the i-th GRU cell is computed as:

u_i = W_o h_i + b_o,   (4)
O_i = softmax(u_i),   (5)

where u_i are the logits over the vocabulary. In the training phase, the GRU cell chooses the word index with the highest probability to emit:

x′_i = arg max_j O_i[j].   (6)

When training, the loss function of the VAE is calculated as:

L_VAE = −E_{q_θ(z|x)}[log p_τ(x|z)] + α · KL(q_θ(z|x) ∥ p(z)).   (7)

The first term is the reconstruction loss, or expected negative log-likelihood; it encourages the decoder to learn to reconstruct the data, so the output text is made similar to the input text. The second term is the Kullback-Leibler divergence between the latent vector distribution q_θ(z|x) and the prior p(z). If the VAE were trained with only the reconstruction objective, it would learn to encode its inputs deterministically by making the variances in q(z|x) vanishingly small [25]. Instead, the VAE uses the second term to encourage the model to keep its posterior distribution close to the prior p(z), which is generally set to a standard Gaussian.
In the training phase, the input to the GRU decoder is the input text, prepended with a special <GO> token as the start word. We append a special <EOS> token to the input text as the ground truth of the output text; the <EOS> token represents the end of the sentence. When training the GRU decoder to generate texts, the decoder tends to ignore the latent code z and rely only on the input to emit the output text; it then effectively degenerates into a language model. This situation is called KL-vanishing. To tackle the KL-vanishing problem when training the GRU decoder, we adopt the KL-annealing mechanism [2], which gradually increases the KL weight α from 0 to 1. This can be thought of as annealing from a vanilla autoencoder to a VAE. Also, we randomly drop the input words to the decoder with a fixed keep rate k ∈ [0, 1] to make the decoder depend on the latent code z to generate the output text.
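The loss in Equation 7 together with the KL-annealing schedule can be sketched as follows; the closed-form KL term assumes a diagonal Gaussian posterior, and the schedule length is an illustrative choice rather than a value from the paper.

```python
import torch
import torch.nn.functional as F

def vae_loss(logits, targets, mu, logvar, step, anneal_steps=10000):
    """L_VAE = reconstruction NLL + alpha * KL, with KL annealing."""
    # logits: (batch, seq, |V|); targets: (batch, seq) word indices
    rec = F.cross_entropy(logits.transpose(1, 2), targets,
                          reduction="mean")      # -log p(x|z)
    # Closed-form KL(N(mu, sigma^2) || N(0, I)) for a diagonal Gaussian
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    alpha = min(1.0, step / anneal_steps)        # anneal 0 -> 1
    return rec + alpha * kl
```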
Notably, if we randomly sample z from a standard Gaussian, the decoder can also generate output text based on z. The difference is that there is no input text to the GRU decoder; instead, we feed the word generated by the i-th GRU cell to the (i+1)-th GRU cell as its input word. Specifically, in the inference phase, we use beam search to generate words. The initial input word to the first GRU cell is the <GO> token. When the decoder emits the <EOS> token, it stops generating new words, and the generation of one complete sentence is finished.
In this way, after G is trained, we can in theory sample unlimited z from the latent space and generate unlimited output texts from them. This is part of the superiority of our method.
Algorithm 1 Text Adversarial Examples Generation
Input: Training data of different classes X_0, ..., X_{|Y|−1}
Output: Text adversarial examples
1: Train a VAE by minimizing L_VAE on X_0, ..., X_{|Y|−1} with the KL-annealing mechanism and word drop
2: Initialize G with the pretrained VAE
3: Initialize the targeted model with a pretrained TextCNN
4: Freeze the weights of the targeted model
5: repeat
6:   for y_k = y_0, y_1, ..., y_{|Y|−1} do
7:     sample a batch of n texts {x_i}^n_{i=0} of class y_k from X_k
8:     G generates {x′_i}^n_{i=0} with condition c_k
9:     Update the weights of G by minimizing L_joint
13: until convergence
14: if with inputs for the encoder then
15:   Encode the inputs and decode the corresponding adversarial texts
16: else
17:   Randomly sample z ∈ N(0, 1) and choose a class y_k ∈ Y
18:   The decoder takes [z, c_k] and generates the adversarial text from scratch
Targeted Model
Since the TextCNN model has good performance and is quite fast, it is one of the most widely used methods for text classification in industrial applications [34]. As we aim to attack models used in practice, we take the TextCNN model [13] as our targeted model.
Suppose we set the condition of the VAE to y_k and the decoder generates the output text x′. We then feed this text into the targeted model, which predicts a probability P_target(y_i) for each candidate class y_i. We conduct a targeted attack and aim to cheat the targeted model into classifying x′ as class y_t (y_t ≠ y_k), which gives the following adversarial loss function:

L_adv = −log P_target(y_t),   (8)

a cross-entropy loss that maximizes the probability of class y_t.
Recall that the words in the adversarial text x′ are computed by Equation 6, in which the arg max function is not differentiable, so we cannot directly feed the word index computed in Equation 6 into the targeted model. In this paper, we utilize the Gumbel-Softmax [12] to approximate the discrete word index with a continuous value. The embedding matrix W fed to the TextCNN is calculated as:

w_i = softmax((u_i + g)/t) E,   (9)
W = [w_1, w_2, ..., w_m]^T,   (10)

where E ∈ R^{|V|×d_w} is the whole vocabulary embedding matrix, u_i is from Equation 4, each g_k is drawn from the Gumbel(0, 1) distribution [12], and t is the temperature.
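A sketch of this Gumbel-Softmax relaxation is shown below; the temperature value and tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def soft_word_embedding(u, E, t=0.5):
    """Differentiable stand-in for arg max word selection.

    u : (batch, seq, |V|) decoder logits (Equation 4's output)
    E : (|V|, d_w) vocabulary embedding matrix
    t : Gumbel-Softmax temperature (0.5 is an illustrative value)
    """
    # Sample Gumbel(0, 1) noise: g = -log(-log(U)), U ~ Uniform(0, 1)
    g = -torch.log(-torch.log(torch.rand_like(u) + 1e-20) + 1e-20)
    p = F.softmax((u + g) / t, dim=-1)   # relaxed one-hot over vocab
    return p @ E                          # (batch, seq, d_w) fed to TextCNN
```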
Discriminator Model
Until this point, ideally, we suppose the generated x′ should share many words with x of class y_k (and thus be classified as y_k by humans) while being classified as class y_t by the targeted model. But this assumption is not rigorous: most of the time, x′ is not classified as y_k by humans. In natural language texts, even a single word change may change the whole meaning of a sentence. A valid adversarial example must be imperceptible to humans; that is, humans must classify x′ as class y_k. Suppose X_k is the distribution of real data of class y_k and X′_k is the distribution of generated adversarial data transformed from x ∈ X. We utilize the idea of the GAN framework to make x′ similar to data from X_k, so that x′ is classified as y_k by humans and as y_t by the targeted model at the same time.
Specifically, we adopt one discriminator D_k for each class y_k ∈ Y. D_k aims to distinguish the distribution of real labeled data x of class y_k from that of adversarial data x′ generated by G with desired class y_k:

L^k_disc = E_{x∼X_k}[log D_k(x)] + E_{x′∼X′_k}[log(1 − D_k(x′))].   (11)

The overall training objective is a min-max game played between the generator G and the discriminators D_0, D_1, ..., D_{|Y|−1}, where |Y| is the total number of classes:

min_G max_{D_0,...,D_{|Y|−1}} Σ_{k=0}^{|Y|−1} L^k_disc.   (12)

D_k tries to distinguish X_k and X′_k, while G tries to fool D_k into classifying x′ ∈ X′_k as real data. Trained in this adversarial way, the generated adversarial text distribution X′_k is drawn close to the distribution X_k of class y_k. Thus x′ is most likely to be similar to data from X_k and is consequently classified as y_k by humans.
We implement the discriminators with multi-layer perceptrons (MLPs). Because the arg max function is not differentiable, similarly to Equations 9 and 10 in Section 3.3, we first use the Gumbel-Softmax to transform the decoder outputs u_i from Equation 4 into a fixed-size matrix V = [w_1, w_2, . . . , w_m]^T. Then, D_k calculates the probability of a text being true data of class y_k as:

D_k(V) = σ(MLP_k(V)),   (13)

where σ is the sigmoid function.
Model Training
Combining Equations 7, 8, and 12, we obtain the joint loss function for model training:

L_joint = L_VAE + φ L_adv + Σ_{k=0}^{|Y|−1} L^k_disc.   (14)

We first train the VAE and the targeted model f with the training data. Then we freeze the weights of the targeted model and initialize G's weights with the pretrained VAE's weights. At last, the generator G and all the discriminators D_0, D_1, ..., D_{|Y|−1} are trained in a min-max game with the loss L_joint. The whole training process is summarized in Algorithm 1.
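Schematically, the generator-side objective can be assembled as below. The GAN-term weight `lam` is an assumption, as the paper only specifies the weight φ on L_adv; the tensor shapes are also illustrative.

```python
import torch
import torch.nn.functional as F

def joint_loss(l_vae, target_logits, y_t, d_score_fake, phi=9.0, lam=1.0):
    """Generator view of L_joint = L_VAE + phi*L_adv + (GAN term).

    l_vae         : VAE loss from the earlier listing
    target_logits : targeted model f's logits on the generated text x'
    y_t           : (batch,) wrong class the attack tries to induce
    d_score_fake  : D_k's probability that x' is real data of class y_k
    phi           : 9 is the value used in the experiments; lam is assumed
    """
    l_adv = F.cross_entropy(target_logits, y_t)       # push f toward y_t
    l_gan = -torch.log(d_score_fake + 1e-8).mean()    # fool D_k
    return l_vae + phi * l_adv + lam * l_gan
```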
Experiments
We report the performance of our method in attacking TextCNN on a sentiment analysis task, an important text classification task. Sentiment analysis is widely applied to helping businesses understand the social sentiment around their products or services by monitoring online user reviews and comments [23,4,21]. In several experiments, we evaluate the quality of the text adversarial examples for sentiment analysis generated by the proposed method.
Experiments are conducted from two aspects. First, we follow the popular settings and evaluate our model's performance in transforming an existing input text into an adversarial one; we observe that our method has a higher attack success rate, generates fluent texts, and is efficient. Second, we evaluate our method on generating adversarial texts from scratch unrestrictedly. Experimental results show that we can generate large-scale, diverse examples; the generated adversarial texts are mostly valid and can be utilized to substantially improve the robustness of text classification models.
We further report ablation studies, which verify the effectiveness of the discriminators. Defense experiment results demonstrate that large-scale generation can help make models more robust.
Experiment Setup and Details
Experiments are conducted on two popular public benchmark datasets. They are both widely used in sentiment analysis [32,19,8] and adversarial example generation [15,29,30].
Rotten Tomatoes Movie Reviews (RT) [22]. This dataset consists of 5,331 positive and 5,331 negative processed movie reviews. We divide 80% of the dataset into the training set, 10% into the development set, and 10% into the test set.
IMDB [17]. This dataset contains 50,000 movie reviews from online movie websites, consisting of positive and negative paragraphs. 25,000 samples are for training and 25,000 for testing. We held out 20% of the training set as a validation set, as in [15].
Comparing With Pair-wise Methods
In most of the existing work [26,18,1], text adversarial examples are generated in a pair-wise way: first, take a text example, and then transform it into an adversarial instance.
To compare with the current methods fairly, we limit our method to pair-wise generation. In this experiment, we set φ = 9. Specifically, we first feed an input text into the GRU encoder and set the condition c_k to the ground-truth class of the text. After that, the decoder decodes [z, c_k] to get the adversarial output text.
We choose four representative methods as baselines:
• Random: Select 10% of the words randomly and modify them.
• Fast Gradient Sign Method (FGSM) [10]: First, a perturbation is computed as ε·sign(∇_x J), where J is the loss function and x denotes the word vectors. Then, the word embedding table is searched to find the word vector nearest to each perturbed vector. FGSM is the fastest among gradient-based replacement methods (see the sketch after this list).
• DeepFool [20]: This is also a gradient-based replacement method. It aims to find the best direction, towards which it takes the shortest distance to cross the decision boundary. The perturbation is again applied to the word vectors, after which nearest-neighbor search is used to generate adversarial texts.
• TextBugger [15]: TextBugger is a gradient-free replacement method. It proposes strategies such as changing a word's spelling and replacing a word with a synonym, changing words slightly to create adversarial texts. Gradients are computed only to find the most important words to change.
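As referenced above, a schematic of the FGSM-style replacement baseline is sketched below; the step size, tensor shapes, and the assumption that the model consumes word vectors directly are illustrative.

```python
import torch
import torch.nn.functional as F

def fgsm_word_replace(model, emb_table, word_vecs, label, eps=0.5):
    """Gradient-based replacement baseline (FGSM-style), schematic.

    model     : maps a (seq, dim) word-vector matrix to logits (1, n_classes)
    emb_table : (|V|, dim) vocabulary embedding matrix
    word_vecs : (seq, dim) embeddings of the input text
    label     : (1,) ground-truth class index
    """
    word_vecs = word_vecs.clone().requires_grad_(True)
    loss = F.cross_entropy(model(word_vecs), label)
    loss.backward()
    # Perturb with eps * sign(grad), then snap each perturbed vector
    # to its nearest neighbor in the embedding table
    perturbed = word_vecs + eps * word_vecs.grad.sign()
    dists = torch.cdist(perturbed.detach(), emb_table)   # (seq, |V|)
    return dists.argmin(dim=-1)                          # new word ids
```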
Attack Success Rate. Following the existing literature [10,20,15], we evaluate the attack success rate of our method and the four baseline methods; the results are summarized in Table 1. From Table 1, we can observe that randomly changing 10% of the words is not enough to fool the classifier, which indicates the difficulty of the attack. TextBugger and our method both achieve quite high attack success rates, and our method performs even better than TextBugger, the state-of-the-art method.
We show some adversarial examples generated by our method and TextBugger to demonstrate the differences in Figure 4.
We can observe that TextBugger mainly changes the spelling of words. The generated text becomes less fluent and easy to detect by grammar checking systems. Also, though humans may guess the original meanings, the changed words are treated as out-of-vocabulary words by models. For example, TextBugger changes the spelling of 'awful', 'cliches', and 'foolish' in Figure 4; these are important negative sentiment words for a negative sentence, so it is natural that changing them to unknown words can change the prediction of models. Unlike TextBugger, our method generates meaningful and fluent content. For example, in the first example of Figure 4, we replace 'read the novel' with 'love the book'; the substitution is still fluent and makes sense to both humans and models.

Generation Speed. It takes about one hour to train our model on the RT dataset and about three hours on the IMDB dataset. We also evaluate the time cost of generating one adversarial example. We take the FGSM method as the representative of gradient-based methods, as FGSM is the fastest among them. We measure the time cost of generating 1,000 adversarial examples and calculate the average time for generating one. Results are shown in Table 2. We can observe that our method is much faster than the others, mainly because our generative model is trained beforehand: after the model is trained, generating one batch requires just one feed-forward pass.
Unrestricted Adversarial Text Generation
As mentioned in Section 3.2, after our model is trained, we can randomly sample z from the latent space, choose a desired class y_k ∈ Y, get the embedding vector c_k of y_k, and then feed [z, c_k] to the decoder to generate adversarial texts unrestrictedly, with no need for labeled text.

Figure 6. Adversarial examples generated from scratch unrestrictedly. Humans should classify adversarial texts as the chosen emotional class y_k.

Attack Success Rate. When training, we can tune φ in Equation 14 to affect the model. After training with different φ, we observe that the generated texts differ. We randomly generate 50,000 examples and compute the proportion of adversarial examples for each φ. The results are shown in Figure 5.
When φ = 0, the model is a vanilla VAE that is not trained further after pretraining. From Figure 5(a), we can observe that the attack success rate of the vanilla VAE is only 10.3% and 20.1%, respectively, which implies that merely generating random texts can hardly fool the targeted model. When φ is greater than 0, the attack success rate is consistently better than that of the vanilla VAE, which reflects the importance of L_adv. Also, the attack success rate increases as φ becomes larger: the larger φ is, the more important the role L_adv plays in the final joint loss L_joint, so the text generator G is more easily guided by L_adv to generate an adversarial example.
To evaluate the quality of the generated adversarial texts with different φ, we adopt three metrics: perplexity, validity, and diversity.
Perplexity. Perplexity [3] is a measurement of how well a probability model predicts a sample; a low perplexity indicates that the language model is good at predicting the sample. Given a pretrained language model, it can also be used to evaluate the quality of texts: a low perplexity indicates that the text is more fluent for the language model. We compute perplexity as:

PPL(x) = (∏_{j=1}^{V} P(x_j))^{−1/V},   (15)

where V is the number of words in one sentence and P(x_j) is the probability of the j-th word in x computed by the language model.
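Computed in log space for numerical stability, Equation 15 amounts to the following:

```python
import math

def perplexity(token_probs):
    """PPL = (prod_j P(x_j))^(-1/V), computed in log space."""
    V = len(token_probs)
    log_sum = sum(math.log(p) for p in token_probs)
    return math.exp(-log_sum / V)

# e.g. perplexity([0.1, 0.2, 0.05]) -> 10.0
```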
We train a language model with the training data of IMDB and RT and use it as P in Equation 15. We measure and compare the perplexity of the 50,000 generated texts and of the data in the original training set. Results are shown in Figure 5(b). We can observe that the perplexity is only a bit higher than that of the original data, which means that the quality of the generated texts is acceptable. Also, as φ gets larger, the perplexity gets bigger; this is perhaps because L_adv can distort the generated texts.

Validity. If we feed [z, c_k] to the decoder, a valid generated adversarial text is supposed to be classified as class y_k by humans but as class y_t ≠ y_k by the targeted model. We randomly select 100 generated texts for each φ and manually evaluate their validity. The results are shown in Figure 5(c). From Figure 5(c), we can observe that the validity rates of our method on both datasets are higher than 70% and much higher than that of the vanilla VAE. This implies that our method can generate high-quality and high-validity texts with a high attack success rate.

Diversity. We first generate one million adversarial texts. To compare the generated texts with the training data, we extract all 4-grams of the training data and of the generated texts. On average, for each generated text, less than 18% of its 4-grams can be found among all 4-grams of the training data, on all datasets. This shows that there exists some similarity, while our model can also generate texts with different word combinations. To compare the generated texts with each other, we count a generated text as unique if over 20% of its 4-grams do not all appear in any single one of the other generated texts; we observe that more than 70% of the generated texts are unique, which proves that the generated texts are diverse (a sketch of this 4-gram computation follows at the end of this subsection).

Adversarial Examples. We show some valid adversarial examples generated by our method in Figure 6. We can see that the adversarial examples generated by the vanilla VAE tend to be neutral, and the confidence of the targeted model on them is not high. On the contrary, the examples generated by our method receive high confidence from the targeted model, which shows that L_adv is important for the attack success rate. Besides, the fluency and validity of the texts generated by our method are acceptable.
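As referenced above, a minimal sketch of the 4-gram statistics follows; whitespace tokenization is an assumption, as the paper does not state the tokenizer.

```python
def four_grams(text):
    toks = text.split()
    return {tuple(toks[i:i + 4]) for i in range(len(toks) - 3)}

def novelty(generated, train_grams):
    """Fraction of a text's 4-grams not found in the training data."""
    g = four_grams(generated)
    return len(g - train_grams) / max(len(g), 1)

# A generated text counts as 'unique' if over 20% of its 4-grams do not
# all appear in any single other generated text (the paper's rule).
```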
Ablation Study
In this section, we further demonstrate the effectiveness of the discriminators through an ablation study.
We first remove the discriminators and L_disc and then train our model, comparing it with the model trained with L_joint in a min-max game. We evaluate their attack success rate, perplexity, and validity. Results are shown in Table 3. The attack success rates of the models trained with and without L_disc are close, but the validity of the model trained without L_disc is much lower than that of the model with L_disc. The reason for this phenomenon is as follows. When training the generator G with only L_VAE and L_adv, suppose we want to generate positive adversarial texts that the targeted model must classify as negative; the easiest way to achieve this is to change a few words in the generated text to negative words, such as "bad". But texts generated this way cannot fool humans. If we add discriminators to draw the distribution of adversarial texts close to the distribution of real data, this phenomenon can be controlled. This shows that the discriminators and the min-max game over L^k_disc can improve the validity greatly.
Defense With Adversarial Training
Using adversarial examples to augment the training data can make models more robust; this is called adversarial training. On the RT dataset, we randomly generate 4k adversarial texts to augment the training data and 1k to test the model. On the IMDB dataset, we randomly generate 10k, of which 8k are for training and 2k for testing. Results are shown in Figure 7(a) and Figure 7(b).
Through adversarial data augmentation, the test accuracy on the original test data remains stable, while the accuracy on the adversarial data is improved greatly (from 0 to > 90%). This implies that adversarial training can make models more robust without hurting their effectiveness.
Then, on the RT dataset, we first augment the training data with adversarial examples produced by pair-wise generation; these adversarial examples are generated by transforming the training data. Note that we have 8k training samples in the RT dataset. When we set a bigger φ, the attack success rate is higher, so we can generate more adversarial examples in the pair-wise way; but with any φ, unrestricted generation from scratch can produce unlimited adversarial data. We compare the adversarial data augmentation performance of pair-wise generation and unrestricted generation from scratch, using the same number of adversarial examples generated by the two modes and holding out 20% of the generated data for testing. Results are shown in Figure 7(c).
We can see that with pair-wise generation, if the training data is limited, we need to generate more adversarial examples to improve the adversarial test accuracy; higher adversarial test accuracy requires a higher φ, but a higher φ results in bigger perplexity, which means lower text quality. In contrast, with unrestricted generation from scratch, we can generate unlimited adversarial texts using a very small φ, with high fluency and similar adversarial test accuracy. Thus, under similar adversarial test accuracy, the text fluency of pair-wise generation is worse than that of unrestricted generation from scratch. This indicates the advantage of the proposed method.
Conclusion
In this paper, we have proposed a scalable method to generate adversarial texts from scratch to attack a text classification model. We add an adversarial loss to force the generated text to mislead the targeted model. Besides, we use discriminators and a GAN-like training strategy to make the adversarial texts mimic real data of the desired class. After the generator is trained, it can generate diverse adversarial examples of a desired class on a large scale without real-world texts. Experiments show that the proposed method is scalable and can achieve a higher attack success rate at a higher speed compared with recent methods. In addition, it is also demonstrated that the generated texts are of good quality and mostly valid. We further conduct ablation experiments to verify the effects of the discriminators. Data augmentation experiments indicate that our method generates more diverse adversarial texts with higher quality than pair-wise generation, which can make the targeted model more robust.
"year": 2020,
"sha1": "847ebdae8752e2a4c3d853ff39e062ea780fd9de",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "847ebdae8752e2a4c3d853ff39e062ea780fd9de",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
202746065 | pes2o/s2orc | v3-fos-license | Electrically Insulating Plasma Polymer/ZnO Composite Films
In this report, the electrical properties of plasma polymer films functionalized with ZnO nanoparticles were investigated with respect to their potential applications in the biomaterials and microelectronics fields. The nanocomposite films were produced using a single-step method that combines simultaneous plasma polymerization of renewable geranium essential oil with thermal decomposition of zinc acetylacetonate Zn(acac)2. The input powers used for the deposition of the composites were 10 W and 50 W, and the resulting composite structures were abbreviated as Zn/Ge 10 W and Zn/Ge 50 W, respectively. The electrical properties of the pristine polymers and Zn/polymer composite films were studied in metal–insulator–metal structures. At a ZnO content of around ~1%, it was found that ZnO had a small influence on the capacitance and dielectric constants of the fabricated films. The dielectric constant of films with smaller-sized nanoparticles exhibited the highest value, whereas, with the increase in ZnO particle size, the dielectric constant decreased. The conductivity of the composites was calculated to be in the range of 10−14–10−15 Ω−1 m−1, significantly greater than that of the pristine polymer, the latter estimated to be in the range of 10−16–10−17 Ω−1 m−1.
Introduction
Recent progress in material technologies has resulted in the development of a multitude of preparation approaches and potential applications for novel polymer-nanoparticle composite films. The properties of these composite materials are often superior to those of pristine polymer films, allowing them to display greater mechanical strength, high elastic modulus, large surface areas, enhanced density, and controlled optoelectronic properties [1,2]. Among all composites, metal/plasma polymer composite films have revealed interesting optical, electrical, and biological properties [3]. These nanocomposites remarkably merge the advantages of low-dimensional organic films with the great surface area of embedded nanoparticles, offering a wide range of possible applications [4,5]. A specific application depends highly on the inherent properties of the polymer and the unique surface electronic structure of the embedded nanoparticles.
In the biomedical field, the in vivo applications of electronic devices are experiencing strong growth due to their unique diagnosis and treatment capabilities [6]. However, many of the currently used implants (designed for both short- and long-term usage) need to be properly isolated/protected from interacting with biofluids once they are inserted into living systems [7]. Prominent examples include the artificial cardiac pacemaker, a battery-powered device that assists the heart in maintaining a regular rhythm, and other practical devices for sensing and drug delivery functions [8]. As these implants require electrical power to operate, an insulating material is often used to prevent any electrical interference with adjacent bio-objects (e.g., muscles, bones, etc.) [9]. Furthermore, other devices are implanted in or near electrically active tissues, such as in the spine and brain [10]. Hence, proper insulating coatings are required to ensure that no electrical leaks take place from or to the device, as these may interfere with proper device functioning or pose local or systemic health risks to the patient.
From the perspective of cell-surface interactions, it is often necessary to ensure adequate antimicrobial activity of the insulation coating to minimize the risk of implant-associated infections by reducing bacterial colonization and the subsequent formation of active biofilms. Indeed, a diversity of opportunistic pathogens can initiate 'implant-associated infections', with the likelihood of infection determined by various factors including the type of the device and other conditions of the surgical site [11]. It has been estimated that, in the United States, direct costs for healthcare-related infections range from US $28 billion to $45 billion per year, with some 60% of these being connected to the use of synthetic medical devices [12].
The development of multifunctional surface coatings that reveal both insulation and antimicrobial properties could be accomplished through the use of metal/polymer nanocomposite films. In our previous study [13], a novel nanocomposite film was fabricated from ZnO nanoparticles (NPs) and renewable geranium oil using a single-step plasma-enabled approach. We demonstrated that a significant antibacterial activity could be achieved by incorporating a low concentration (~1%) of zinc oxide nanoparticles into inherently antibacterial geranium thin films [13]. In this report, we investigated the electrical properties of ZnO/geranium polymer (Zn/Ge) films with the intention to design composite coatings, which are electrically insulating and biologically active, serving as a relevant material for encapsulation of microelectronic systems and implantable devices. The electrical characteristics of such composites primarily rely on both the metal volume fraction and the chemical structure of the polymer, where the doping level of the composite is typically determined by the filling factor [14]. Therefore, the electrical characteristics of the fabricated ZnO/Ge composites were studied using percolation theories.
Precursor Materials
Geranium essential oil (an oil rich in secondary plant metabolites) was purchased from Australian Botanical Products (ABP, Victoria, Australia) and used in the as-received condition. The precursor is a multi-component mixture that contains various hydrocarbon-rich components with broad-spectrum antimicrobial activity (e.g., citronellol and geraniol) [15,16]. Geranium essential oil was selected as the precursor due to its high volatility at room temperature, which ensures that no external heating or carrier gases are needed to transport the precursor molecules to the deposition region of the chamber.
Zinc nanoparticles were formed from zinc acetylacetonate hydrate powder, Zn(acac)2, a reasonably low-priced and commercially available Zn source. It was purchased from Sigma-Aldrich (Darmstadt, Germany) and used without further modification. The Zn(acac)2 compound was chosen owing to its relatively low decomposition temperature, which renders it suitable for gas-phase nanoparticle formation within the plasma polymer system without the need for any catalyst [17].
Material Fabrication
Prior to sample fabrication, substrates (typically glass slides, 26 mm × 76 mm) were cleaned and sonicated in a bath of water and commercially available Decon for 20 min. The substrates were then washed with acetone and dried using compressed air. Aluminum electrodes were fabricated on top of the glass slides using a thermal evaporation instrument (HINDHIVAC 12A4D, Bangalore, India) under a vacuum of 7 × 10−5 torr. Pristine polymers and ZnO/polymer thin films were deposited on the aluminum layer employing a modified plasma-enhanced chemical vapor deposition (M-PECVD) technique (MKS Instruments, Andover, MA, USA), as presented in Figure 1. A radio frequency (RF) signal generator (13.56 MHz) delivered power to a glass tube via a pair of external copper electrodes. For composite preparation, the system was modified with an external heater to achieve thermal decomposition of Zn(acac)2 powder (0.05 g), which was positioned inside the glass tube. Zinc oxide nanoparticles were generated in the vapor phase and incorporated within the polymer matrix as it grew. A quantity of 0.5 g of geranium oil was used in each deposition to yield a film thickness of around 500-700 nm, with the monomer flow rate estimated to be approximately 16 cm³/min. Finally, another aluminum electrode was deposited on top of the resulting polymer films using a copper shadow mask, producing the required configuration for the metal-insulator-metal (MIM) structure, as presented in Figure 1.
All thin films were derived from geranium essential oil at input powers of 10 W and 50 W; the resulting pristine polymers were termed Ge 10 W and Ge 50 W, while the counterpart ZnO-polymer composites were termed Zn/Ge 10 W and Zn/Ge 50 W, respectively.
Electrical Measurements
Dielectric properties of the resultant MIM devices were investigated between frequencies of 10 Hz and 100 kHz using a Hioki 3522 LCR meter (Hioki, Ueda, Japan). From the estimated thickness and area of the device and the measured capacitance values, the dielectric constant was calculated. In addition, current-voltage (I-V) measurements were conducted on the MIM structures employing a Keithley 2636A source meter (Keithley, Cleveland, OH, USA). Data were recorded between 0 and 20 V, with steps of 0.2 V, at room temperature.

Figure 1. Schematic representation of the modified-plasma system used to manufacture plasma polymer/ZnO films. The metal-insulator-metal (MIM) design that was used to investigate the electrical properties of the resultant composites is also shown.
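The conversion from measured capacitance to dielectric constant follows the parallel-plate relation; the numerical values in the sketch below are illustrative, not the actual device dimensions.

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def dielectric_constant(C, d, A):
    """Parallel-plate estimate for the MIM stack: C = EPS0 * eps_r * A / d.

    C : measured capacitance (F), d : film thickness (m),
    A : electrode overlap area (m^2).
    """
    return C * d / (EPS0 * A)

# Illustrative numbers: a 600 nm film with a 1 mm x 1 mm electrode
# overlap and C = 100 pF gives eps_r of roughly 6.8
print(dielectric_constant(100e-12, 600e-9, 1e-6))
```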
Results and Discussion
Figure 2a clearly shows that the ZnO NPs formed a ball-like structure. The average particle size was 60 nm and 80 nm for the samples fabricated at 10 W and 50 W, respectively. Furthermore, we observed some unavoidable aggregation of nanoparticles within the polymers due to the high cohesive energy of metals, as seen in Figure 2b; these aggregates statistically represented less than 10% of the overall number of nanoparticles. Figure 2c,d show the viability of gram-positive Staphylococcus aureus cells when seeded on the surfaces of the control (sterilized glass substrates) and the composite films, respectively. It was found that around 80% of S. aureus cells were active on the control, while the viability of the counterpart cells was almost 31% and 42% on Zn/Ge 10 W and Zn/Ge 50 W, respectively. These results indicate a significant antibacterial activity of the Zn/Ge 10 W samples, owing to a combination of the inherently antibacterial properties of the polymer film and the presence of ZnO NPs within the polymer matrix. The atomic force microscopy (AFM) image in Figure 2e shows that the pristine polymer was uniform and smooth, with an average roughness of 0.25 nm for an input power of 10 W. In contrast, the composite films revealed a porous surface, as seen in Figure 2e, with a random distribution of ZnO NPs, where the average roughness was 33.7 ± 2.1 nm for the input power of 10 W. SEM and AFM data are presented briefly in this report only to demonstrate that the material contained ZnO NPs with antimicrobial properties; a more in-depth investigation of the release of ZnO NPs and of the morphological, surface, chemical, and antimicrobial properties of the Zn/Ge composite films can be found in our previous report [13].

As the antibacterial properties were confirmed, studying the electrical properties was essential to ascertain the potential of this material as an encapsulation coating for microelectronic systems and medical implantable devices. The electrical characteristics of the pristine and composite polymer films were measured using capacitance measurements. The data were obtained utilizing an LCR device across a wide range of frequencies between 10 Hz and 100 kHz.
In Figure 3, it can be seen that the capacitance values for the pristine and composite films were approximately 10−9 and 10−10 F, sharply decreasing at low frequencies and approaching a constant value of around 10−10 F at high frequencies for all tested films. Regardless of frequency, the presence of ZnO nanoparticles or the RF power used for film deposition had only a minor effect on the capacitance of the pristine and composite films. The dielectric constant of the pristine and composite thin films was subsequently calculated as a function of frequency, and the results are presented in Table 1. For all samples, the dielectric constant decreased with increasing frequency for the different deposition powers. In the high-frequency range (10^4 Hz), the decreasing trend was not as sharp as in the lower-frequency region. The decreasing trend was more noticeable in the ZnO/Ge composite films, since the dielectric constant inherent to the ZnO nanoparticles also decreases with increasing frequency of the applied voltage. However, no percolation behavior was noticed for the permittivity, although such behavior has been observed in other studies of ZnO nanoparticles integrated within polymers [18]. SEM images showed that the ZnO nanoparticles were not ideally distributed within the polymer matrix, but rather touched other particles, creating interfaces between ZnO nanoparticles. It has been hypothesized that interface dipole moments originate from the electrons confined at the ZnO/ZnO interface electronic states [18]. Indeed, the energy levels of ZnO/ZnO interface states are dissimilar to those of ZnO/polymer interface states, and the electrons in those states respond to different frequencies. Hence, the interface dipoles related to ZnO/ZnO interfaces were possibly responsible for the variation in the dielectric constant values, especially at low frequencies.
In order to theoretically estimate the contribution of the dielectric constant of pure ZnO nanoparticles to that of the composite, we used the modified Rother-Lichtenecker equation. The measured dielectric constant is given by the relation [19]:

ε_measured^k = (1 − f_2) ε_1^k + f_2 ε_2^k, (1)

where ε_measured, ε_1, and ε_2 represent the dielectric constants of the ZnO/Ge composite, the polymer medium, and the ZnO nanoparticles, respectively, f_2 represents the volume fraction of the ZnO nanoparticles, and k is the shape-dependent factor (k = 0.5). Considering the differences in particle size (~60 nm particles formed at 10 W and ~80 nm formed at 50 W), the dielectric constant for the ZnO NPs was evaluated at room temperature to be ε = 6.7 and ε = 6.1, respectively. It can be understood that the dielectric constant of smaller-sized nanoparticles exhibited a higher value, whereas, with an increase in particle size, the dielectric constant decreased. This is in agreement with previous findings [19]. In contrast, other studies reported that the dielectric constant increases with an increase in the size of nanoparticles [20]. The dielectric constant of pure ZnO NPs can vary depending on the experimental conditions; for example, it was measured to be around ~10 in the high frequency region (for particle sizes of 20 to 35 nm at 30 °C) [21]. The dielectric constant calculated in the current study could thus be slightly different from the real value. This discrepancy is not unusual. It is understood that ZnO has a typical metal-excess defect, where oxygen is easily adsorbed on the surface of ZnO, resulting in the creation of high-resistivity covers (acting as Schottky barriers) on the surface of the ZnO particles [22]. The bigger the ZnO particle, the smaller the ratio of surface area to particle volume; accordingly, a smaller ZnO nanoparticle retains a larger dielectric constant. In addition, it is worth mentioning that the Rother-Lichtenecker equation is valid for ideal, well-dispersed particles. The non-ideal distribution of nanoparticles throughout the geranium polymer and the dissimilarities in particle dimension/shape could affect the accuracy of the results.

Figure 4 displays the variation of current density (J) in the pristine and composite polymer films as a function of the applied voltage (V) for materials produced at 10 W and 50 W. The current density (J) of the films is calculated through Equation (2) [23,24]:

J = J_0 exp(β √(V/d) / (k_B T)), (2)

where J_0 is the low-field current density, V the applied voltage, T the absolute temperature, k_B Boltzmann's constant, and d the thickness of the film. The factor β represents the field dropping coefficient for Richardson-Schottky (RS) conduction or Poole-Frenkel (PF) conduction, which is given by [25] as:

β_PF = 2 β_RS = (q³ / (π ε_0 ε_r))^{1/2}, (3)

where q is the electronic charge, ε_0 the free-space permittivity, and ε_r the dielectric constant. The parameters β_PF and β_RS are the field dropping coefficients for Poole-Frenkel and Richardson-Schottky conduction, respectively.
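Inverting Equation (1) for ε_2 makes the estimate of the ZnO permittivity reproducible; a minimal sketch is given below. The composite and polymer permittivities and the filler fraction in the example call are hypothetical placeholders rather than values taken from Table 1.

```python
def zno_permittivity(eps_composite: float, eps_polymer: float,
                     f2: float, k: float = 0.5) -> float:
    """Invert the modified Rother-Lichtenecker relation
    eps_composite**k = (1 - f2)*eps_polymer**k + f2*eps_zno**k for eps_zno."""
    return ((eps_composite ** k - (1.0 - f2) * eps_polymer ** k) / f2) ** (1.0 / k)

# Hypothetical inputs: composite eps ~3.4, polymer eps ~3.0, 10 vol% ZnO filler.
print(zno_permittivity(3.4, 3.0, 0.10))  # ~8.1
```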
DC conductivity (σ) of the pristine and composite films was estimated from the current-voltage data through the relation σ = J d / V, where d is the thickness of the film, J the calculated current density, and V the applied voltage. The conductivity of the composite films was measured in the range of 10^−14–10^−15 Ω^−1 m^−1, compared to the pristine polymer films, which revealed a conductivity of 10^−16–10^−17 Ω^−1 m^−1.
In nanoparticles/polymer composites, there is a critical volume/weight concentration of fillers known as the percolation threshold. According to percolation theory, when the content of conductive filler is near the percolation threshold, the fillers connect with each other to build a continuous conducting pathway, providing the potential for electrons/carriers to transport among the fillers [26]. Thus, the composite always reveals a rapid increase in electrical properties [27]. The percolation threshold is determined by the filler shape, size distribution, interlayer thickness, temperature, physicochemical properties, and applied external field [28]. Yet, the correlation between the filler concentration and conductivity of the composite is not fully understood [29].
The conductivity (σ) near the percolation threshold (φ_c) can be given by the following power law:

σ_c = σ_f (φ_f − φ_c)^t,

where σ_c is the electrical conductivity of the composite, σ_f the conductivity of the filler, φ_f the volume fraction of the filler, φ_c the percolation concentration, and t the critical exponent (a parameter determining the power of the conductivity based on φ_c). The critical exponent t depends on the dimension of the tested system and is set between 1.6 and 2 for a three-dimensional structure. The value of φ_c was adjusted until the best linear fit was achieved in log σ_c vs. log(φ_f − φ_c). The ranges of critical exponent values fitted from experimental data in different studies indicate that t is not universal, as it varies in the range of 0.9 to 2 [30][31][32]. Based on the percolation threshold equation, we estimated the percolation threshold of Zn/Ge composites to be ~2.67%. It is clear that the Zn/Ge composites did not reach the percolation threshold, since the conductivity remained at relatively low values (10^−14 Ω^−1 m^−1) rather than increasing rapidly with the addition of the particles. The relative increase in conductivity after introducing ZnO NPs could be linked to the increase in the number of dipoles, where the reformation of the trap structure is induced by the ZnO nanoparticles. This suggests that the formed composites do not follow the behavior of such structures as 2D or 3D conducting particles, but show more complex charge-tunneling transport mechanisms that govern their conductivity [33]. The electrical conduction could also increase due to electronic and impurity contributions arising from the zinc precursor during the thermal breakdown of zinc acetylacetonate (Zn(acac)₂).
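The fitting procedure described above — adjusting φ_c until log σ_c vs. log(φ_f − φ_c) is most linear — can be sketched in a few lines; the slope of the best fit then recovers the critical exponent t. The conductivity data in the sketch are hypothetical placeholders, not measurements from this study.

```python
import numpy as np

# Hypothetical (phi_f, sigma) data for a composite series above threshold.
phi_f = np.array([0.03, 0.04, 0.06, 0.08, 0.12])
sigma = np.array([2e-9, 1e-7, 3e-6, 2e-5, 3e-4])

def fit_percolation(phi_f, sigma, candidates):
    """Scan phi_c candidates; keep the one giving the best linear fit of
    log10(sigma) vs log10(phi_f - phi_c). Returns (phi_c, t, r2)."""
    best = (None, None, -np.inf)
    for phi_c in candidates:
        mask = phi_f > phi_c
        if mask.sum() < 3:
            continue
        x = np.log10(phi_f[mask] - phi_c)
        y = np.log10(sigma[mask])
        t, b = np.polyfit(x, y, 1)          # slope t = critical exponent
        resid = y - (t * x + b)
        r2 = 1.0 - resid.var() / y.var()
        if r2 > best[2]:
            best = (phi_c, t, r2)
    return best

phi_c, t, r2 = fit_percolation(phi_f, sigma, np.linspace(0.0, 0.029, 30))
print(f"phi_c ≈ {phi_c:.4f}, critical exponent t ≈ {t:.2f}, R^2 = {r2:.3f}")
```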
Some studies showed the percolation threshold for conductivity of the ZnO/polymer system to be 15 wt.% of the polymer volume fraction [34]. Other researchers found the percolation concentration to be 2.8 vol% (ZnO = 200 nm) [35] and 0.05% for ZnO nano-rods (d = 400 nm and L = 2 µm) [36].
As shown in Figures 5 and 6, the fitting results of ln(J)-(V^{1/2}) and ln(J)-ln(V) indicate that the conduction mechanism in the high voltage range could be related to Richardson-Schottky (RS) or Poole-Frenkel (PF) conduction. Furthermore, the fitting of ln J vs. ln V for all manufactured polymers and composites demonstrates behavior similar to that of ln J vs. V^{1/2}. Judging from the power-law index rates and the linear fitting of the J-V plots for the pristine polymer previously reported in [37], the Schottky mechanism could specifically dominate the charge transport of geranium polymers in the higher field region. This suggests that the ZnO NPs did not significantly affect the transport at high fields, but may indirectly change the carrier transport properties at lower fields. This outcome is expected, since the ZnO NPs incorporated within the matrix were at a low concentration (below the percolation threshold). These results provide a potential pathway to adjust the bulk and surface properties of other plasma polymers that have previously been shown to display antibacterial activity [38][39][40][41] or attractive optoelectronic properties [42][43][44][45]. For example, plasma polymerized γ-terpinene thin films have shown sufficient optical transparency and photostability to be used for the encapsulation of PCPDTBT:PC70BM solar cells to prevent loss of efficiency [46,47], whereas polymers derived from terpinen-4-ol and linalool have been proposed as insulating interlayers in flexible electronic devices [48,49]. This approach can also potentially provide the means for in situ functionalization of vertically aligned graphene networks that have been fabricated from essential oils using the same plasma set-up [50,51], to improve their properties for such applications as sensing and energy storage. Indeed, there is a large range of materials that can be fabricated using plasma-enabled techniques [52][53][54], with an equally broad range of potential applications spanning medicine, electronics, energy and other fields [55][56][57][58].
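The mechanism assignment discussed above rests on comparing the experimental field dropping coefficient, extracted from the slope of ln J vs. (V/d)^{1/2}, with the theoretical β_RS and β_PF of Equation (3). The sketch below illustrates this comparison; the J-V points, film thickness, temperature, and dielectric constant are hypothetical placeholders.

```python
import numpy as np

kB = 1.380649e-23       # Boltzmann constant, J/K
q = 1.602176634e-19     # elementary charge, C
EPS0 = 8.8541878128e-12 # vacuum permittivity, F/m
T, d, eps_r = 300.0, 500e-9, 3.0  # hypothetical temperature, thickness, permittivity

# Hypothetical high-field J-V points (A/m^2 vs V).
V = np.array([2.0, 4.0, 8.0, 16.0, 32.0])
J = np.array([1.1e-6, 2.9e-6, 1.1e-5, 6.0e-5, 5.2e-4])

# ln J = ln J0 + (beta / (kB*T)) * sqrt(V/d); the slope yields beta.
slope, _ = np.polyfit(np.sqrt(V / d), np.log(J), 1)
beta_exp = slope * kB * T

beta_RS = np.sqrt(q**3 / (4 * np.pi * EPS0 * eps_r))
beta_PF = 2 * beta_RS
print(beta_exp, beta_RS, beta_PF)  # compare magnitudes to assign RS vs. PF conduction
```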
Conclusions
The electrical properties of pristine and ZnO-doped plasma polymerized geranium oil-derived thin films were systematically investigated across the frequency range of 10 Hz to 100 kHz using metal-insulator-metal structures. The dielectric constant was found to diminish with increasing RF input power for all samples. Irrespective of the RF power, the studied samples had almost the same frequency dependence of the dielectric constant, which quickly declined within the low frequency zone. In addition, the mechanism of charge transport was examined via the typical current-voltage approach, which showed that the Schottky mechanism possibly dominates the charge transport in the higher field region. The resultant material demonstrated a moderately low conductivity (down to 10^−16–10^−17 Ω^−1 m^−1 for the pristine films), establishing the characteristics of a classical insulator. Incorporation of ZnO nanoparticles into the geranium polymer thin films did not change the nature of the charge transport, as the nanocomposite films still behaved as insulators. The aforementioned properties, in addition to the antibacterial activity and other valuable features of Zn/Ge thin films (e.g., low density, relatively strong adhesion to substrates, and a high optical energy gap), make them an appropriate candidate for various dielectric needs in innovative microelectronics. | 2019-09-25T13:16:35.995Z | 2019-09-23T00:00:00.000 | {
"year": 2019,
"sha1": "dac3272a63b1f38b0f3201fea9b70468da805ec7",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1944/12/19/3099/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "90341785a8ea4013a33b2f708640e883bbeb868a",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
29151575 | pes2o/s2orc | v3-fos-license | The association of angiogenic factors and chronic kidney disease
Background There are limited data on the associations of circulating angiogenic factors with chronic kidney disease (CKD). We investigate the associations of circulating vascular endothelial growth factor (VEGF)-A, angiopoietin-1, angiopoietin-1/VEGF-A ratio, VEGF receptor 1 (VEGFR-1), VEGFR-2, and pentraxin-3 with CKD. Methods We recruited 201 patients with CKD and 201 community controls without CKD from the greater New Orleans area. CKD was defined as estimated glomerular filtration rate (eGFR) < 60 mL/min/1.73 m2 or presence of albuminuria. Multivariable quantile and logistic regression models were used to examine the relationship between angiogenesis-related factors and CKD adjusting for confounding factors. Results After adjusting for covariables including traditional cardiovascular disease (CVD) risk factors, C-reactive protein, and history of CVD, the medians (interquartile range) were 133.08 (90.39, 204.15) in patients with CKD vs. 114.17 (72.45, 170.32) pg/mL in controls without CKD (p = 0.002 for group difference) for VEGF-A; 3951.2 (2471.9, 6656.6) vs. 4270.5 (2763.7, 6537.2) pg/mL (p = 0.70) for angiopoietin-1; 25.87 (18.09, 47.90) vs. 36.55 (25.71, 61.10) (p = 0.0001) for angiopoietin-1/VEGF-A ratio; 147.81 (122.94, 168.79) vs. 144.16 (123.74, 168.05) ng/mL (p = 0.25) for VEGFR-1; 26.20 (22.67, 29.92) vs. 26.28 (23.10, 29.69) ng/mL (p = 0.31) for VEGFR-2; and 1.01 (0.79, 1.49)vs. 0.89 (0.58, 1.18) ng/mL (p = 0.01) for pentraxin-3, respectively. In addition, an elevated VEGF-A level and decreased angiopoietin-1/VEGF-A ratio were associated with increased odds of CKD. Conclusions These data indicate that plasma VEGF-A and pentraxin-3 levels were increased and the angiopoietin-1/VEGF-A ratio was decreased in patients with CKD. Future prospective studies are warranted to examine whether angiogenic factors play a role in progression of CKD. Electronic supplementary material The online version of this article (10.1186/s12882-018-0909-2) contains supplementary material, which is available to authorized users.
Background
Chronic kidney disease (CKD) is a highly prevalent disease, affecting over 26.3 million adults in the US alone and 497.5 million in the world [1,2]. CKD has been associated with increased risks of end-stage renal disease (ESRD), cardiovascular disease (CVD), and premature death [3,4]. Traditional risk factors only partially explain the excess risk of CKD and associated ESRD and CVD in the general population [5]. Identification of novel risk factors for CKD will further the understanding of CKD pathogenesis and provide additional targets for therapies [6,7].
The findings regarding the association of angiogenic factors and CKD in humans are somewhat inconsistent, likely due to diversities in sample size, study population, sources of angiogenic factors, and covariables used in the analyses. It was recently reported that vascular endothelial growth factor (VEGF)-A predicted CKD progression in diabetic patients in a small cohort study [19]. However, another study suggested VEGF expression was reduced in biopsied kidney tissue from patients with diabetic nephropathy [20]. Elevated soluble VEGF receptor-1 (sVEGFR-1) and reduced VEGFR-2 were associated with mortality in dialysis patients [21,22]. Angiopoietin-1 mediates migration, adhesion, and survival of endothelial cells, and co-expression of angiopoietin-1 and VEGF enhances angiogenesis [23]. Decreased angiopoietin-1 and increased angiopoietin-2 levels have been identified in patients with CKD [24,25]. Associations of angiopoietin-2 [24][25][26][27] and angiopoietin-1 [27] with subclinical CVD have been reported in CKD. Angiopoietin-2 was found to be associated with increased mortality among CKD patients [24]. Pentraxin-3 can bind fibroblast growth factor-2 (FGF2) and act as an FGF2 antagonist to inhibit FGF2-dependent angiogenesis [26].
This study aims to examine the association between multiple circulating angiogenic factors and CKD in a larger pre-dialysis CKD population.
Study participants
Two hundred one patients with CKD and 201 controls without CKD were recruited between 2007 and 2010 in the greater New Orleans area. The patients with CKD were recruited from nephrology and internal medicine clinics by trained research staff in the study area. These patients were between 21 and 74 years of age. All of the eligible CKD cases identified through the referral clinics were invited to participate. CKD was defined as an eGFR < 60 mL/min/1.73 m2 or presence of albuminuria (> 30 mg/24 h). Exclusion criteria were a history of chronic dialysis, acute kidney injury, kidney transplant, pregnancy, immunotherapy in the preceding 6 months, chemotherapy in the preceding 2 years, HIV or AIDS, being unable or unwilling to provide informed consent, and participation in a current clinical trial that might have an impact on CKD. Controls were recruited through mass mailing to residents between 21 and 74 years of age residing in the same area, determined by zip code. Control eligibility for participation was assessed by a clinic screening visit.
The Institutional Review Board of Tulane University approved the conduct of this study, and written informed consent was obtained at the screening visit from all participants.
Data collection
Trained staff administered a questionnaire at a clinical visit to obtain demographic information, lifestyle factors (e.g., cigarette smoking, alcohol consumption, and physical activity), medical history (CVD, diabetes, hypercholesterolemia and hypertension), and the use of medications including aspirin and antihypertensive, hypoglycemic, and lipid-lowering agents.
Three blood pressure (BP) measurements were obtained by trained and certified staff at a clinical visit according to a standard protocol adapted from American Heart Association recommendations [27]. BP was measured using a standard mercury sphygmomanometer, with one of four cuff sizes based on the patient's arm circumference, on the patient in a seated position and after 5 min of rest. Height and weight were measured twice in patients in lightweight indoor clothing without shoes during the clinical visit and were used to calculate body mass index (BMI).
An overnight fasting blood sample was collected to measure glucose, creatinine, cholesterol, triglycerides, and angiogenesis-related biomarkers. Samples were stored at −80 °C. All samples were measured after being stored for less than 5 years. All of the biomarkers have been previously reported to be stable when stored at −80 °C [28,29]. Multiple freeze-thaw cycles can increase concentrations of VEGF-A [30]. Biomarkers were therefore measured after the first thaw of the samples to minimize the opportunity for freeze-thaw cycle-related changes. Serum creatinine was measured using the Roche enzymatic method (Hoffman-La Roche, Basel, Switzerland). eGFR was estimated based on serum creatinine (SCr), sex, age, and race using the CKD-Epi equation [31]. A 24-h urinary sample was collected to measure creatinine and albumin. Serum cholesterol and triglyceride levels were assayed using an enzymatic procedure on the Hitachi 902 automatic analyzer (Roche Diagnostics, Indianapolis, IN, USA). Serum glucose was measured using a hexokinase enzymatic method (Roche Diagnostics, Indianapolis, IN, USA). Urinary concentrations of albumin and creatinine were measured with a DCA 2000 Analyzer (Bayer AG, Leverkusen, Germany). Plasma VEGF-A, sVEGFR-1, VEGFR-2, and angiopoietin-1 were measured using a sandwich immunoassay on a Meso Scale Discovery Instrument (Meso Scale Diagnostics, LLC., Rockville, MD, USA). Plasma pentraxin-3 was measured by ELISA assay from R & D Systems (Minneapolis, MN, USA). A stringent quality control process was applied in all laboratory tests. All biomarkers were measured in duplicate, with inter-assay coefficients of variation of 23.4% for VEGF-A, 2.4% for sVEGFR-1, 2.6% for VEGFR-2, 10.13% for angiopoietin-1, and 7.1% for pentraxin-3, respectively. All laboratory measures were conducted at the Laboratory for Clinical Biochemistry Research, the University of Vermont.
Statistical analysis
Characteristics of CKD cases and non-CKD controls were compared using chi-square tests for categorical variables and t-tests for continuous variables. Medians and interquartile ranges of the angiogenesis-related biomarkers were calculated for the CKD patients and controls, and the differences were compared using the Mann-Whitney test [32]. Quantile regression was used to obtain adjusted medians and interquartile ranges [33]. The Wald test was used to assess differences in the adjusted medians between CKD patients and controls [33]. The covariates included in the multivariable quantile regression model were age, race, gender, current cigarette smoking, weekly alcohol consumption, physical activity, BMI, LDL-cholesterol, HDL-cholesterol, C-reactive protein, fasting plasma glucose, systolic BP, self-reported history of CVD, and use of aspirin and hypoglycemic, antihypertensive, and lipid-lowering agents.
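A minimal sketch of the adjusted-median comparison described here, using the quantile regression implementation in statsmodels; the file name and column names are hypothetical, and the covariate list is abbreviated relative to the full model.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("angiogenesis_biomarkers.csv")  # hypothetical dataset

# Median (q = 0.5) regression of a biomarker on CKD status plus covariates;
# the coefficient on `ckd` estimates the adjusted difference in medians.
model = smf.quantreg("vegf_a ~ ckd + age + gender + race + bmi + sbp + crp", data=df)
fit = model.fit(q=0.5)
print(fit.summary())
```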
Multivariable logistic regression models were used to assess adjusted odds ratios comparing the highest tertile of the angiogenesis-related biomarkers to the lower two tertiles between CKD patients and the controls (except for the angiopoietin-1/VEGF-A ratio, for which the lowest tertile was compared to the higher two). The same panel of covariates used in the multivariable quantile regression was included in the multivariable logistic regression models. This analysis was also performed stratified by diabetes status.
Associations between the angiogenesis biomarkers and stage of CKD were assessed using polytomous logistic regression and quantile regression. Stages 4 and 5 were combined due to small sample size in each category.
A sensitivity analysis was performed among CKD participants in which the medians of the angiogenesis-related biomarkers were compared between diabetic CKD cases and non-diabetic CKD cases.
Results
Characteristics of the study participants are presented in Table 1. Those with CKD were older, more likely to report a history of CVD, hypertension, diabetes, and dyslipidemia, and to have self-reported use of antihypertensive, hypoglycemic, lipid-lowering agents, or aspirin, and were less likely to drink alcohol, have a high-school education, or be physically active. Those with CKD had higher average BMI, systolic BP, and fasting glucose, but lower LDL-cholesterol and HDL-cholesterol. The age-, gender-, and race-adjusted and multivariable-adjusted medians of the angiogenesis-related biomarkers are presented in Table 2. After adjusting for potential confounding factors, the medians of VEGF-A and pentraxin-3 were significantly higher in CKD patients than controls, while that of the angiopoietin-1/VEGF-A ratio was significantly lower in CKD patients than in controls. The medians of angiopoietin-1, VEGFR-1, and VEGFR-2 were not significantly different between CKD patients and controls.
In multivariable logistic regression analysis adjusting for important confounding factors, the odds of CKD were more than doubled for subjects with the highest tertile of VEGF-A compared to those in the lower two tertiles (Table 3). In addition, the odds of CKD were more than three times greater for subjects with the lowest tertile of the angiopoietin-1/VEGF-A ratio compared to those in the higher two tertiles. The levels of angiopoietin-1, VEGFR-1, VEGFR-2 and pentraxin-3 were not significantly associated with increased odds of CKD in the multivariable analysis.
In multivariable adjusted quantile regression models stratified by stage of CKD, median VEGF-A significantly increased with increasing severity of CKD (Table 4). Median angiopoietin-1/VEGF-A ratio significantly decreased with increasing CKD severity. PTX-3 increased with increasing CKD severity, though this did not achieve statistical significance. VEGFR-1 level increased significantly with increasing severity of CKD. Medians for angiopoietin-1 and VEGFR-2 did not differ by stage of CKD.
In the sensitivity analysis, medians of the biomarkers were compared for diabetic and non-diabetic cases. There was no significant difference in the medians of the biomarkers between the diabetic and non-diabetic CKD patients for angiopoietin-1, VEGF-A, angiopoietin-1/VEGF-A ratio, VEGFR-1, VEGFR-2, or pentraxin-3.
Discussion
The present study indicated that higher VEGF-A and pentraxin-3 levels and a lower angiopoietin-1/VEGF-A ratio may be associated with increased risk of CKD. These associations remained after adjustment for established CKD risk factors as well as CVD and the use of antihypertensive, antidiabetic, lipid-lowering medications, and aspirin. These findings suggest that abnormal angiogenesis is present in patients with CKD.
Our study reports that plasma VEGF-A is significantly higher in patients with pre-dialysis CKD compared to controls. Animal and laboratory studies have suggested that increased VEGF-A expression causes glomerular hypertrophy, proliferation of podocytes, mesangial cell proliferation, extracellular matrix expansion, interstitial fibrosis, and proteinuria [34,35]. The therapeutic effects of anti-VEGF-A and anti-angiogenic factors in experimental diabetic nephropathy have been reported, including amelioration of increases in urinary albumin excretion, glomerular volume, glomerular basement membrane thickening, in addition to decreased slit pore density and nephrin quantity [36][37][38]. Urinary VEGF was reported to be elevated in patients with diabetic nephropathy and positively associated with proteinuria [39]. Furthermore, plasma VEGF-A levels have previously been found to be associated with progression to ESRD in 67 patients with diabetic CKD [19]. However, Lindenmeyer et al. reported a decrease in mRNA expression of VEGF-A in the renal interstitium of patients with diabetic nephropathy in a small study [20]. A study of murine folic acid induced nephropathy found depleted VEGF-A in kidney tissue, but increased circulating VEGF-A, possibly from damage to the systemic vasculature induced by folic acid [17]. Our study findings support the hypothesis that increased circulating VEGF-A may be associated with increased risk of CKD. Further studies are warranted to examine the causal relationship of VEGF-A and the progression of CKD. Additionally, our study suggests that the treatment targeting VEGF in CKD needs further careful evaluation due to inconsistency between increased circulating VEGF-A and decreased renal expression of VEGF-A in findings from different studies.
Our study identified lower angiopoietin-1 in CKD patients than in non-CKD controls, and lower angiopoietin-1 with increased CKD severity, though these differences did not achieve statistical significance. Decreased angiopoietin-1 has been reported in pre-dialysis CKD in children [25], but was not associated with eGFR [40] or mortality among patients with CKD [24]. Animal studies indicate that treatment with angiopoietin-1 might reduce kidney damage in unilateral ureteral obstruction, streptozotocin-induced type-1 diabetes, and folic acid induced nephropathy [15][16][17]. Deletion of angiopoietin-1 from mice embryos coupled with injury or microvascular stress caused organ damage, accelerated angiogenesis and fibrosis, suggesting angiopoietin-1 may balance the angio-fibrogenic response associated with elevated VEGF-A and angiopoietin-2 levels from tissue injury and microvascular disease, like that observed in diabetes [18]. More studies are warranted to confirm the relationship of angiopoietin-1 and CKD in humans.

A low ratio of angiopoietin-1 to VEGF-A was significantly associated with odds of CKD in our study. Similar associations were observed in both diabetic and non-diabetic subjects, but the associations achieved significance only in non-diabetic subjects, likely due to the small sample size of the diabetic group. Lower angiopoietin-1, relative to VEGF-A concentrations, may be associated with impaired angiogenesis and enhanced endothelial leakage induced by VEGF-A, as co-expression of angiopoietin-1 and VEGF-A enhances angiogenesis [19] and angiopoietin-1 can potently block VEGF-induced endothelial permeability in vitro [41]. Podocyte-specific repletion of angiopoietin-1 in a model of type 1 diabetes decreased glomerular endothelial cell proliferation, hyperfiltration and albuminuria by 70% [16]. Angiopoietin-1 deficiency and VEGF-A excess are thought to destabilize endothelia in type 1 diabetic mice, and the improvements observed in mice treated with angiopoietin-1 may be attributable to vascular stabilization from attenuation of VEGF-A signaling by increased angiopoietin-1 [16,18]. Our study sample was found to have a similar growth factor milieu, with angiopoietin-1 deficiency relative to excess VEGF-A among CKD patients. A recent study suggested that low angiopoietin-1 level was positively associated with abnormal cardiac structure in stages 3-5 CKD patients [42]. Further studies are warranted to investigate whether imbalanced angiopoietin-1 and VEGF-A may be associated with an increased risk of ESRD and CVD in CKD patients.
The VEGFR-1 and VEGFR-2 levels were not significantly different between CKD patients and controls. VEGFR-1 was significantly elevated in subjects with stage 4 and 5 CKD compared to the controls, suggesting that high VEGFR-1 may be associated with more severe CKD, which is consistent with the observation that elevated VEGFR-1 was associated with inflammation and mortality in dialysis patients in previous studies [21,22]. Unlike our study, a previous study reported VEGFR-2 to be lower in dialysis patients [22]. The underlying explanation for this inconsistency between our finding and theirs may lie in differences in the study population and the severity of CKD.
Pentraxin-3 levels were significantly higher in patients with CKD compared to the controls after adjusting for multiple confounding factors including C-reactive protein in our study, even though pentraxin-3 did not increase substantially with increased severity of CKD and the odds of CKD associated with pentraxin-3 did not achieve statistical significance in the multivariable logistic analysis. When logistic regression models were run separately in diabetic and non-diabetic subjects, a statistically significant doubling of the odds of CKD among those with high pentraxin-3 was observed in non-diabetic subjects, while a much larger but not statistically significant increase in the odds of CKD was observed among diabetic subjects. The inconsistency of these findings is likely due to limited statistical power in the subgroup analyses. Pentraxin-3 has been associated with endothelial dysfunction, decreased eGFR, and proteinuria in previous studies [43,44]. It is also associated with inflammation in cardiovascular disease [45,46]. However, the association observed between PTX-3 and CKD in our study is independent of the inflammatory and endothelial dysfunction biomarker C-reactive protein, suggesting that PTX-3 might play a role in the pathogenesis of CKD via an additional pathway such as abnormal angiogenesis. Future study is needed in this important area.
There are several noteworthy strengths of this study. Our study is among the largest of early studies that have found plasma VEGF-A levels were significantly increased in patients with CKD and that there might be a significant imbalance of VEGF-A and angiopoietin-1 in CKD patients. These associations were independent of multiple covariables for CKD that were carefully measured in our study. Furthermore, our study included a racially diverse group of patients with variable degrees of renal function. This study also had several limitations. First, this is a cross-sectional analysis, which prevents the determination of the direction of the relationship between these angiogenic factors and CKD. Second, our study has a relatively small sample size. There is limited statistical power to do subgroup analyses by severity of CKD and diabetes status. Third, the majority of our CKD cases have stage 3 CKD, with only a minority in stage 4 or 5. An underrepresentation of more severe CKD may have limited our ability to identify associations between biomarkers and severe CKD. A larger prospective cohort study might provide more definitive evidence for the association of plasma angiogenic factors with CKD.
Conclusions
In conclusion, this study shows that higher circulating VEGF-A and pentraxin-3 levels, as well as a lower angiopoietin-1/VEGF-A ratio, may be associated with an increased risk of CKD. Future prospective studies are warranted to examine whether these angiogenic factors play a role in the progression of CKD and whether these abnormalities of angiogenic factors may serve as therapeutic targets for the treatment of CKD.
Additional file
Additional file 1: Table S1. Age, Race and Gender Adjusted and Multivariable-adjusted Odds Ratios of Chronic Kidney Disease in Patients with and without Diabetes by Dichotomized* Angiogenesis-related Factors. The data presented in the table describe the associations between the angiogenesis related factors and CKD in diabetics and non-diabetics. (DOCX 16 kb) | 2018-05-23T04:14:20.941Z | 2018-05-21T00:00:00.000 | {
"year": 2018,
"sha1": "aa210b502dca28d3829f62aca96e337c70d11ca6",
"oa_license": "CCBY",
"oa_url": "https://bmcnephrol.biomedcentral.com/track/pdf/10.1186/s12882-018-0909-2",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "aa210b502dca28d3829f62aca96e337c70d11ca6",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119484304 | pes2o/s2orc | v3-fos-license | The Energy Eigenvalues of the Two Dimensional Hydrogen Atom in a Magnetic Field
In this paper, the energy eigenvalues of the two dimensional hydrogen atom are presented for the arbitrary Larmor frequencies by using the asymptotic iteration method. We first show the energy eigenvalues for the no magnetic field case analytically, and then we obtain the energy eigenvalues for the strong and weak magnetic field cases within an iterative approach for $n=2-10$ and $m=0-1$ states for several different arbitrary Larmor frequencies. The effect of the magnetic field on the energy eigenvalues is determined precisely. The results are in excellent agreement with the findings of the other methods and our method works for the cases where the others fail.
I. INTRODUCTION
The study of the two dimensional hydrogen atom in a magnetic field has been a subject of considerable interest over the years. Within the framework of non-relativistic quantum mechanics, many works have been carried out in order to solve the eigenvalue equation and to find the correction to the energy eigenvalues in the presence of a constant magnetic field [1,2,3,4,5]. The solution of this problem is very interesting and popular because of the advances in nanofabrication technology, which have enabled the creation of low-dimensional structures such as quantum wires, quantum dots and quantum wells in semiconductor physics. Recent developments in nanostructure technology have also permitted one to study the behavior of electrons and impurities in quasi two-dimensional configurations (quantum wells) [1,2,3,4,5,6,7,8,9,10,11].
The canonical Hamiltonian for a charge moving in a constant magnetic field can be written as

H = (1/(2µ)) (p − (e/c)A)² + V(r), (1)

where µ is the mass, e is the electric charge, p is the momentum of the particle, A is the vector potential, c is the speed of light and V(r) is the cylindrical potential [12]. The Hamiltonian for the 2D hydrogen atom in the magnetic field includes the Coulomb interaction −Z/r between a conduction electron and a donor impurity center when a constant magnetic field B is applied perpendicular to the plane of the motion. If the vector potential in the symmetric gauge is chosen as A = (1/2) B × r, the full Hamiltonian for this system can be derived, in the CGS system and in atomic units (ℏ = µ = e = 1), as

H = −(1/2)∇² + ω_L L_z + (1/2) ω_L² r² − Z/r, (2)

and the Schrödinger equation becomes HΨ = EΨ. Since the problem pertains to two dimensions, it is adequate to work in polar coordinates (r, φ) within the plane and to use the following ansatz for the eigenfunction:

Ψ(r, φ) = (1/√(2π)) e^{imφ} R(r). (4)

Here, the radial wavefunction R(r) must satisfy the following radial Schrödinger equation:

d²R/dr² + (1/r) dR/dr + [2(E − m ω_L) + 2Z/r − m²/r² − ω_L² r²] R(r) = 0, (5)

where ω_L = B/2c is the Larmor frequency, E is the energy eigenvalue and m is the eigenvalue of the angular momentum.
As seen from equation (5), we need an effective potential of the αr^−2 + βr^−1 + γr² type, a hybrid of the Coulomb and harmonic oscillator potentials, in order to describe the two dimensional hydrogen atom in a magnetic field. This potential cannot be solved analytically except in particular cases, and there are no general closed-form solutions of equation (5) in terms of the special functions [13]. There are analytic expressions for the eigenvalues only for particular values of ω_L and m [14,15].
Therefore, in order to find the energy eigenvalues of the two dimensional hydrogen atom in a constant magnetic field with arbitrary Larmor frequencies ω_L [16,17,18,19], we use a more practical and systematic method, the Asymptotic Iteration Method (AIM), for different n and m quantum numbers. This is precisely the aim of this paper.
In the next section, we briefly outline AIM with all the formulae necessary to perform our calculations. In section III, we first apply AIM to solve the Schrödinger equation for the case ω_L = 0 (no magnetic field) and obtain an analytical expression for any n and m states.
Then, we show how to solve the resulting Schrödinger equation for the case ω_L ≠ 0 (strong and weak magnetic fields), where there are no analytical solutions. Here, for any n and m quantum numbers, we show the effect of the magnetic field on the energy eigenvalues and compare our results with the findings of other methods [15]. Finally, section IV is devoted to our summary and conclusion.
II. BASIC EQUATIONS OF THE ASYMPTOTIC ITERATION METHOD (AIM)
AIM is designed to solve second-order differential equations of the form [20,21,22]

y''(x) = λ_0(x) y'(x) + s_0(x) y(x), (6)
where λ_0(x) ≠ 0 and the functions s_0(x) and λ_0(x) are sufficiently differentiable. The differential equation (6) has a general solution [20]. If, for sufficiently large k > 0, the asymptotic condition of the method holds, we obtain the α(x) values from

α(x) = s_k(x)/λ_k(x) = s_{k−1}(x)/λ_{k−1}(x), (8)

where

λ_k(x) = λ'_{k−1}(x) + s_{k−1}(x) + λ_0(x) λ_{k−1}(x),
s_k(x) = s'_{k−1}(x) + s_0(x) λ_{k−1}(x), k = 1, 2, 3, .... (9)

The energy eigenvalues are obtained from the quantization condition. The quantization condition of the method, together with equation (9), can be written as follows:

δ_k(x) = λ_k(x) s_{k−1}(x) − λ_{k−1}(x) s_k(x) = 0, k = 1, 2, 3, .... (10)

For a given potential, the radial Schrödinger equation is converted to the form of equation (6). Then, s_0(x) and λ_0(x) are determined, and the s_k(x) and λ_k(x) parameters are calculated by using equation (9). The energy eigenvalues are determined by the quantization condition given by equation (10).
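The recursion (9) and quantization condition (10) translate directly into symbolic code. The sketch below runs the AIM iteration on the textbook harmonic-oscillator test case (λ_0 = 2x, s_0 = 1 − E, exact eigenvalues E = 1, 3, 5, ...) rather than on the equation of the present problem, whose coefficients are set up in section III; the iteration structure is identical.

```python
import sympy as sp

x, E = sp.symbols('x E')

# AIM input y'' = lam0*y' + s0*y; harmonic oscillator after y = exp(-x**2/2)*f(x).
lam0, s0 = 2*x, 1 - E
lam_prev, s_prev = lam0, s0

for k in range(1, 6):
    # Recursion (9): lam_k = lam'_{k-1} + s_{k-1} + lam0*lam_{k-1},
    #                s_k   = s'_{k-1} + s0*lam_{k-1}.
    lam_k = sp.expand(sp.diff(lam_prev, x) + s_prev + lam0 * lam_prev)
    s_k = sp.expand(sp.diff(s_prev, x) + s0 * lam_prev)
    # Quantization condition (10), evaluated at a suitable point x0 = 0.
    delta = sp.expand(lam_k * s_prev - lam_prev * s_k).subs(x, 0)
    roots = sorted(sp.solve(delta, E), key=float)
    print(k, roots)  # iteration k reproduces E = 1, 3, ..., 2k + 1
    lam_prev, s_prev = lam_k, s_k
```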
III. AIM SOLUTION FOR THE TWO DIMENSIONAL HYDROGEN ATOM
Applying the scale transformation r = r_0 ρ (r_0 = 1/(2Z)) to equation (5) and using a suitable ansatz for R, we arrive at the transformed radial equation (12). In what follows, we show how to obtain the energy eigenvalues from this equation for two different cases, depending on the value of ω_L, and show the effect of ω_L on the eigenvalues.
A. Case ω_L = 0: no magnetic field

When ω_L = 0, equation (12) reduces to equation (13). In order to solve this equation with AIM, we must transform it to the form of equation (6); therefore, we propose a physically reasonable wave function and, substituting it into equation (13), obtain an equation of the required form. By means of equation (9), we may calculate λ_k(ρ) and s_k(ρ). Combining these results with the quantization condition given by equation (10) yields a sequence of roots which, when generalized, gives

(ε′)_n = 1/(2(l′ + n + 1)), n = 0, 1, 2, 3, ....
If one inserts the values of ε′ and l′ into equation (18), the eigenvalues of the 2D hydrogen atom in the case ω_L = 0 become

E_{n,m} = −2Z²/(2n + 2|m| + 1)², n = 0, 1, 2, ....

This analytical formula is in agreement with previous works [12]. We discuss the results of the ω_L = 0 case together with the findings of the case ω_L ≠ 0 in the next subsection.
B. Case ω_L ≠ 0: strong and weak magnetic fields

Before applying AIM to this problem, we have to obtain the asymptotic wavefunction and then transform equation (12) into a form amenable to AIM. We transform equation (12) into another Schrödinger form by changing the variable as ρ = u² and then inserting R(u) = u^{1/2} χ(u) into the transformed equation. Thus, we get another Schrödinger form, equation (20), which is more suitable for an AIM solution, where Λ = 2l′ + 1/2. It is clear that χ(u) behaves like u^{Λ+1} when u goes to zero and like exp(−(α/4)u⁴) at infinity; therefore, the wavefunction for this problem can be written as follows:

χ(u) = u^{Λ+1} exp(−(α/4)u⁴) f(u),

where α = 2β. If this wave function is inserted into equation (20), we obtain a second-order homogeneous linear differential equation of the form of equation (6), from which the λ_0(u) and s_0(u) values can be read off directly. In AIM, we calculate the energy eigenvalues from the quantization condition given by equation (10). It is important to point out that the problem is called "exactly solvable" if this equation is solvable at every point u. In our case, since the problem is not exactly solvable, we have to choose a suitable point u_0 and solve the equation δ_k(u_0, ε) = 0 to find the ε values. In this work, we obtain u_0 from the maximum point of the asymptotic wavefunction, which is the same as the root of λ_0(u) = 0; thus u_0 = ((Λ + 1)/α)^{1/4}. The results obtained by using AIM are shown in Tables I and II in comparison with the results of Ref. [15] for Z = 1, n = 2−10, m = 0 and m = 1 with different Larmor frequencies ω_L. Ref. [15] was able to solve this equation analytically for particular values of ω_L and of the n and m quantum numbers. However, he could not obtain the ground state energy eigenvalue, and the energy eigenvalues also diverged for ω_L = 0: no solution could be obtained. In Table III, we have shown the eigenvalues for several Larmor frequencies for the ground state and second excited states, which could not be obtained by Ref. [15], in order to show that our method can handle these cases.

IV. SUMMARY AND CONCLUSION

Besides showing the applicability of a new method to solve the radial Schrödinger equation in a magnetic field for any n and m quantum numbers, one of the novelties of this paper is that we have shown that it is possible to obtain the ground state energy eigenvalues where other works such as [15] have failed. We have also shown that it is possible to solve the ω_L = 0 and ω_L ≠ 0 cases simultaneously, whereas, in general, the ω_L = 0 case makes the energy eigenvalues diverge and non-physical results are obtained (see Ref. [15] for details). It is clearly shown in this paper that the method presented in this study is systematic, and that it is very efficient and practical for obtaining the eigenvalues of Schrödinger-type equations with and without a magnetic field. It is worth extending this method to the solution of other problems. | 2019-04-14T03:22:49.381Z | 2006-09-01T00:00:00.000 | {
"year": 2007,
"sha1": "b440ffdf0013939888fcfaa4df71aee245ae7dc2",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/quant-ph/0703102",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "b440ffdf0013939888fcfaa4df71aee245ae7dc2",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
218614037 | pes2o/s2orc | v3-fos-license | Embeddings into left-orderable simple groups
We prove that every countable left-ordered group embeds into a finitely generated left-ordered simple group. Moreover, if the first group has a computable left-order, then the simple group also has a computable left-order. We also obtain a Boone-Higman-Thompson type theorem for left-orderable groups with recursively enumerable positive cones. These embeddings are Frattini embeddings, and isometric whenever the initial group is finitely generated. Finally, we reprove Thompson's theorem on word problem preserving embeddings into finitely generated simple groups and observe that the embedding is isometric.
INTRODUCTION
A group is simple if it has no proper non-trivial normal subgroups. Infinite finitely generated simple groups were discovered in [Hig51]. In fact, every countable group embeds into a finitely generated simple group [Hal74,Gor74], see also [Sch76,Tho80].
1.1. Left-order preserving embeddings into simple groups. A group is left-ordered if it has a linear order that is invariant under multiplication from the left.
By [KKL19, Theorem 4.5], every finitely generated left-ordered group embeds into a finitely generated left-ordered group whose derived subgroup is simple.
Infinite finitely generated simple and left-ordered groups were discovered by Hyde and Lodha in [HL19], see also [MBT18,HLNR19]. We extend the construction of such groups [HL19,MBT18] as follows.
Theorem 1. Every countable left-ordered group G embeds into a finitely generated left-ordered simple group H. Moreover, the order on H continues the order on G.
We also study additional geometric and computability properties of such embeddings, see Remark 1.1 and Theorem 2.
A subgroup G of H is called Frattini embedded if any two elements of G that are conjugate in H are also conjugate in G. Also, if there exist finite generating sets X and Y of G and H, respectively, such that the word metric of G with respect to X coincides with the restriction to G of the word metric of H with respect to Y, then G is said to be isometrically embedded in H.
Remark 1.1. The embedding of Theorem 1 can be chosen to be a Frattini embedding. If G is finitely generated, the embedding is also isometric.
A systematic study of computability aspects of orders on groups was initiated in [DK86], see also [Dow98]. A left-order is computable if it is decidable whether a given element is positive, negative or equal to the identity. In particular, a finitely generated computably left-ordered group has a decidable word problem.
The following theorem is the computable version of Theorem 1.
Theorem 2. Every countable computably left-ordered group G embeds into a finitely generated computably left-ordered simple group H. Moreover, the order on H continues the order on G.
In addition, the embedding is a Frattini embedding, and if G is finitely generated, then it is isometric.
Boone-Higman and Thompson's theorem revisited. A landmark result on computability in groups
is the Boone-Higman theorem. It states that a finitely generated group has decidable word problem if and only if it embeds into a simple subgroup of a finitely presented group. Thompson strengthened Boone-Higman's theorem by showing that the simple group can be chosen to be finitely generated [Tho80].
The next theorem is a version of Thompson's theorem that, in addition, preserves the geometry of the group.
Theorem 3 (cf. Theorem A.1). Every countable group G embeds into a finitely generated simple group H such that if G has decidable word problem, then so does H.
In addition, the embedding is a Frattini embedding. If G is finitely generated, then the embedding is isometric.
Remark 1.2. Belk and Zaremsky [BZ20, Theorem C] recently proved that every finitely generated group isometrically embeds into a finitely generated simple group, but they did not study the Frattini property or computability properties of their embedding. Their result and Theorem 3 strengthen a theorem of Bridson, who proved that every finitely generated group quasi-isometrically embeds into a finitely generated group without any non-trivial finite quotient [Bri98]. Remark 1.3. If the group G in Theorem 3 is not finitely generated, instead of saying G has decidable word problem, it is more common to say that G is a computable group (see Definition 2.2 below).
Bludov and Glass obtained a left-orderable version of the Boone-Higman theorem by showing that a left-orderable group has decidable word problem if and only if it embeds into a simple subgroup of a finitely presented left-orderable group [BG09,Theorem E]. In this context, it is natural to ask whether the simple group can be made finitely generated, cf. [Gla81,p. 251,Problem 4].
The next theorem answers this question in the positive given that the set of positive elements is recursively enumerable. Namely, the following theorem holds.
Theorem 4. Let G be a left-orderable finitely generated group that has a recursively enumerable positive cone with respect to some left-order. Then G has decidable word problem if and only if G embeds into a finitely generated simple subgroup of a finitely presented left-orderable group.
Remark 1.4. The existence of left-orderable groups with decidable word problem that do not embed in a group with computable left order was shown in [Dar19].
Also, the existence of finitely generated left-orderable groups with decidable word problem but without recursively enumerable positive cone was first shown in [Dar19]. Earlier, the analogous result for countable but not finitely generated groups was shown in [HT18].
The question whether Theorem 4 holds without the assumption that G has a left-order with recursively enumerable positive cone remains open. Also it is open whether a finitely generated left-orderable simple group with decidable word problem but without recursively enumerable positive cone exists.
1.3. Sketch of the embedding constructions. We sketch the proof of Theorems 1 and 2. We start with a countable computably left-ordered group G.
Step 1 (Embedding into a finitely generated group). By a classical wreath product construction [Neu60] every countable left-ordered group embeds into a 2-generated left-ordered group. A version of this embedding construction with additional computability properties was established in [Dar15]. We use the construction from [Dar15] (see Theorem 5.15) to embed the initial left-orderable countable group G into a two-generated left-orderable group that also preserves the computability properties of the left-order on G.
Step 2 (Embedding into a perfect group). A group is perfect if it coincides with its first derived subgroup.
By Step 1, we may assume that G is finitely generated. We let T (ϕ) be a finitely generated left-ordered simple group of [MBT18]. We note that T (ϕ) is computably left-ordered and that G embeds into a finitely generated left-orderable perfect subgroup G 1 of G R T (ϕ) in a way that preserves the computability property of the left-order on G, see Theorem 5.1. Our construction might be considered as a modification of a similar embedding result from [Tho80].
Step 3 (Embedding into a simple group of piecewise homeomorphisms of flows). Finally, let G 1 be a finitely generated (computably) left-ordered perfect group in which G embeds. We embed G 1 into a finitely generated (computably) left-ordered simple group. To this end, we extend the construction of [MBT18]. In [MBT18], Matte-Bon and Triestino construct a finitely generated left-orderable simple group T (ϕ) of piecewise linear homeomorphisms of flows of the suspension of a minimal subshift ϕ, see Subsection 3.2.
The main observation is that every group H of piecewise homeomorphisms of an interval with countably many breakpoints (see Definition 3.8) embeds into a subgroup T (H, ϕ) of piecewise homeomorphisms of flows of the suspension, see Definition 3.13. We then study the subgroup T (H, ϕ). In particular, it is finitely generated if H is so. Just as in [MBT18], a standard commutator argument implies that it is simple given that H is perfect, and if H preserves the orientation of the interval, then it is also left-orderable.
Finally, we use the dynamical realization of left-orderability: every left-ordered group embeds into the group of orientation-preserving homeomorphisms of an interval. We use this embedding to conclude that G 1 , and hence also G, embeds into the finitely generated left-ordered simple group T (G 1 , ϕ). To analyze the required computability aspects, as well as to show that the embeddings are isometric and Frattini, we use a modified version of the dynamical realization of left-orderability, see Proposition 6.7.
If G has decidable word problem, it embeds into a group of computable piecewise homeomorphisms of an interval [Tho80,§3]. If we use this embedding in Step 3 of the above construction, then we obtain the aforementioned result of [Tho80], Theorem 3.
1.4. Plan of the paper. In Section 2, we review computable groups and computably left-ordered groups.
In particular, we explain the computability of the standard dynamical realization of left-orderability.
After that, we come to the main parts of our paper. In Section 3, we discuss Step 3, that is, we extend Matte-Bon and Triestino's construction of left-orderable finitely generated simple groups in order to embed perfect groups into finitely generated simple groups.
Step 2, our version of Thompson's splinter group construction, is discussed in Section 5.
Step 1 is reviewed in Section 5.4. Finally, we prove Theorems 1, 2 and 4. To analyze the computability aspects required by Theorem 2, as well as to obtain the isometry and Frattini properties of the embeddings, we introduce a stronger version of the standard dynamical realization of left-orderability that we call the modified dynamical realization (see Section 6). In Section 7, we prove Theorem 3 using the groups of piecewise homeomorphisms of flows discussed in Section 3.
Acknowledgements. We thank Y. Lodha, M. Triestino and M. Zaremsky for their interest and useful comments on a previous version of this work. The first named author thanks Université Rennes-I for hospitality and financial support and was supported by ERC-grant GroIsRan no.725773 of A. Erschler. The second named author was supported by ERC-grant GroIsRan no.725773 of A. Erschler and by Austrian Science Fund (FWF) project J 4270-N35.
A function f : N → N is computable if there is a Turing machine that, on input n, outputs f(n). A subset of N is recursively enumerable if there is a computable map (i.e. an enumeration) from N onto that set. Moreover, it is recursive if, in addition, its complement is recursively enumerable as well.
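To make the notion concrete, here is a minimal Python sketch (our addition, not from the paper) of the partial decision procedure that every recursively enumerable set admits; the function `enum` is a hypothetical stand-in for the computable map onto the set.

```python
def semi_decide(n, enum):
    """Halts and returns True iff n lies in the image of enum.

    If n is not in the set, the loop runs forever: recursively
    enumerable sets admit only this one-sided, partial test, which is
    why recursiveness additionally asks the complement to be
    enumerable.
    """
    i = 0
    while True:
        if enum(i) == n:
            return True
        i += 1
```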
Similarly, a function f : Q → Q is computable if there is a Turing machine that, for every input q ∈ Q (under a fixed encoding of the rationals), outputs f(q). Moreover, if J is an interval in R, then we call a function f : J → R computable if its restriction to the rational numbers in J maps to Q and this restriction is computable.
2.1. Group presentations and the word problem. Let S be a finite set. We denote by (S ∪ S −1 ) * the set of all finite words over the alphabet S ∪ S −1 .
Definition 2.1 (word problem). Let G = ⟨S⟩ be a finitely generated group. The word problem of G is decidable if the set {w ∈ (S ∪ S^{-1})* | w = 1 in G} is recursive. The decidability of the word problem does not depend on the choice of the finite generating set.
2.2. Computable groups. For a countable group G = {g_1, g_2, . . .}, let m : N × N → N be the function such that m(i, j) = k whenever g_i g_j = g_k. The group G is called computable (with respect to the given enumeration) if m is computable. Similarly, a left-order ≺ on G is computable (with respect to the enumeration) if the set {(i, j) | g_i ≺ g_j} is recursive, and G is computably left-orderable if it admits such an order.

Remark 2.5. In case G = ⟨S⟩, |S| < ∞, G is computably left-orderable with respect to some enumeration if and only if there is a left-order ≺ on G such that the set {w ∈ (S ∪ S^{-1})* | 1 ≺ w} ⊆ (S ∪ S^{-1})* is a recursive set. In this case ≺ is called a computable left-order on G, and its computability property does not depend on the choice of the finite generating set, see [Dar19] for details.
Remark 2.6. Every computably left-orderable group is computable. In particular, every finitely generated computably left-ordered group has decidable word problem.
By [HT18] there is a left-orderable computable group without any computable left-order. In fact, there is a finitely generated orderable computable group without any computable order [Dar19].
Example 2.7. The natural order on the group of rational numbers is computable.
Example 2.8 (Thompson's group F). A dyadic point in R is one of the form n/2^m, for some n, m ∈ Z. An interval is dyadic if its endpoints are dyadic.
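As a small illustration (our addition), dyadicity of a rational is easy to test: a reduced fraction is dyadic exactly when its denominator is a power of 2.

```python
from fractions import Fraction

def is_dyadic(r: Fraction) -> bool:
    # Fraction is stored in lowest terms, so r = n / 2**m for some
    # integers n, m iff the reduced denominator is a power of two.
    d = r.denominator
    return d & (d - 1) == 0

assert is_dyadic(Fraction(3, 8)) and not is_dyadic(Fraction(1, 3))
```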
Let J be a closed dyadic interval in R. We denote by Q J the set of the rational points on J. We denote by F J the group of piecewise linear homeomorphisms of J that are differentiable except at finitely many dyadic points and such that the respective derivatives, where they exist, are powers of 2.
We define the left-order ≺ on F_J in the following way, cf. [CFP96, Theorem 4.11]: let Q_J = {q_1, q_2, . . .} be a fixed recursive enumeration. Let f, g ∈ F_J be distinct and let i_0 be the minimal index such that f(q_{i_0}) ≠ g(q_{i_0}); we set f ≺ g if f(q_{i_0}) < g(q_{i_0}). In fact, this order is computable: indeed, let f, g ∈ F_J be given as words in a finite generating set. As the word problem in F_J is decidable, the case f = g can be computably verified. We note that the elements of F_J are computable functions. In addition, an element of F_J is uniquely determined by its restriction to the rationals. Thus, if f ≠ g, the minimal index i_0 such that f(q_{i_0}) ≠ g(q_{i_0}) exists and can be computably determined. Therefore, the order is computable.
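The comparison step of this order can be sketched as follows (a hypothetical illustration, our addition; `f` and `g` stand for evaluation maps of elements of F_J on rationals, and `rationals` yields the fixed recursive enumeration q_1, q_2, . . . of Q_J).

```python
def compare(f, g, rationals):
    """Assuming f != g (checked beforehand via the word problem),
    scan the fixed enumeration of Q_J for the first disagreement and
    compare the values there. Returns -1 if f precedes g, else +1."""
    for q in rationals:
        fq, gq = f(q), g(q)
        if fq != gq:
            return -1 if fq < gq else +1
```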
2.4. Positive cones. If G is left-ordered, then the positive cone is the set of all positive elements of G.
We note that the positive cone is a semigroup. In fact, if G admits a linear order such that the positive elements generate a semigroup in G, then the linear order is a left-order on G, see [DNR14,CR16].
Lemma 2.9. Let G = {g 1 , g 2 , . . .} be a finitely generated group with a fixed enumeration, and '≺' be a left-order on G. Then '≺' is computable if and only if its positive cone is recursively enumerable and the word problem in G is decidable.
Proof. If the order is computable, then the word problem is decidable, see Remark 2.6. In addition, there is a partial algorithm that confirms that an element, given as a word in the generators of G, is positive. This implies that the positive cone is recursively enumerable.
On the other hand, suppose that the positive cone is recursively enumerable and the word problem is decidable, and let w be a word in the generators of G. We first computably determine whether or not w = 1. If w = 1, we stop.
Otherwise either w or w −1 is in the positive cone of G. As the positive cone is recursively enumerable, there is a partial algorithm to confirm that a positive element is in the positive cone. We simultaneously run this algorithm for w and w −1 . As one of these elements is positive, it stops for w or w −1 . We thus know whether w is positive or negative. This completes the proof.
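The second half of this proof is an instance of dovetailing. A minimal Python sketch (our addition), with hypothetical helpers: `is_identity` decides the word problem, `cone_enum` enumerates words representing positive-cone elements, and `equal_words` (derivable from the word problem) decides equality of two words.

```python
from itertools import count

def sign(w, w_inv, is_identity, cone_enum, equal_words):
    """Return 0, +1 or -1 for the group element represented by w;
    w_inv is a word for its inverse."""
    if is_identity(w):
        return 0
    # Exactly one of w, w_inv is in the positive cone, so checking
    # each enumerated cone element against both words must terminate.
    for i in count():
        c = cone_enum(i)
        if equal_words(c, w):
            return +1
        if equal_words(c, w_inv):
            return -1
```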
2.5. Dense orders. A linear order on a set S is dense if for any g ≺ h in S, there exists g' ∈ S such that g ≺ g' ≺ h.
Recall that Q_J denotes the set of rational points of an interval J ⊂ R. We fix a recursive enumeration Q_J = {q_0, q_1, . . .} such that the natural order on Q_J is computable with respect to this enumeration.

Lemma 2.10. Let S = {s_0, s_1, . . .} be a countable set equipped with a dense linear order without minimal or maximal elements. Then there is an order preserving bijection Φ : S → Q_J. If, in addition, the order on S is computable, then the map i → Φ(s_i) is computable.
We recall the proof of this lemma, that we will later modify to prove Lemma 6.8.
Proof. We define Φ : S → Q_J iteratively as Φ : s_{j_i} → q_{j_i} for i ∈ N. First, define s_{j_0} = s_0 and q_{j_0} = q_0. Now, assuming that S_k := {s_{j_0}, . . . , s_{j_k}} and Q_k := {q_{j_0}, . . . , q_{j_k}} are already defined, we extend them according to the following procedure:
(1) Choose the smallest i such that s_i ∉ S_k and set S_{k+1} = S_k ∪ {s_i}. Choose the smallest j such that q_j ∉ Q_k and Φ : S_k ∪ {s_i} → Q_k ∪ {q_j} is an order preserving bijection. Set s_{j_{k+1}} = s_i and q_{j_{k+1}} = q_j.
(2) Choose the smallest j such that q_j ∉ Q_{k+1}, and choose the smallest i such that s_i ∉ S_{k+1} and Φ^{-1} : Q_{k+1} ∪ {q_j} → S_{k+1} ∪ {s_i} is an order preserving bijection. Set s_{j_{k+2}} = s_i and q_{j_{k+2}} = q_j.
(3) Repeat the process starting from step (1).
Since the orderings of S and Q J are computable with respect to the fixed enumerations, the above described iterative procedure of defining Φ is also computable. Therefore, the map i → Φ(s i ) ∈ Q J is computable.
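For illustration, the back-and-forth procedure above can be sketched in Python as follows (our addition); all callables are hypothetical stand-ins for the computable enumerations and computable orders on S and Q_J, and density guarantees that each search terminates.

```python
from itertools import count

def back_and_forth(s_enum, q_enum, s_less, q_less, steps):
    """Build `steps` pairs of a partial order isomorphism S -> Q_J."""
    pairs = []  # matched (s, q) pairs so far

    def consistent(s, q):
        # q must sit among the matched q's exactly as s sits among
        # the matched s's.
        return all(q_less(qi, q) == s_less(si, s) for si, qi in pairs)

    for step in range(steps):
        used_s = {s for s, _ in pairs}
        used_q = {q for _, q in pairs}
        if step % 2 == 0:  # forth: match the least-index unmatched s
            s = next(s_enum(i) for i in count() if s_enum(i) not in used_s)
            q = next(q_enum(j) for j in count()
                     if q_enum(j) not in used_q and consistent(s, q_enum(j)))
        else:              # back: match the least-index unmatched q
            q = next(q_enum(j) for j in count() if q_enum(j) not in used_q)
            s = next(s_enum(i) for i in count()
                     if s_enum(i) not in used_s and consistent(s_enum(i), q))
        pairs.append((s, q))
    return pairs
```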
Remark 2.11. If G is left-ordered, then the lexicographical left-order on the group G × Q is dense (and has no minimal or maximal elements). In addition, if G = {g 1 , g 2 , . . .} has a computable left-order, the lexicographical left-order on G × Q is computable with respect to the induced enumeration. Moreover, the standard embedding G → G × Q that sends g → (g, 0) is computable and a Frattini embedding.
2.6. Dynamical realization of computably left-ordered groups. Let J be an interval in R. We denote the group of homeomorphisms of J by Homeo(J), and the subgroup of orientation preserving homeomorphisms of J by Homeo + (J).
We note that for every interval J ⊂ R, every countable left-ordered group G admits an embedding into Homeo_+(J). We also note the following fact.
Proposition 2.12. Let G be a countable group.
If G is left-orderable, then there is an embedding ρ G : G → Homeo + (J) such that, for all g ∈ G \ {1}, the map ρ G (g) : J → J does not fix any rational interior point of J.
If G is computably left-orderable, then, in addition, all the maps ρ_G(g) can be taken to be computable.
We actually need a stronger variant of Proposition 2.12, see Proposition 6.7, but to the best of our knowledge, the computability aspect of Proposition 2.12 does not appear in the literature either. For this reason we decided to include a proof of Proposition 2.12. We analyze computability aspects based on the proof given in [CR16, §2.4].
By Remark 2.11, we may assume that the order on G is dense. Then, by Lemma 2.10, there is an order preserving bijection Ψ : G → Q_J.

Definition 2.13. Let Ψ : G → Q_J be an order preserving bijection. We define ρ_G^Ψ : G → Homeo_+(J) to be the action determined on rational points by ρ_G^Ψ(g)(Ψ(h)) := Ψ(gh) for all g, h ∈ G, extended to J by continuity and monotonicity. We note the following.

Lemma 2.14. Let Ψ : G → Q_J be an order preserving bijection. Then ρ_G^Ψ : G → Homeo_+(J) is a well-defined homomorphism; moreover, if the map i → Ψ(g_i) is computable, then each ρ_G^Ψ(g) is a computable function.

Lemma 2.15. Let Ψ : G → Q_J be an order preserving bijection. If x ∈ Q_J is such that ρ_G^Ψ(g)(x) = x, then g = 1.
Proof of Proposition 2.12. Suppose G = {g 1 , g 2 , . . .} has a computable left-order with respect to the given enumeration. By Lemma 2.10, we may assume that the map i → Ψ(g i ) is computable. By Lemmas 2.14 and 2.15, ρ Ψ G : G → Homeo + (J) satisfies the properties required by Proposition 2.12.
3. GROUPS OF PIECEWISE HOMEOMORPHISMS OF FLOWS
We first collect definitions and facts on groups of piecewise linear homeomorphisms of flows from [MBT18]. As every countable group embeds as a subgroup in a group of piecewise homeomorphisms of flows, we then start to study such groups in more generality.
We recall from Example 2.8 that a dyadic point in R is one of the form n/2^m for some n, m ∈ Z. Moreover, for a dyadic interval J, F_J is Thompson's group acting on J.
3.1. Minimal subshifts. Let A be a finite alphabet and ϕ the shift on A^Z. If X is a closed and shift-invariant subset of A^Z, then (X, ϕ) is a dynamical system called a subshift. A subshift is minimal if it contains no non-empty, proper, closed ϕ-invariant subset.
Let (X, ϕ) be a minimal subshift of A Z . Then X is totally disconnected and Hausdorff, and every ϕ-orbit is dense in X.
The suspension (or mapping torus) Σ of (X, ϕ) is the quotient of X × R by the equivalence relation defined by (x, t) ∼ (ϕ^n(x), t − n), n ∈ Z. We denote the corresponding equivalence class of (x, t) ∈ X × R by [x, t]. The map Φ_t that sends [x, s] to [x, s + t] is a homeomorphism and defines a flow Φ on Σ, the suspension flow, so that (Σ, Φ) is a dynamical system as well. The orbits of the suspension flow are homeomorphic to the real line.
We denote by H(ϕ) the group of homeomorphisms of Σ that preserve the orbits of the suspension flow, and by H_0(ϕ) the subgroup of H(ϕ) of those that, in addition, preserve the orientation on each orbit.
3.2. The group T(ϕ). Let C be a clopen subset of X and let J ⊂ R be of diameter < 1. The embedding of C × J into X × R descends to an embedding into Σ that we denote by π C,J .
For every clopen C ⊂ X and subset J of diameter < 1 in R, the map π C,J is a chart for the suspension, whose image is denoted by U C,J . If z is in the interior of U C,J , then π C,J is a chart at z.
Definition 3.1 (Dyadic chart). Let C be a clopen subset of X, and let J be a dyadic interval of length < 1 in R. Then π_{C,J} : C × J → Σ is called a dyadic chart.

Definition 3.3 ([MBT18]). The group T(ϕ) is the subgroup of H_0(ϕ) consisting of all elements h ∈ H_0(ϕ) such that for all z ∈ Σ there is a dyadic chart π_{C,J} at z and a piecewise dyadic map f : J → f(J) with finitely many breakpoints such that the restriction of h to U_{C,J} is given by π_{C,J}(x, t) → π_{C,f(J)}(x, f(t)). We recall that F_J denotes the group of piecewise dyadic homeomorphisms of J with finitely many breakpoints.
Definition 3.4. Let π_{C,J} be a dyadic chart and let f ∈ F_J. Then f_{C,J} is the map in T(ϕ) whose restriction to U_{C,J} is given by π_{C,J}(x, t) → π_{C,f(J)}(x, f(t)) and that is the identity map elsewhere. We let F_{C,J} be the subgroup of T(ϕ) generated by the elements f_{C,J}, f ∈ F_J.

The group T(ϕ) is infinite, simple, left-ordered and finitely generated [MBT18, Corollary C]. As noted in [MBT18], the first examples of such groups [HL19] are subgroups of T(ϕ).
In Section 3.5, we revisit the proof of simplicity given in [MBT18]. In Section 3.6, we revisit the proof of left-orderability given in [MBT18]. To this end we note the following.
In particular, the T(ϕ)-action on Σ is minimal.
For any group H, we denote by H' the first derived subgroup of H.
Lemma 3.6 (Lemma 4.8 of [MBT18]). Let C ⊂ X be clopen and J ⊂ R be dyadic. If C × J is covered by a family {C_i × J_i}_{i∈I} for clopen C_i ⊂ C and dyadic intervals J_i ⊂ J, then F_{C,J} is contained in the group generated by ∪_{i∈I} F_{C_i,J_i}.

We assume without restriction that (X, ϕ) is a minimal subshift over the two letter alphabet A = {0, 1}.
For k, n ∈ Z with k ≥ 0 and a word w = a_0 a_1 . . . a_k over A, we denote by C_{n,w} the cylinder subset of X consisting of all x ∈ X such that x_{n+j} = a_j for all 0 ≤ j ≤ k. As a matter of fact, the cylinder subsets are clopen and form a basis for the topology of X. We note that ϕ(C_{n,w}) = C_{n−1,w}.
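A quick sketch (our illustration) of cylinder sets and the displayed shift identity, modelling bi-infinite sequences as functions Z → A:

```python
def in_cylinder(x, n, w):
    """x: callable i -> letter; True iff w occurs in x starting at n."""
    return all(x(n + j) == a for j, a in enumerate(w))

# With the shift (phi x)(i) = x(i + 1), one checks directly that
# in_cylinder(lambda i: x(i + 1), n - 1, w) == in_cylinder(x, n, w),
# i.e. phi(C_{n,w}) = C_{n-1,w}.
```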
In particular, T(ϕ) can be generated by six elements.
Definition 3.8. A map h defined on an interval I is a piecewise homeomorphism with countably many breakpoints if
• there are countably many pairwise disjoint intervals I_1, I_2, . . . , whose union is I,
• for all of these I_i the restriction of h to I_i is a homeomorphism onto its image.
If, in addition, the intervals I_i and h(I_i) are dyadic, we say that h has dyadic breakpoints. If the restrictions of h to the intervals I_i are dyadic maps, we say that h has dyadic pieces.
If S is a set, bij(S) denotes the group of permutations of S.
Let us fix a half-open interval J strictly contained in [0, 1). We denote by C(J) the subgroup of bij(J) consisting of all piecewise homeomorphisms with dyadic breakpoints on J. The subgroup of C(J) of orientation preserving bijections is denoted by C_+(J).
Example 3.9. Every countable group embeds into C(J).
Example 3.10. Every countable left-orderable group embeds into the group of orientation preserving homeomorphisms of J, and therefore into C + (J).
Since the set of non-dyadic rational points of J is dense in J, the next lemma follows from basic properties of (piecewise) continuity.
Lemma 3.11. Every function in C(J) is uniquely determined by its values on non-dyadic rational points on J. Moreover, every function from C(J) is continuous at non-dyadic rational points.
To construct respective embeddings into finitely generated simple groups we propose the following extension of the construction in [MBT18].
3.4. Groups of flows of piecewise homeomorphisms. Let us fix a subgroup G of C(J).
For g ∈ G, we denote by g_{Σ,J} the bijection of Σ whose restriction to U_{X,J} is given by π_{X,J}(x, t) → π_{X,J}(x, g(t)), and g_{Σ,J} is the identity map elsewhere.

We extend Definition 3.3 as follows.

Definition 3.13. The group T(G, ϕ) is the subgroup of bij(Σ) generated by T(ϕ) and the elements g_{Σ,J}, g ∈ G.
Lemma 3.14. The group G embeds into T (G, ϕ) by g → g Σ,J . Moreover, if G is finitely generated, then T (G, ϕ) is finitely generated as well.
Proof. The second statement follows from the definition of T(G, ϕ) and the fact that T(ϕ) is finitely generated. For the first statement, it is enough to notice that, by definition, g_{Σ,J} is the identity map if and only if g = 1.
We call a point [x, t] ∈ Σ with t ∈ Q a rational point; this does not depend on the choice of the representative (x, t). If, in addition, t is not dyadic, we say that [x, t] is a non-dyadic rational point.
Lemma 3.16. There exists a dense and recursive set of non-dyadic rational points in Σ.
Proof. Let us choose a recursive countable subset X' := {x_1, x_2, . . .} ⊂ X that is dense in X, for example, the set of proper ternary fractions. Moreover, for all i ∈ N, let R_i ⊂ Σ be the set of non-dyadic rational points on the Φ-orbit of [x_i, 0], and let R := ∪_{i∈N} R_i. Note that each R_i is a recursive set. Therefore, since X' is also recursive by our choice, we get that R is recursive as well.
Lemma 3.17. If G is a subgroup of C(J), then the elements of T (G, ϕ) are uniquely defined by their values on any (countable) dense set of non-dyadic rational points of Σ. Moreover, the elements of T (G, ϕ) are continuous at non-dyadic rational points of Σ.
Proof. By Lemma 3.16 there is a countable dense set of non-dyadic rational points in Σ; let R ⊂ Σ be such a set. Let us define X' ⊂ X to be the set of those x ∈ X for which there exists t ∈ Q such that [x, t] ∈ R. Since R is dense, X' is dense as well. Therefore, the elements of T(G, ϕ) are uniquely defined by their restrictions to the Φ-orbits of the elements [x, 0] for x ∈ X'. Now, the lemma follows from the combination of this observation with Lemma 3.11.
3.5. Simplicity and rigid stabilizers. To prove simplicity results, we use the following standard tool.
Let Y be a set, and H a group acting faithfully on Y . Then the rigid stabilizer of a subset U ⊂ Y is the subgroup of H whose elements move only points from U . We denote the rigid stabilizer of U by RiSt(U ).
The following lemma is used to prove the simplicity of T(G, ϕ).

Lemma 3.18. Let N be a normal subgroup of H. If there is a non-trivial element g ∈ N and a non-empty subset U ⊆ Y such that g(U) ∩ U = ∅, then the first derived subgroup RiSt(U)' is contained in N.

Lemma 3.19. If G is a perfect subgroup of C(J), then the group T(G, ϕ) is simple.

Proof of Lemma 3.19. Let N be a non-trivial normal subgroup of T(G, ϕ).

Claim 1: The group T(ϕ) is in N.
The proof of Claim 1 follows the arguments of simplicity in [MBT18].
Proof of Claim 1. Let us fix a non-trivial element g ∈ N. Then, by Lemma 3.17, there exists a non-dyadic rational point y ∈ Σ such that g(y) ≠ y. By Lemma 3.17, the elements of T(G, ϕ) are continuous at the non-dyadic rational points of Σ. Therefore, since Σ is a Hausdorff space and g(y) ≠ y, there exists a non-empty open set U containing y such that g(U) ∩ U = ∅. Let z ∈ Σ, and choose h ∈ T(ϕ) such that h(z) ∈ U. Such a map h exists as, by Lemma 3.5, the action of T(ϕ) on Σ is minimal. Therefore, for every chart π_{C,K} there is a covering {C_i × K_i} of C × K such that F'_{C_i,K_i} is in N. By Lemma 3.6, we conclude that for every chart π_{C,K} the group F'_{C,K} is in N, and Claim 1 follows.

Claim 2: The image of G in T(G, ϕ) is in N. By the previous claim, f_{X,J} ∈ N. Therefore, the first derived subgroup of the rigid stabilizer of the interior of U_{X×J} is in N by Lemma 3.18. Finally, we note that τ_{Σ,J}(G) is in the rigid stabilizer of the interior of U_{X,J}. Thus τ_{Σ,J}(G)' is in N. As G is assumed to be perfect, this yields the claim.

Now, to conclude the proof of Lemma 3.19, we only need to combine the above claims with the fact that, by definition, T(G, ϕ) is generated by T(ϕ) and the image of G.

Lemma 3.20. If G is a subgroup of C_+(J), then T(G, ϕ) is left-orderable.

Proof. First of all, note that since G ≤ C_+(J), the action of T(G, ϕ) on each Φ-orbit of Σ is orientation preserving.
Let R = {s_1, s_2, . . .} ⊂ Σ be a fixed, recursively enumerated and dense subset of non-dyadic rational points in Σ. The existence of such a set follows from Lemma 3.16.
For f ∈ T(G, ϕ), write q_k := f(s_k). If f ≠ 1, let k be the minimal index such that q_k ≠ s_k; we declare f > 1 if q_k > s_k with respect to the orientation of the corresponding Φ-orbit. Therefore, by Lemma 3.11, for all f ≠ 1 either f > 1 or f^{-1} > 1, and for f_1, f_2 > 1 we have f_1 f_2 > 1. By Lemma 3.17, the defined order is a left-order on T(G, ϕ).
Recall that the set R is recursive. Therefore, to check whether f > 1, we can consecutively compute the values f(s_1), f(s_2), . . . until we find the minimal k with f(s_k) ≠ s_k, and then compare f(s_k) with s_k.
4. CHART REPRESENTATIONS AND THE WORD PROBLEM IN T(G, ϕ)
Recall that J is a fixed interval that is strictly contained in [0, 1). Let us fix a subgroup G in C(J), and assume that G is finitely generated and consists of computable functions.

4.1. Chart representations. A chart representation of an element h ∈ T(G, ϕ) is a finite collection of triples (C_i × I_i, C_i × J_i, h_i), where h_i is a piecewise homeomorphism with countably many breakpoints on I_i and h_i(I_i) = J_i, such that {U_{C_i×I_i}} and {U_{C_i×J_i}} cover Σ, and such that the restriction of h to U_{C_i,I_i} is given by π_{C_i,I_i}(x, t) → π_{C_i,J_i}(x, h_i(t)).

Each of the triples (C_i × I_i, C_i × J_i, h_i) is called a chart of h, and h_i is called the local representation of h in this chart.
Let (C_i × I_i, C_i × J_i, h_i), 1 ≤ i ≤ n, be a chart representation of h such that for every h_i, 1 ≤ i ≤ n, one of the following takes place: (I) h_i is a dyadic map, or (II) h_i is a G-dyadic map (see Definition 4.6). Such a chart representation is called canonical. Charts can be refined as follows: (1) If I = I_0 ∪ I_1, then (C × I, C × J, f) can be replaced by (C × I_0, C × f(I_0), f|_{I_0}) and (C × I_1, C × f(I_1), f|_{I_1}); an analogous replacement is available for partitions of C into clopen pieces. We refer to these operations as chart refinements.

Proof. We will prove only that shift operations on charts of type (II) preserve the canonicity of chart representations, as the rest of the statements of the lemma are straightforward.
Suppose that the initial chart of type (II), on which a shift operation of order m is applied, is (C_i × I_i, C_i × J_i, h_i). Suppose that Λ = f g_1 f_1 . . . g_n f_n is decomposed as in Definition 4.6. Then Λ̃ = f̃ g_1 f_1 . . . g_n f̃_n, where f̃ and f̃_n denote the correspondingly shifted dyadic maps. The map Λ̃ is also a G-dyadic map, so the shifted chart is again of type (II).

Let two chart representations be given such that the unions of the respective intervals I_i and J_i are [0, 1]. Then we say that the chart representation obtained by composing their local representations on a common refinement is their composition.
Lemma 4.20. There is an algorithm that, given a chart representation of an element, determines a canonical chart representation of that element.

Proof. We describe the algorithm.
Lemma 4.23. Given canonical chart representations of two elements, there is an algorithm to determine a canonical chart representation of their composition.

Proof. The proof is analogous to the proof of Lemma 4.20.
We conclude that h_i(t) = t − m and ϕ^m(x) = x. But ϕ is a minimal subshift, that is, every orbit of ϕ is dense. In particular, m = 0. Therefore h_i = id. This yields one direction of the assertion; the converse is trivial.

The map Λf is a G-dyadic map, so that, by assumption, we can algorithmically check whether h = id.
5. EMBEDDINGS INTO PERFECT GROUPS
Our next goal is to prove the following.
Theorem 5.1. Every countable group G embeds into a finitely generated perfect group H. In addition, (1) if G is computable, then H has decidable word problem; (2) if G is left-ordered, then H is left-ordered; (3) if G is computably left-ordered, then the left order on H is computable; (4) the embedding is a Frattini embedding.
We first prove Theorem 5.1 for finitely generated groups. In Section 5.4, we reduce the general case to the finitely generated case.

5.1. Splinter groups. Let us assume that G is a finitely generated group. We now construct a finitely generated perfect group in which G embeds. Our construction resembles the splinter group construction of [Tho80, §2]. We comment on the construction of [Tho80] in Section 5.5.
Let us fix an action of T(ϕ) on the real line as follows: fix z_0 := [x_0, 0] ∈ Σ. As the action of T(ϕ) on Σ preserves the Φ-orbits, T(ϕ) acts on the Φ-orbit of z_0; this action is orientation-preserving and its orbits are dense. Finally, recall that the Φ-orbit of z_0 is homeomorphic to R, and fix such a homeomorphism. This induces an action of T(ϕ) on R, which we fix from now on.
Let C_0(R, G) denote the group of functions from R to G of bounded support. The action of T(ϕ) on R induces an action σ of T(ϕ) on C_0(R, G) such that for every h ∈ C_0(R, G) and f ∈ T(ϕ), (σ(f)h)(x) = h(f^{-1}(x)) for all x ∈ R. The permutational wreath product G ≀_R T(ϕ) is defined as the semi-direct product C_0(R, G) ⋊_σ T(ϕ). For every g ∈ G, we define the function ḡ ∈ C_0(R, G) that takes the value g on [1/2, 1) and the identity value elsewhere.

Definition 5.2 (Splinter groups). The splinter group is the subgroup of the permutational wreath product G ≀_R T(ϕ) generated by the elements ḡ, g ∈ G, and by T(ϕ). We denote it by Sp(G, ϕ).
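A minimal sketch (our addition) of multiplication in this semi-direct product, under the assumption (σ(f)h)(x) = h(f^{-1}(x)) and reading fe as the composition f ∘ e (the exact formulas are elided in the extracted text). Elements of C_0(R, G) are modelled as dicts with finitely many non-identity values; `mul` and `e_G` are hypothetical stand-ins for multiplication and the identity in G.

```python
def wreath_mul(t, f, s, e, mul, e_G):
    """(t, f) * (s, e) = (t . sigma(f)s, f e).

    t, s: dicts mapping points of R to elements of G (identity
    elsewhere); f, e: callables R -> R.
    """
    # sigma(f)s as a dict: (sigma(f)s)(f(x)) = s(x)
    shifted = {f(x): g for x, g in s.items()}
    r = dict(t)
    for x, g in shifted.items():
        r[x] = mul(r.get(x, e_G), g)   # pointwise product in G
    return r, (lambda x: f(e(x)))      # composition f o e in T(phi)
```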
Proof of Lemma 5.4. Since T(ϕ) is simple, it is contained in Sp(G, ϕ)'. By Lemma 5.5, the image of G is contained in Sp(G, ϕ)' as well; hence Sp(G, ϕ) is perfect.
Lemma 5.6. If G is finitely generated, then the embedding of G into Sp(G, ϕ) given by g → ḡ is isometric.

Proof. Let X and Y be finite generating sets of G and T(ϕ), respectively. We prove that the embedding of G = ⟨X⟩ into Sp(G, ϕ) = ⟨X̄ ∪ Y⟩ by g → ḡ is an isometric embedding, where X̄ is the image of X in Sp(G, ϕ).
Let g ∈ G. Also, let f_i ∈ T(ϕ) and g_i ∈ G, 1 ≤ i ≤ n, be such that ḡ = ḡ_1 f_1 · · · ḡ_n f_n with n = |ḡ|_{X̄∪Y}, where | · | is the length of the group element with respect to the corresponding generating set. Rewriting, we get an expression for ḡ in terms of the elements σ(h_i)ḡ_i, where h_i = f_1 . . . f_i, 1 ≤ i ≤ n. Therefore, it must be that h_n = 1, and the value of ḡ at 1/2 is the product of the g_i over i ∈ I, where I ⊆ {1, . . . , n} is the set of indices i such that h_i(1/2) ∈ [1/2, 1). Thus we get f_1 = . . . = f_n = 1 and I = {1, . . . , n}, which implies that |g|_X = |ḡ|_{X̄∪Y}. Since g is an arbitrary element of G, this finishes the proof.
Lemma 5.7. The embedding of G into Sp(G, ϕ) by g →ḡ is a Frattini embedding.
Proof. Let g, h ∈ G, and suppose thatḡ andh are conjugate in Sp(G, ϕ). We want to show that g is conjugate to h in G.
5.2. The word problem for Sp(G, ϕ). We recall that T(ϕ) is computably left-ordered, acts order-preservingly on R, and that this action is computable.
We adapt a notion of splinter table introduced in [Tho80, p. 413].
Definition 5.8 (Splinter table). A splinter table corresponding to the element (t, f) ∈ Sp(G, ϕ) is a finite tuple of the form (J_1, . . . , J_n; g_1, . . . , g_n; f), where J_1, . . . , J_n is a finite collection of pairwise disjoint bounded intervals in R whose union contains the support of t : R → G, such that t is constantly equal to g_i ∈ G on J_i.
Let (I_1, . . . , I_m; h_1, . . . , h_m; e) be a splinter table corresponding to a second element (s, e) ∈ Sp(G, ϕ). Let J := ∪_{1≤i≤n} J_i and I := ∪_{1≤j≤m} I_j.
Let (r, q) := (t, f)(s, e). Then q = fe, and r = t · σ(f)s is a step function such that, for all 1 ≤ i ≤ n and all 1 ≤ j ≤ m, r(J_i ∩ f(I_j)) = g_i h_j, r(J_i \ f(I)) = g_i, r(f(I_j) \ J) = h_j, and r takes the identity value elsewhere.
By the properties of T(ϕ), the inverse of f as well as J i ∩ f (I j ), J i \ f (I) and f (I j ) \ J can be computably determined.
Corollary 5.11. Every element of Sp(G, ϕ) can be represented by a splinter table.
Note that (J 1 , . . . , J n ; g 1 , . . . , g n ; f ) is a splinter table corresponding to the trivial element of Sp(G, ϕ) if and only if g 1 = . . . = g n = 1 and f = 1. Therefore, combining this observation and Lemma 5.10 with the fact that the word problem of T (ϕ) is decidable (Corollary 4.26), we immediately get the following.
Lemma 5.12. If the word problem for G is decidable, then so is the word problem for Sp(G, ϕ).
We conclude:
Lemma 5.13. If G is left-ordered, then so is Sp(G, ϕ). The order on Sp(G, ϕ) continues the order on G.
Lemma 5.14. If G is computably left-ordered, then so is Sp(G, ϕ). The order on Sp(G, ϕ) continues the order on G.
Proof. We fix a computable left-order on T(ϕ), see Corollary 4.26. Let (t, f) ∈ Sp(G, ϕ). First run the algorithm for the word problem, see Lemma 5.12. If (t, f) represents the identity, we stop. Otherwise, check whether f is positive, negative or the identity. In the first two cases, we are done. Otherwise, we can computably determine the leftmost (maximal) interval J of the splinter representation of (t, f) such that t(J) ≠ 1. Then we use that the left-order on G is computable to determine whether t(J) is positive or negative.

5.4. Embeddings into finitely generated groups. To conclude the proof of Theorem 5.1, we need the following result of [Dar15], see also [Dar19, Theorem 3] for more details on assertions (1)–(3).
Theorem 5.15. Every countable group G embeds into a 2-generated group H. In addition, (1) if G is computable, then H has decidable word problem; (2) if G is left-ordered, then H is left-ordered; (3) if G is computably left-ordered, then the left order on H is computable; (4) the embedding of G into H is a Frattini embedding.
Moreover, the left-order on H continues the left-order on G.
Here we briefly explain why the embedding from [Dar15] is a Frattini embedding.
Proof of assertion (4) of Theorem 5.15. As is shown in Section 2 of [Dar15], for G = {g_1, g_2, . . .} one computes that n = 0, hence ḡ(1) is conjugate to h̄(1) in G_z. Repeating this argument one more time with respect to the pair ḡ(1), h̄(1) ∈ G_z, and using the fact that (ḡ(1))(1) = g and (h̄(1))(1) = h, we get that g is conjugate to h in G. Since g, h ∈ G are arbitrary, the embedding from [Dar15] that satisfies Theorem 5.15 is Frattini.
Proof of Theorem 5.1. By Theorem 5.15, we assume without loss of generality that G is 2-generated.
Let H be the splinter group Sp(G, ϕ).

5.5. Comments on the construction of [Tho80]. Let X be a Cantor set, whose elements are represented as infinite sequences in the letters 0 and 1. We note that the so-called Thompson's group V is exactly the group Ft(X) defined in [Tho80, p. 405]. In fact, V is an infinite finitely generated simple group that acts on X [Tho80, Proposition 1.5, Corollary 1.9].

We note that the splinter group of [Tho80] is the subgroup of G ≀_X V generated by V and the functions ḡ from X to G that take the value g on all sequences starting with 01, and the identity elsewhere. Unfortunately, the group V and, hence, the splinter group of [Tho80] are not left-orderable.
6. EMBEDDINGS OF LEFT-ORDERED GROUPS
Let J be a dyadic interval in [0, 1]. Since every left-ordered group embeds as a subgroup into Homeo + (J), we have the following.
Proposition 6.1. Every countable left-ordered group G embeds into a finitely generated left-ordered group H. In addition, the order on H continues the order on G.
Proof. Let G be a countable left-orderable group. Then, by Theorem 5.1, G embeds into a finitely generated perfect left-orderable group G_1. In turn, since G_1 is left-orderable, it embeds into Homeo_+(J).
We now construct an embedding as in the previous proposition, that, in addition, is Frattini and isometric (provided that G is finitely generated), as required by Remark 1.1, and that has the computability properties required by Theorem 2. To achieve this, we modify the construction of Proposition 2.12 of embeddings of left-ordered groups into Homeo + (J).
Definition 6.2. For any r = 2^k p/q ∈ Q \ {0}, where p and q are odd integers, we call {r}_d := k the dyadic part of r.
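The dyadic part is easily computed; here is a short sketch (our addition) using Python's Fraction, which stores rationals in lowest terms so that at most one of numerator and denominator is even.

```python
from fractions import Fraction

def dyadic_part(r: Fraction) -> int:
    """Return k where r = 2**k * p/q with p, q odd (Definition 6.2)."""
    assert r != 0
    k, p, q = 0, abs(r.numerator), r.denominator
    while p % 2 == 0:
        p, k = p // 2, k + 1
    while q % 2 == 0:
        q, k = q // 2, k - 1
    return k

assert dyadic_part(Fraction(12, 5)) == 2   # 12/5 = 2^2 * 3/5
assert dyadic_part(Fraction(3, 8)) == -3   # 3/8  = 2^-3 * 3
```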
Definition 6.4. Let I and J be fixed intervals and let g : Q_I → Q_J be a bijection. Then we say that g strongly permutes the dyadic parts if the following two conditions take place:
(1) for each m ∈ Z, there exists at most one x ∈ Q_I such that {x}_d = m and {g(x)}_d ≤ 0;
(2) if x ≠ y ∈ Q_I satisfy {x}_d = {y}_d, then {g(x)}_d ≠ {g(y)}_d.
If g is a bijection from I to J, then we say that g strongly permutes the dyadic parts if it maps rational points to rational points and its restriction g|_{Q_I} : Q_I → Q_J satisfies Definition 6.4.

Remark 6.5. If g : Q_I → Q_J strongly permutes the dyadic parts, then, for each m ∈ Z and each N ∈ Z, the set of x ∈ Q_I with {x}_d = m and {g(x)}_d ≤ N is finite.

Let us consider, for 0 < i ≤ n:
• bijective dyadic maps f_i : I_i → J_i,
• bijective maps g_i : J_i → I_{i−1}, whose restrictions to Q_{J_i} strongly permute the dyadic parts.
Lemma 6.6. If Λ = g_1 f_1 g_2 f_2 . . . g_n f_n, then, for large enough N ∈ N, the set of dyadic parts {{Λ(x)}_d : x ∈ Q_{I_n}, {x}_d = N} is unbounded from above. In particular, Λ ≠ id.
Proof. We will prove the lemma by induction on n.
The statement now follows as g 1 strongly permutes the dyadic parts (see Remark 6.5).
6.2. The modified dynamical realization. Let J be a fixed closed interval in R with non-empty interior.
We prove: Proposition 6.7. Let G be a countable group.
If G is left-orderable, then there is an embedding Ψ : G → Homeo_+(J) such that, for all g ∈ G \ {1}, the map Ψ(g) : J → J strongly permutes the dyadic parts and does not fix any rational interior point of J.
If G is computably left-orderable, then, in addition, all the maps Ψ(g) can be taken to be computable.
As in the proof of Proposition 2.12, we fix a recursive enumeration Q J = {q 0 , q 1 , . . .} such that the natural order on Q J is computable with respect to this enumeration.
We first strengthen Lemma 2.10, which states that there is an order preserving bijection Φ : G → Q_J.
Step 2n + 1. Let G_{2n} = {g_{i_1}, . . . , g_{i_{2n}}} and Q_{2n} = {r_{i_1}, . . . , r_{i_{2n}}} be already defined. Let us define g_{i_{2n+1}} as the element of the smallest index that is not in G_{2n}. Suppose that g_{i_s} < g_{i_{2n+1}} < g_{i_t} and that no element from G_{2n} is in between g_{i_s} and g_{i_t}. Then define r_{i_{2n+1}} ∈ Q_J to be of the smallest index subject to the conditions (O1) and (O2).

Step 2n + 2. Let G_{2n+1} := {g_{i_1}, . . . , g_{i_{2n+1}}} and Q_{2n+1} = {r_{i_1}, . . . , r_{i_{2n+1}}} be already defined. Let us define r_{i_{2n+2}} as the rational of the smallest index that is not in Q_{2n+1}. Suppose that r_{i_s} < r_{i_{2n+2}} < r_{i_t} and that no element from Q_{2n+1} is in between r_{i_s} and r_{i_t}. Then let us define g_{i_{2n+2}} ∈ G as the element of the smallest index such that (E1) g_{i_{2n+2}} ∉ G_{2n+1} and g_{i_s} < g_{i_{2n+2}} < g_{i_t}, and (E2) holds.

The bijection Θ defined this way is order preserving by (O1) and (E1). Condition (O2) yields assertion (1), and (E2) yields assertion (2). Finally, as the procedure is algorithmic, we also obtain assertion (3).
To this end, we enumerate Q_J so that Θ(g_i) = r_i, and let r_i ≠ r_j ∈ Q_J. We define r_k = Ψ(h)(r_i) = Θ(hg_i) and r_l = Ψ(h)(r_j) = Θ(hg_j), so that g_k = hg_i and g_l = hg_j.
We first show property (1) of Definition 6.4. By contradiction, assume that there exist i ≠ j such that the indices k and l are both even. Then, since g_k = hg_i = (hg_j)(g_j^{-1})(g_i) and g_l = hg_j = (hg_i)(g_i^{-1})(g_j), by (2) of Lemma 6.8, the largest index is among i and j. Let j > i. Then, since {r_i}_d = {r_j}_d, we get that the index j is even. Since g_j = g_i(hg_i)^{-1}(hg_j), again by (2) of Lemma 6.8, we get a contradiction, which yields the claim.
Next, we prove property (2) of Definition 6.4. By contradiction, assume that there exist r_i ≠ r_j ∈ Q_J such that {r_i}_d = {r_j}_d, and suppose that {r_l}_d = {r_k}_d. Without loss of generality, l > i, j, k (if, say, j > i, k, l, then instead of h we could consider h^{-1}). Then, since {r_k}_d = {r_l}_d, by (1) of Lemma 6.8, l has to be even. Therefore, since Θ(hg_j) = r_l and l is even, by (2) of Lemma 6.8, hg_j ∈ {g_m g_n^{-1} g_p | 1 ≤ m, n, p < j}. On the other hand, since l > i, j, k, we get hg_j = (hg_i)(g_i^{-1})(g_j), a contradiction.
This completes the proof of Proposition 6.7.
6.3. The embedding theorems. Let G be countable left-orderable group. Then, by Theorem 5.1, G embeds into a finitely generated perfect left-orderable group G 1 . Moreover, this embedding is a Frattini embedding.
For the definition of G 2 -dyadic maps, see Definition 4.6.
Lemma 6.9. Let Λ be a G 2 -dyadic map. If G 2 has decidable word problem, then there is an algorithm to decide whether or not Λ = id.
Proof. Let n > 0 and, for all 0 < i ≤ n, let J_i ⊂ J and let g_i : J_i → I_{i−1} be the restriction of an element of G_2 such that g_i ≠ id. Moreover, let f_i : I_i → J_i be dyadic maps, say f_i(x) = λ_i x + c_i. Since, for all 0 < i < n, J_i ⊂ [1/4, 1/2] and, by definition, λ_i is a power of 2, we get that c_i = 0 for 0 < i < n. Then, by Lemma 4.25, we can algorithmically check whether Λ := g_1 f_1 g_2 f_2 . . . g_n f_n = id.
If n = 1 and f 1 = id, then Λ = g 1 ∈ G. Then we decide using the algorithm for the word problem in G.
Combining Lemmas 5.6 and 6.9, we also conclude the following.
We represent t by a canonical chart representation (C_i × I_i, C_i × J_i, t_i). Let k be an index such that 1/2 is in the closure of J_k and such that (after applying a chart refinement if necessary) J_k ⊆ J. As g fixes 1/2, there is J'_k ⊆ J_k such that g(J'_k) ⊆ J_k and such that 1/2 is in the closure of J'_k. We let I'_k = t_k^{-1}(J'_k) and I''_k = t_k^{-1}gt_k(I'_k). Then the triple (C × I'_k, C × I''_k, t_k^{-1}gt_k) is in a chart representation of t^{-1}gt. Up to applying the algorithm of Lemma 4.20 to this chart representation, we may assume that I'_k is in [0, 1]. Moreover, up to applying a chart refinement if necessary, we may assume that either I'_k ∩ J is empty or consists of one point (1/4 or 1/2), or I'_k ⊆ J.

If I'_k ∩ J is empty or consists of one point, then (C × I'_k, C × I''_k, t_k^{-1}gt_k) is in a chart representation of ht^{-1}gt. Thus t_k^{-1}gt_k = id on I'_k by Lemma 4.24. This implies that g acts as the identity on J'_k. Since non-trivial elements of G_2 do not fix any rational interior points of J, the element g = 1.

Otherwise, the triple (C × I'_k, C × h(I''_k), ht_k^{-1}gt_k) is in a chart representation of ht^{-1}gt. Thus ht_k^{-1}gt_k = id on I'_k by Lemma 4.24. Then h(I''_k) = I'_k. This is only possible if I''_k ⊆ J. But then Lemma 6.9 implies that t_k acts as an element of G_2. Since non-trivial elements of G_2 do not fix any rational interior points of J, this implies that g and h are conjugate in G_1.
We can now prove Theorems 1, 2 and 4.

Proof of Theorems 1 and 2. The group H is finitely generated, left-orderable and simple by Lemmas 3.14, 3.20 and 3.19. By construction, G embeds into H, and the order on H extends the order on G. Moreover, by Lemma 6.11, the embedding of G is a Frattini embedding.
If G is computably left-ordered, we may in addition assume that G 1 is computably left-ordered, see Theorem 5.1. By Proposition 6.7, for all g ∈ G 1 , Ψ(g) is computable. Therefore, by Lemma 2.9, the positive cone of H is recursively enumerable. Moreover, by Lemma 6.9 and Lemma 4.25, the group H has decidable word problem. By Lemma 2.9, the left-order on H is computable.
Proof of Theorem 4. Let G be a finitely generated left-orderable group with recursively enumerable positive cone. If G has decidable word problem, then the left-order on G is computable by Lemma 2.9.
Then Theorem 2 implies that G embeds into a finitely generated computably left-ordered simple group H.
In particular, the word problem in H is decidable. Thus H can be defined by a recursively enumerable set of relations. By [BG09,Theorem D], H embeds into a left-orderable finitely presented group.
On the other hand, if H is a finitely generated simple subgroup of a finitely presented group, then it has decidable word problem (see [LS77,Theorem 3.6]). Therefore, G has decidable word problem as well.
7. EMBEDDINGS OF COMPUTABLE GROUPS
In this section we prove Theorem 3, the isometric version of Thompson's theorem [Tho80]. In the Appendix we present yet another proof of Theorem 3 that, using the setting of our paper, mimics the original idea of [Tho80].
Theorem 7.1. Every computable group G Frattini embeds into a finitely generated simple group H with decidable word problem. If G is finitely generated, then the embedding is isometric.
Remark 7.2. The original statement of [Tho80] is for finitely generated groups, but finite generation can be replaced by computability of G due to Theorem 5.15.

7.1. The embedding construction. Let G be a computable group. By Theorem 5.1, G embeds into a finitely generated perfect group G_1 with decidable word problem (if G is finitely generated, this claim also follows from [Tho80, §2]). Let G_1 = {g^{(1)}, g^{(2)}, . . .} be enumerated so that m : N × N → N, defined as m((i, j)) = k if g^{(i)}g^{(j)} = g^{(k)}, is computable. By Remark 2.3 the existence of such m is equivalent to decidability of the word problem.
Let us fix two recursively enumerated recursive sets of dyadic numbers {x_1, x_2, . . .} and {y_1, y_2, . . .} such that the following takes place:
(1) 0 < x_1 < y_1 < x_2 < y_2 < . . . < 1/3,
(2) x_i and y_i are of the form m/2^n and (m + 1)/2^n, respectively.
We set D_k := [x_k, y_k]. For every l ∈ N, let ξ_l : J → J be such that, for every k ∈ N, it is an affine map from D_k onto D_{m(l,k)} and is the identity outside of ∪_{i=1}^{∞} D_i. In particular, the map ξ_l : D_k → D_{m(l,k)} is dyadic. Let us define λ : G_1 → C(J) by λ(g^{(l)}) = ξ_l, for all l ∈ N.
Remark 7.4. The map λ is an embedding of G 1 into computable maps in C(J).
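A combinatorial sketch of why λ is a homomorphism (our illustration): since ξ_l merely sends D_k affinely onto D_{m(l,k)}, its effect on interval indices is k → m(l, k), where `m` below is a hypothetical stand-in for the computable multiplication function of G_1.

```python
def xi_index_action(l, m):
    """Index action of xi_l on the intervals D_k."""
    return lambda k: m(l, k)

# Associativity in G_1 gives m(l1, m(l2, k)) == m(m(l1, l2), k), so
# the index action of xi_{l1} composed with xi_{l2} equals that of
# xi_{l1 * l2}; hence lambda is multiplicative.
```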
Let Λ = g_1 f_1 g_2 f_2 . . . g_n f_n : I_n → I_0, where f_i : I_i → J_i and g_i : J_i → I_{i−1}, be a G_2-dyadic map as in Definition 4.6. Recall that in particular we have J_i ⊆ J = (0, 1/3] for 1 ≤ i ≤ n. We say that Λ is a special G_2-dyadic map if for each 1 ≤ i ≤ n we have 1/3 ∈ J̄_i (the closure of J_i). Correspondingly, we say that a chart of type (II), see Definition 4.8, is special if its local representation is a special G_2-dyadic map. The following lemma is a direct consequence of Remark 7.3 and basic properties of the maps f_i, g_i.

Lemma 7.5. There exists a finite collection of intervals K_1, K_2, . . . , K_s ⊆ (0, 1/3] such that Λ|_{K_i∩I_0} is a special G_2-dyadic map. Moreover, such intervals K_1, K_2, . . . , K_s can be found algorithmically.
Lemma 7.6. If h : I → J is a surjective dyadic map such that 1/3 is in the closures of I and J, and I, J ⊆ (0, 1/3], then h is the identity map.
Proof. The lemma follows from the fact that 1/3 is non-dyadic.

By Lemma 7.6, we have:

Corollary 7.7. Special local representations are of the form f_0 g_1 f_1. In particular, if a special G_2-dyadic map or a special chart of type (II) fixes 1/3, then it acts as an element of G_2.
Since the word problem for G 2 is decidable, this implies that there exists an algorithm that decides whether or not a special G 2 -dyadic map represents the identity map. This, combined with Lemmas 7.5 and 4.25, leads to the following corollary.
Lemma 7.9. The embedding of G_1 into T(G_2, ϕ) is a Frattini embedding.

Proof. We adapt the proof of Lemma 6.11.
We represent t by a canonical chart representation (C_i × I_i, C_i × J_i, t_i). We recall that t_i(I_i) = J_i. Let k be an index such that 1/3 is in the closure of J_k and such that (after applying a chart refinement if necessary) J_k ⊆ J. As g fixes 1/3, there is J'_k ⊆ J_k such that g(J'_k) ⊆ J_k and such that 1/3 is in the closure of J'_k. We let I'_k = t_k^{-1}(J'_k) and I''_k = t_k^{-1}gt_k(I'_k). Then the triple (C × I'_k, C × I''_k, t_k^{-1}gt_k) is in a chart representation of t^{-1}gt. Up to applying the algorithm of Lemma 4.20 to this chart representation, we may assume that I'_k is in [0, 1]. Moreover, up to applying a chart refinement if necessary, we may assume that either I'_k ∩ J is empty or consists of the one point 1/3, or I'_k ⊆ J.

If I'_k ∩ J is empty or consists of the one point 1/3, then (C × I'_k, C × I''_k, t_k^{-1}gt_k) is in a chart representation of ht^{-1}gt. Thus t_k^{-1}gt_k = id on I'_k by Lemma 4.24. This implies that g acts as the identity on J'_k. As 1/3 is in the closure of J'_k, this implies that g = 1.

Otherwise, the triple (C × I'_k, C × h(I''_k), ht_k^{-1}gt_k) is in a chart representation of ht^{-1}gt. Thus ht_k^{-1}gt_k = id on I'_k by Lemma 4.24. But then t_k h t_k^{-1} g : J'_k → J'_k has to be the identity as well. As g fixes 1/3, t_k h t_k^{-1} has to fix 1/3. If t_k were dyadic (i.e. of type (I)), it would have to fix 1/3, so that t_k = id by Lemma 7.6. If t_k is of type (II), we may assume that t_k is special, see Lemma 7.5. Then, by Corollary 7.7, t_k acts as an element of G_2.
Thus g and h are conjugate by elements of G_2. This implies that g and h are conjugate in G_1.

Combining Lemma 7.9 with Lemma 5.7, we obtain:

Corollary 7.10. The embedding G → T(G_2, ϕ) is Frattini.
Lemma 7.11. The embedding of G_1 into T(G_2, ϕ) is isometric.

Proof. We fix a finite generating set X for G_1, and denote the generating set of T(ϕ) given by Lemma 3.7 by Y. We denote the union of the bijective images of X and Y in T(G_2, ϕ) by Z, and recall that Z generates T(G_2, ϕ). We assume that all generating sets are symmetric. We denote by |·|_A the word metric with respect to the generating set A.
Let g ∈ G_2 and let t = z_1 · · · z_m be a reduced word in the alphabet Z that represents g^{-1} ∈ T(G_2, ϕ), so that tg = 1. In addition, we assume that m = |g|_Z. We represent every generator z_i by a canonical chart representation, see Lemma 4.15. Lemma 4.23 then gives a canonical chart representation (C_i × I_i, C_i × J_i, t_i) of t. Recall that the maps t_i are compositions t_i = h_1 · · · h_{m_i}, where each map h_j is a local representation in the canonical chart representation of a generator in Z and m_i ≤ m. In addition, up to applying the algorithm of Lemma 4.20 to this chart representation of t, we may assume that ∪_i I_i = [0, 1]. Let I_k be an interval such that 1/3 is in the closure of I_k and such that (after applying a chart refinement if necessary) I_k ⊆ J.
Then (C × g^{-1}(I_k), C × J_i, t_i g) is in a canonical chart representation of tg, i.e. of the identity. By Lemma 4.24, t_i g is the identity mapping. In particular, g^{-1}(I_k) = J_i, so that J_i ⊆ J and 1/3 is in the closure of J_i. We note that t_i is not dyadic (i.e. of type (I)) by Lemma 7.6. If t_i = f g_1 f_1 · · · g_n f_n is a chart of type (II), then, by Lemma 7.5, we may assume that t_i is special. Thus t_i = g_1 · · · g_n ∈ G_2 (Lemma 7.6), where n ≤ m_i ≤ m.
Thus we may assume that t_i = x_{j_1} · · · x_{j_{m_i}} ∈ G_2. Then |g|_X ≤ m_i ≤ m = |g|_Z. We conclude that the embedding is isometric.

Combining Lemma 7.11 with Lemma 5.6, we obtain:

Corollary 7.12. If G is finitely generated, then the embedding G → T(G_2, ϕ) is isometric.
Proof of Theorem 7.1. The simplicity of T (G 2 , ϕ) follows from Lemma 3.19. From Corollary 7.8, the word problem in T (G 2 , ϕ) is decidable provided that it is decidable in G 2 . By Corollary 7.10, the embedding G → T (G 2 , ϕ) is Frattini. By Corollary 7.12, it is an isometric embedding provided that G is finitely generated. Therefore, the embedding G → T (G 2 , ϕ) satisfies Theorem 7.1.
APPENDIX A. THOMPSON'S EMBEDDING REVISITED
Here we adapt the original embedding construction of [Tho80] to the setting of our paper and note that, in addition, it is an isometric embedding.
Theorem A.1. Every computable group G Frattini embeds into a finitely generated simple group H with decidable word problem. Moreover, if G is finitely generated, the embedding is isometric.
Remark A.2. The original statement [Tho80] is for finitely generated groups, but finite generation can be replaced by computability of G due to Theorem 5.15.
A.1. The embedding construction. Let G be a computable group. By Theorem 5.1, G embeds into a finitely generated perfect group G 1 with decidable word problem (if G is finitely generated, this claim also follows from [Tho80, §2]).
Let G 1 = {g 1 , g 2 , . . .} be enumerated so that m : N × N → N, defined as m((i, j)) = k if g i g j = g k , is computable. By Remark 2.3 the existence of m is equivalent to decidability of the word problem.
Let J = [1/2, 1). For strictly positive k ∈ N, let I_k := [1 − 2^{-k}, 1 − 2^{-k} + 2^{-2k}). We observe that any two such intervals are disjoint.

We denote by I_k^l the left half of the interval, and by I_k^r the right half, so that I_k = I_k^l ∪ I_k^r. For every l ∈ N, let ξ_l : J → J be the piecewise homeomorphism, whose pieces are dyadic, and that, for every k ∈ N, maps I_k^r onto I_{m(l,k)}^r and that is the identity map elsewhere on J. Let us define λ : G_1 → C(J) by λ(g_l) = ξ_l, for all l ∈ N.
This contradicts the definition of G 2 -dyadic maps, Definition 4.6.
For n ∈ Z, we write s_n(x) := 2^{-n}x + (1 − 2^{-n}). All dyadic maps that fix 1 are of this form. Note that s_{n+m} = s_n ∘ s_m. We call |n| the degree of s_n.
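A quick numerical check of these identities (our addition), using exact rational arithmetic:

```python
from fractions import Fraction

def s(n):
    # s_n(x) = 2^{-n} x + (1 - 2^{-n}); a is 2^{-n} as an exact rational
    a = Fraction(1, 2**n) if n >= 0 else Fraction(2**(-n))
    return lambda x: a * x + (1 - a)

x = Fraction(3, 7)
assert s(2)(s(3)(x)) == s(5)(x)        # s_{n+m} = s_n o s_m
assert s(1)(s(-1)(x)) == s(0)(x) == x
assert s(4)(Fraction(1)) == 1          # every s_n fixes 1
```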
Lemma A.9. Let n ≠ 0, and let g ∈ G_2. Then, for all k > |n|, gs_n and s_n are equal on I_k^r. Moreover, s_n g acts as the identity on at most finitely many I_k^r.
Proof. Let n > 0 and k > n. Direct computations show that s_{−n}(I_k^r) ⊂ I_{k−n}^l. Since, by definition, g acts trivially on I_{k−n}^l, we get that gs_{−n} coincides with s_{−n} on I_k^r. Similarly, I_k^r ⊂ s_n(I_{k−n}^l), so that s_n(I_{k−n}^r) does not intersect I_l^r for any l > 0. Thus, by definition, g acts trivially on s_n(I_{k−n}^r), and gs_n coincides with s_n on I_{k−n}^r. In addition, as g permutes the intervals I_k^r, s_n g acts as the identity on at most finitely many I_k^r.
Let m > 0 and, for all 1 ≤ i ≤ m, let g_i ≠ 1 in G_2 and n_i ≠ 0. Let us fix Λ = g_m s_{n_m} · · · g_1 s_{n_1} to be a G_2-dyadic map as in Lemma A.5. Let S_0 = id, S_1 = s_{n_1}, and, recursively, S_i = s_{n_i} S_{i−1}.
Lemma A.10. If, for all 0 < i < m, S_i ≠ id, and k is strictly larger than the degree of each S_i, then Λ acts as g_m S_m on I_k^r. In particular, Λ ≠ id.
Proof. Let k be strictly larger than the degree of S_i, for all i < m. By Lemma A.9, as k > |n_1|, g_1 s_{n_1} equals s_{n_1} on I_k^r. Thus, restricted to these intervals, g_m s_{n_m} · · · g_2 s_{n_2+n_1} equals Λ. By induction this yields the first assertion.
We show that g_m s_{n_m+...+n_1} ≠ id on all but finitely many of the intervals I_k^r. If s_{n_m+...+n_1} ≠ id, this is by Lemma A.9. Otherwise g_m s_{n_m+...+n_1} = g_m ≠ id, which yields the claim by Remark A.4.
If m > 0, let i_0 be the smallest index such that n_{i_0} + . . . + n_1 = 0, and recursively define i_j to be the smallest index such that n_{i_j} + . . . + n_{i_{j−1}+1} = 0. Let i_M be the largest such index < m.
Lemma A.11. If n_m + . . . + n_2 + n_1 = 0, then Λ equals g_m g_{i_M} g_{i_{M−1}} · · · g_{i_1} g_{i_0} on all but a finite number of intervals I_k^r, which can be algorithmically determined. Otherwise, Λ ≠ id.
Proof. If m = 0 the claim follows by Lemma A.10. Let m > 0.
By Lemma A.9, g_m s_{n_m} g_{m−1} . . . g_{i_1} and Λ are equal on I_k^r unless k is smaller than the degree of S_i, for some i ≤ i_0. Inductively, g_m s_{n_m} g_{m−1} . . . g_{i_j} and g_m s_{n_m} g_{m−1} . . . g_{i_{j−1}+1} s_{n_{i_{j−1}+1}} are equal on I_{l_j}^r := g_{i_{j−1}} · · · g_{i_0}(I_k^r) unless l_j is smaller than the degree of S_i, for some i_{j−1} < i ≤ i_j. Finally, g_m s_{n_m+...+n_{i_M+1}} and g_m s_{n_m} g_{m−1} . . . g_{i_M+1} s_{n_{i_M+1}} are equal on I_{l_M}^r = g_{i_M} · · · g_{i_0}(I_k^r), unless l_M is smaller than the degree of S_i, for some i_M < i ≤ m.
Let g := g_{i_M} g_{i_{M−1}} · · · g_{i_1} g_{i_0}. We conclude that Λ is equal to g_m s_{n_m+...+n_{i_M+1}} g on all but a finite number of intervals I_k^r. As the degrees of the S_i are computable, we can algorithmically determine these intervals. If s_{n_m+...+n_{i_M+1}} = id, this concludes the proof. Otherwise, by Lemma A.9, Λ acts as s_{n_m+...+n_{i_M+1}} g on all but finitely many intervals I_k^r. Thus, Λ ≠ id by Lemma A.9.
Corollary A.12. There is an algorithm to decide whether Λ is the identity on the intervals I_k^r in J'.
Proof. By Lemma A.11, there is a computable number k_0 > 0 such that, for all k ≥ k_0, Λ = id on I_k^r if, and only if, g_m g_{i_M} · · · g_{i_1} g_{i_0} = 1. As the word problem in G is decidable, this can be algorithmically determined. On the other hand, for each k, there is an (obvious) algorithm to decide whether or not Λ acts as the identity on I_k^r. We apply this algorithm for each k < k_0. This completes the proof.

Lemma A.13. If x ∈ J' is such that, for all k ≥ 1 and all 0 ≤ i ≤ m, x ∉ S_i^{-1}(I_k^r), then Λ(x) = S_m(x).

Proof. Since, for all k, x ∉ S_1^{-1}(I_k^r), we have that Λ(x) = g_m s_{n_m} · · · g_2 s_{n_2} s_{n_1}(x). By induction, Λ(x) = S_m(x), which is the claim.
Proof of Lemma A.5. By Lemma A.8, all dyadic factors in a G_2-dyadic map fix 1. We first compute the degree of S_m. If the degree of S_m is not 0, then Lemma A.11 implies that Λ ≠ id.
Otherwise, Lemma A.13 implies that Λ is the identity on J' \ ∪_{k=1}^{∞} ∪_{i=0}^{m} S_i^{-1}(I_k^r). Let 0 ≤ i ≤ m. We argue that there is an algorithm to decide whether or not Λ is the identity on the intervals S_i^{-1}(I_k^r) in J'. This will complete the proof. Let x ∈ S_i^{-1}(I_k^r), and let y ∈ I_k^r be the point such that x = S_i(y). We note that Λ(x) = x if, and only if, Λ(S_i y) = S_i y, if, and only if, S_i^{-1}ΛS_i(y) = y. Therefore, we need to decide whether or not the G_2-dyadic map S_i^{-1}ΛS_i is the identity on the intervals I_k^r such that S_i^{-1}(I_k^r) ⊂ J'. Let k_0 > 0 be the smallest index such that, for all k ≥ k_0, I_k^r ⊂ S_i(J'). As S_i(J') can be algorithmically determined, k_0 can be computed as well. Thus, we need to decide whether or not S_i^{-1}ΛS_i is the identity on the intervals I_k^r in [(2^{k_0} − 1)/2^{k_0}, 1). By Corollary A.12 such an algorithm exists.
Lemma A.14. The embedding of G_1 into T(G_2, ϕ) is a Frattini embedding.

The proof of this lemma is analogous to the proof of Lemma 6.11.
We represent t by a canonical chart representation (C_i × I_i, C_i × J_i, t_i) such that ∪_i J_i = [0, 1], and represent t^{-1} by (C_i × J_i, C_i × I_i, t_i^{-1}). We recall that t_i(I_i) = J_i.
Let k be an index such that 1 is in the closure of J_k and such that (after applying a chart refinement if necessary) J_k ⊆ J. As g fixes 1, there is J'_k ⊆ J_k such that g(J'_k) ⊆ J_k and such that 1 is in the closure of J'_k. We let I'_k = t_k^{-1}(J'_k) and I''_k = t_k^{-1}gt_k(I'_k). Then the triple (C × I'_k, C × I''_k, t_k^{-1}gt_k) is in a canonical chart representation of t^{-1}gt. Up to applying the algorithm of Lemma 4.20 to this chart representation, we may assume that I'_k is in [0, 1]. Moreover, up to applying a chart refinement if necessary, we may assume that either I'_k ∩ J is empty or consists of the one point 1, or I'_k ⊆ J.
If I'_k ∩ J is empty or consists of one point, then (C × I'_k, C × I''_k, t_k^{-1}gt_k) is in a chart representation of ht^{-1}gt. Thus t_k^{-1}gt_k = id on I'_k by Lemma 4.24. This implies that g acts as the identity on J'_k. As 1 is in the closure of J'_k, this implies that g = 1.
Otherwise, (C × I'_k, C × h(I''_k), ht_k^{-1}gt_k) is in a chart representation of ht^{-1}gt. Thus ht_k^{-1}gt_k = id on I'_k by Lemma 4.24. But then t_k h t_k^{-1} g : J'_k → J'_k has to be the identity as well. As g fixes 1, t_k h t_k^{-1} has to fix 1.
If t_k does not fix 1, then h acts (up to applying finitely many chart refinements if necessary) as a dyadic map on I''_k. But it has to fix t_k^{-1}(1). Thus h acts as the identity on I''_k. This implies that g = 1 by Remark A.3. Otherwise, by Lemma A.11, there are g_{i_M}, . . . , g_{i_0} ∈ G_2 such that, on all but finitely many of the intervals I_j^r, the map h^{-1}t_k^{-1}gt_k equals h^{-1}g_{i_0}^{-1} · · · g_{i_M}^{-1} g g_{i_M} · · · g_{i_0}. By Remark A.4, this implies that h and g are conjugate in G_1.
Lemma A.15. Moreover, the embedding of Thompson is also an isometric embedding.
Proof. We fix a finite generating set for G_1. This gives a finite generating set for T(G_2, ϕ). We denote by |h| the word length of h.
Let g ∈ G_2 and t ∈ T(G_2, ϕ) be such that tg = 1. We represent t by finitely many (canonical) charts (C_i × I_i, C_i × J_i, t_i) such that ∪_i I_i = [0, 1]. We note that |t_i| ≤ |t|.
Let I k be the interval such that 1 is in the closure of I k and such that (after applying a chart refinement if necessary) I k ⊆ J.
Then (C i × g −1 (I k ), C i × J i , t i g) is in a canonical chart representation of tg. By Lemma 4.24, t i g is the identity mapping. In particular, g −1 (I k ) = J i , so that J i ⊆ J and 1 is in the closure of J i .
If t_i is a dyadic map, then t_i = id (Lemma A.9) and thus g = 1.
If t_i ∈ G_2, then g = t_i^{-1} and |g| = |t_i| ≤ |t|. Otherwise, t_i = g_1 f_1 · · · g_n f_n is a G_2-dyadic map. Moreover, by Remark A.4, we may assume that all dyadic maps f_i fix 1. Thus, by Lemma A.11, g = g_{i_1} · · · g_{i_m}. Thus |g| ≤ m ≤ |t|.
We conclude that the embedding is isometric.
Combining Lemma A.15 with Lemma 5.6, we obtain:
"year": 2020,
"sha1": "03d470ac3c48e37148e36278ceb4b636d6e0aa79",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1112/jlms.12552",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "03d470ac3c48e37148e36278ceb4b636d6e0aa79",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Medicine",
"Mathematics"
]
} |
Dataset on community structure of macro invertebrate fauna in Ubogo river, Udu LGA, Delta State, Nigeria
Abstract
The datasets contained in this article are based on a baseline study of selected physicochemical parameters and the macro-benthic invertebrate community of the Egini and Ubogo Rivers in Delta State, conducted over a period of six months (February–July 2010) at six stations shared equally between the two rivers, using the three communities they flow through as a guide; water samples were collected monthly from these stations. The objectives include determination of the spatial variations and background concentrations of the selected physicochemical parameters, and of the species composition and abundance of the macro-benthic invertebrates. Sixteen physicochemical parameters were analyzed in the water. Air and water temperature and current velocity were determined in situ; the remaining physicochemical parameters were determined by standard methods. The dusting method was adopted in sampling the macro-benthic invertebrates.
Value of the data
Benthic macroinvertebrates are sensitive to environmental impacts from both point and non-point sources of pollution.
The datasets integrate the effects of short-term environmental variations, such as oil spills and intermittent discharges.
Sampling for the dataset is relatively easy and inexpensive. Benthic macroinvertebrates serve as the primary food source for many species of commercially and recreationally important fishes; hence, they bear on the economic and financial status of stakeholders within the studied areas [1–6].
Benthic macroinvertebrate communities can be used to identify sources of impairment.
Data
This study was carried out on the Ubogo stream, located within latitude 5°45′–6°20′N and longitude 5°24′–6°20′E (Fig. 1). The stream takes its source from Ohworode and flows through the study communities.
Sampling locations
In relation to the flow direction, two sampling stations were positioned along the Egini River and four along the Ubogo River, two each around the Ubogo and Ogbe-Udu communities. The sampling stations were visited during the sampling period between February and July 2010. Sampling stations were chosen on the basis of their proximity to facilities, structures or human activities that could potentially affect water quality and biodiversity; Refs. [7–9] present related views. Correlation analysis was used to determine the relationship between the physico-chemical parameters and the abundance of benthic macro-invertebrates; non-parametric Spearman correlation was used.
Basic statistical measures of central tendency and dispersion were used to characterize stations in terms of physico-chemical conditions. Inter-station comparisons were carried out to test for significant differences in the physico-chemical conditions using parametric analysis of variance (ANOVA). Where significant differences (p < 0.05) were obtained, Duncan multiple range (DMR) tests were performed to determine the location of the differences, using SPSS 16.0 for Windows.
Diversity indices
Diversity indices combine the information on multiple species into a single number. This approach is a common way to summarize data in an environmental study. Data collected at the sampled stations are converted to diversity indices, and then the indices are analyzed to investigate patterns associated with environmental stress.
Community structure
Across the six stations studied in the two rivers, only the bank root biotope was sampled. The macro-invertebrate samples collected from this biotope were analyzed to assess the taxa composition, distribution, abundance, diversity and dominance (Tables 1, 3–6).
Composition, distribution, abundance and dominance of macrobenthic invertebrates
The overall taxa composition, abundance and distribution are presented in Table 2. A total of 41 taxa comprising 621 individuals were obtained. These taxa fell within Rhynchobdellida, Decapoda, Araneae and other orders.
Remarks on physicochemical conditions
With the exception of turbidity, the physicochemical parameters analyzed in this study conformed to the Federal Ministry of Environment permissible limits for surface water. Meanwhile, the diversity of macro-benthic fauna encountered did not reflect the prevailing physicochemical conditions. This discrepancy is attributable to the single biotope sampled, the oligotrophic nature of the rivers, and the prevailing turbid conditions, which in turn can limit primary production in these ecosystems. It is imperative to characterize the microbial status and heavy metal concentrations of the rivers in order to reach a more concrete conclusion on the health state of the water bodies. Constant monitoring is also advised so that any deviation in the quality of the rivers can be detected in time and appropriate remedial actions taken.

Table 2: Relative abundance of the individual macro-invertebrate (MI) fauna across the stations at the study areas. Degree of relative similarity evaluated from 0 = complete dissimilarity to 1 = complete similarity; critical level of significance (C_j) = 0.50; asterisk (*) indicates significant dissimilarity. | 2018-09-23T00:24:57.924Z | 2018-05-23T00:00:00.000 | {
"year": 2018,
"sha1": "392c7b123a2e8f93234ab00961ac0d203517c0c5",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.dib.2018.05.084",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d1f40c35a6625fdbc156a1e27b6c1b359227da6e",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science",
"Medicine"
]
} |
261465701 | pes2o/s2orc | v3-fos-license | Bored with boredom? Trait boredom predicts internet addiction through the mediating role of attentional bias toward social networks
Internet addiction is an emerging issue, impacting people’s psychosocial functioning and well-being. However, the prevalence and the mechanisms underlying internet misuse are largely unknown. As with other behavioral addiction disorders, the increase and persistence of internet addiction may be favored by negative affect such as boredom. In this study, we examined the role of boredom susceptibility, as a personality trait, in predicting the risk of internet addiction. Furthermore, we analyzed the attentional mechanisms that may exacerbate dysfunctional internet behaviors. Specifically, we assessed the mediating role of attentional bias toward social media cues on the relation between boredom susceptibility and internet addiction. Sixty-nine young adults were administered a dot-probe task assessing internet-related attentional bias (AB) and questionnaires measuring internet addiction (IAT) and boredom susceptibility (BS-BSSS). Correlation and t-test analyses confirmed that the tendency to experience boredom and selective attention toward social network information were related to internet addiction. Furthermore, the mediation model indicated that AB fully explains the link between BS-BSSS and IAT. The study highlighted the crucial role of selective attentional processing behind internet addiction. The current results are useful for both researchers and clinicians as they suggest that intervention programs for internet addiction should include strategies to cope with dysfunctional cognitive processes.
Introduction

Along with the beneficial improvements that the internet has brought about in society, several issues related to problematic internet usage and addiction have also emerged. According to a large body of research, this dysfunctional condition can have a significant impact on the quality of "real" life by negatively affecting time spent in social interactions (Enez Darcin et al., 2016), restricting one's capacity to fulfill commitments at the professional and academic levels (Young, 1996; Annunzi et al., 2022), or even interfering with time spent engaging in personal interests (Hellström et al., 2012; Rehbein and Baier, 2013). Within the broader category of internet addiction, the most heavily studied phenomena are certainly betting, gaming, and social network addictions (Petry, 2015; Calluso et al., 2020; Cannito et al., 2022a). However, partly due to the more recent spread of social networks, much more evidence is available on gaming addiction, although it has recently been reported that social network addiction occurs more frequently in the population and is equally, if not more, associated with psychosocial difficulties (Burén et al., 2021). A massive increase in social network addiction was reported alongside the spread of mobile hardware (smartphones and tablets), as it made it possible to be always connected (Schou Andreassen and Pallesen, 2014). From the emergence of this phenomenon, conflicting opinions have been reported in the literature on whether this behavioral pattern is to be considered a pathological addiction in itself or, instead, an extreme of normal behavior that can take the form of problematic usage (Varona et al., 2022).
Intriguingly, while the DSM-5 includes a diagnostic category for "internet gaming disorder," which focuses on the dysfunctional use of online gaming, no official diagnostic category for internet addiction in general or for social networking addiction is included in either the DSM-5 or the ICD-11. However, mounting evidence in the literature suggests that, as shown for other behavioral addictions such as gaming, excessive internet use in general and excessive use of social networks show numerous similarities with substance-based addictions. For example, typical psychological mechanisms associated with alcohol and drug addictions, such as withdrawal symptoms and tolerance, have been reported for internet and social media addiction as well (Bányai et al., 2017). Not only psychological but also cognitive features of substance use disorders have been reported in relation to internet and social media addiction. For example, alterations in executive functioning and inhibitory cognitive control have been reported in individuals suffering from social network addiction (Wegmann et al., 2020). Similarly, attention, the most investigated cognitive domain subject to alterations in substance use disorders, has been shown to play a crucial role in internet and social media addiction as well, with clear evidence of attentional deficits and attentional bias (Jeromin et al., 2016; Wang et al., 2017; Nikolaidou et al., 2019).
Literature on this topic seems to suggest that engaging in dysfunctional addictive behavior may serve as a coping strategy to manage emotional dysregulation resulting from stressful events that induce unpleasant emotions (Chou et al., 2015). While most of the available evidence focuses on contingent emotional states, less is known about the role of an individual's tendency to be susceptible to certain emotions as a stable trait. This research line is mainly grounded in studies investigating the role of susceptibility to positive and negative affect in relation to mood and personality disorders (Larsen and Ketelaar, 1991) and studies investigating the relationship between general emotional susceptibility and interoceptive processes (Calì et al., 2015).
Furthermore, it should be noted that along with other negative consequences of the COVID-19 pandemic on general personal wellbeing (Cannito et al., 2020; O'Connor et al., 2021), relational phenomena (Cannito et al., 2022b), and economic and community organization (Cannito et al., 2021; Ceccato et al., 2021; Di Crosta et al., 2021), a consistent body of results suggests an increase in the prevalence of internet-based addictive behaviors (Masaeli and Farhadi, 2021) and of smartphone misuse and separation anxiety (known as nomophobia) or dependency (Caponnetto et al., 2021). While this increase may be reasonably understood, since technology use was the essential basis of adaptability for remote working, schooling, and professional training, particularly during the strict lockdown phases, it remains unclear why some individuals continue to engage in these dysfunctional behaviors, even presenting the typical symptomatic manifestations associated with addiction, including physiological and cognitive modifications (Konok et al., 2017). Intriguingly, since the beginning of the pandemic emergency, a consistent number of studies have reported a significant increase in the experience of boredom among the population (Danckert, 2022), with fluctuations in levels of boredom associated with changes in the perceived passage of time during the lockdown phases (Wessels et al., 2022) and with boredom proneness predicting the violation of restrictive measures adopted by governments (Boylan et al., 2021). While there is converging evidence suggesting a concomitant increase in internet addiction and the experience of boredom among the population, how emotional dysregulation associated with the experience of boredom promotes addictive behaviors remains an open question.
Following recent theorization suggesting the relevant role of attentional processes as core cognitive components of boredom (i.e., the MAC model; Westgate and Wilson, 2018), in the current study we aimed to investigate the joint role of trait boredom (i.e., boredom susceptibility, the dispositional tendency to experience boredom) and altered attentive processing of relevant stimuli (i.e., attentional bias) in predicting internet addiction risk level.
The role of boredom in addiction
Despite its theoretical significance as an indicator of psychological well-being and its prompting role in some human behavioral patterns, the emotion of boredom started to receive more structured attention from the psychology community only in recent years, probably due to the long-standing debate on boredom's definition and nature (Fultz et al., 2022). According to the current literature, boredom can be defined as the subjective experience of being in a state perceived as undesirable and unpleasant (Eastwood et al., 2012), associated with a strong difficulty in maintaining attention and a tendency toward cognitive disengagement (Goetz and Hall, 2014; Elpidorou, 2018), as well as with a perceived slow passage of time (Witowska et al., 2020), which generally prompts people to take action to escape the present moment (Westgate and Wilson, 2018). Several models have been proposed to explain the emotion of boredom, most of which fall within three categories: attentional models, arousal/environmental models, and meaning/functional models of boredom. The first group (Eastwood et al., 2012) suggests that boredom results from a lack of engagement and attention to the task being performed; therefore, when a task is perceived as uninteresting, it becomes difficult to sustain attention and focus, leading to boredom. For the arousal/environmental models (Cox, 1980; Chin et al., 2017), boredom is a result of low levels of physiological arousal and a lack of stimulation from the environment; therefore, people who are bored seek new and exciting experiences to increase their level of arousal. According to meaning/functional models (van Tilburg and Igou, 2012), boredom is a result of a lack of meaning and purpose in an activity, so when people feel that their actions are unimportant, they become bored and disengaged. Therefore, boredom's function is to communicate the worthlessness of the current action in which the individual is involved (Westgate and Wilson, 2018). Among all of them, the model that has received the most support from experimental evidence is the MAC (Meaning and Attentional Components) model of boredom and cognitive engagement, according to which attention and meaning work as independent predictors of boredom and are both required to avoid the experience of boredom (Westgate and Wilson, 2018).
In addition to the literature investigating the nature of boredom from a theoretical point of view, in recent years evidence has accumulated showing the possible positive and negative behavioral consequences induced by boredom. For example, it has been shown that creativity may serve as a cognitive coping strategy to reduce boredom that motivates an individual to pursue new goals, thus suggesting a positive contribution of boredom in promoting behaviors that improve an individual's state (Elpidorou, 2018; Westgate, 2020). On the other side, boredom has also been shown to promote an individual's involvement in undesirable behavior, such as an optimistic perception of risk and consequently increased risk-taking (Kılıç et al., 2020; Bench et al., 2021), or an increased risk of substance use disorders and addiction, particularly among the youngest (Biolcati et al., 2016; Yang et al., 2020; Donati et al., 2022).
While most of the available evidence on the causal impact of boredom on addiction pertains to state boredom as a negative transient emotion experienced in a specific situation, recent contributions suggest that trait boredom (also known as boredom susceptibility or boredom proneness in the literature) accounts for negative behavioral outcomes, particularly during the COVID-19 pandemic, independent of state boredom (Weiss et al., 2022).
Boredom as a trait refers to an individual's stable tendency to easily experience boredom in several situations or activities. People who score high on measures of boredom proneness tend to find it difficult to be satisfied with their surroundings and may have a low tolerance for repetitive or unengaging experiences. It is important to note that trait boredom is a complex and multi-faceted trait that can be influenced by various individual, situational, and environmental factors. Tam et al. (2021) recently suggested that individual differences in trait boredom are reflected by differences in three macro-components: the frequency of getting bored, the intensity of boredom, and a holistic perception of life being boring, defined as perceived life boredom (Tam et al., 2021).
Following previous literature, it can be hypothesized that the level of trait boredom positively predicts levels of internet addiction.
The role of attentional bias in addiction
The literature on cognitive correlates of addiction has long uncovered a very robust mechanism known as attentional bias (AB). AB manifests itself as a distortion of the normal processes that support selective attention, producing a strong tendency to direct attention toward addictive stimuli (engagement phase) and/or difficulty in shifting focus away from such stimuli (disengagement phase). AB is commonly measured via a dot-probe task in which an addiction-related picture and a neutral picture are presented side by side (Lorenz et al., 2013). One of the two pictures is then replaced by a target (x) and participants are asked to indicate its position. People respond more quickly to the target if it appears in a frequently attended spatial area surrounding one of the two pictures (Posner et al., 1980). As individuals suffering from an addiction respond more quickly to targets that replace addiction-related pictures, it has been suggested that they have heightened attention toward these stimuli (Field and Cox, 2008). Across addiction categories, this bias is considered to play an important role in the development and maintenance of dysfunctional addictive behavioral patterns. For internet- and social media-based addiction, it has been described as a tendency to pay more attention (both visual and auditory) to internet-related cues, such as images of computer screens or notifications from social media, compared to neutral stimuli (Nikolaidou et al., 2019; Zhao et al., 2022). This bias in selective attention is related to increased craving and internet use frequency and is also associated with differences in neural correlates; for example, an increased ERP late positive potential to game-related stimuli was reported in a sample of individuals with internet gaming disorder (Kim et al., 2021). Additionally, studies have shown that AB toward internet- and social media-related cues can be modified via attentional bias modification techniques, such as cognitive bias modification for addiction, which has shown promising results in reducing internet and social media craving and use (Xiaoxia et al., 2020; Camilla et al., 2022). Therefore, it was hypothesized that AB toward addiction-related stimuli would work as a positive predictor of internet addiction as measured in the current study.
Participants
The sample included 70 (N = 13 men, mean age 19.42 ± 1.54 SD) Italian student participants. All the participants provided written informed consent in accordance with the ethical standards of the Declaration of Helsinki (1964). Participants were recruited through online public announcements and received no monetary or other compensation for their participation. To take part in the study, participants were required to be social network users and not to be diagnosed with any neurological or psychiatric condition. This information was self-reported by participants during the recruitment phase by answering two questions (1. Have you ever been diagnosed with a neurological or psychiatric condition? 2. Have you ever taken medication because of a neurological or psychiatric condition?). Exclusion from participation was determined by a positive response to either one or both questions. The whole experimental procedure was conducted in the laboratory and participants were instructed to perform the task and provide their answers to the questionnaires. For the visual dot-probe task, participants were asked to sit in front of a computer screen while maintaining a distance of approximately 60 cm from the center of the screen throughout the duration of the task.
Measures

Internet addiction test
To measure participants' risk level for internet addiction, we administered the Italian version of the Internet Addiction Test, hereafter IAT (Casale and Fioravanti, 2015; Servidio, 2017), adapted from the original scale (Young and Rogers, 1998). The scale includes 20 items on a 5-point Likert scale (from 1 = Never to 5 = Always) measuring the risk for internet addiction, with a possible score ranging from 0 to 100. Following Young's original classification (Young and Rogers, 1998; Young and Case, 2004), a participant reporting a score above 30 should be considered at risk. The scale classifies the risk level for addiction into four possible levels: severe risk (scores ranging from 80 to 100), moderate risk (scores ranging from 50 to 79), mild risk (scores ranging from 31 to 49), and no risk (normal usage, scores ranging from 0 to 30). Based on this classification and the reported responses, our sample was distributed as follows: severe risk = 0%; moderate risk = 13.1%; mild risk = 69.5%; no risk = 17.4%. For our sample, Cronbach's α for this scale was 0.86.
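As a compact illustration of the scoring rule just described, the sketch below maps raw IAT totals onto Young's four risk levels; the function name and example scores are invented for demonstration.

```python
def iat_risk_level(score):
    """Classify an Internet Addiction Test total score (0-100) following
    Young's cut-offs as reported above."""
    if score >= 80:
        return "severe risk"
    if score >= 50:
        return "moderate risk"
    if score >= 31:
        return "mild risk"
    return "no risk (normal usage)"

for s in (25, 43, 62, 85):  # hypothetical participant totals
    print(s, "->", iat_risk_level(s))
```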
Trait boredom
To measure trait boredom, participants were administered the Italian version of the Brief Sensation-Seeking Scale, hereafter BSSS (Primi et al., 2011). The scale, developed as the shortest version of the Sensation-Seeking Scale (Zuckerman et al., 1978), allows four different factors to be measured, among which is boredom susceptibility (BS-BSSS), consisting of an aversion to repetition and routine, and restlessness when things are not changing (Zuckerman, 1994). Participants are required to express their agreement with eight items on a 5-point Likert scale (from 1 = Strongly Disagree to 5 = Strongly Agree). Based on our participants' observed answers, the BSSS scale reports a Cronbach's α = 0.89. Cronbach's α for the boredom susceptibility subscale was 0.84.
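For readers unfamiliar with the reliability coefficient reported here, the sketch below computes Cronbach's α from a participants-by-items response matrix; the data are random placeholders, not the study's responses.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (participants x items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
fake_responses = rng.integers(1, 6, size=(69, 8))  # 69 participants, 8 Likert items
print(f"alpha = {cronbach_alpha(fake_responses):.2f}")
```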
Stimuli selection
A total of 80 pictures (20 social network logos, 20 brand logos, and 40 national flags), standardized for size and brightness, were selected from the web and administered to an independent sample (N = 35, mean age = 20.1 ± 3.4 SD years old) in order to select 10 pictures highly associated with social networks (10 social network logos) and 30 pictures not associated with social networks (10 brand logos and 20 national flags). For this purpose, participants were asked to indicate how much, from 0 (not at all) to 100 (very much), the presented picture was associated with the idea of a social network. The questionnaire was administered via Qualtrics software. To construct the dot-probe task's test trials, we then selected the 10 social network logo pictures with the highest evaluation and the 10 brand logo pictures with the lowest evaluation. Similarly, to construct the filler trials, we selected the 20 national flag pictures with the lowest evaluation (for details see Supplementary Table 1).
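The selection step just described amounts to ranking pictures by their mean association rating; a minimal pandas sketch, with hypothetical file and column names, is shown below.

```python
import pandas as pd

# Hypothetical layout: one row per (picture, rater) with the 0-100 association score.
ratings = pd.read_csv("norming_ratings.csv")  # columns: picture, category, score

means = ratings.groupby(["category", "picture"])["score"].mean().reset_index()

# Ten social-network logos with the highest mean association...
sn_pics = (means[means["category"] == "social_network"]
           .nlargest(10, "score")["picture"].tolist())
# ...and ten brand logos with the lowest.
brand_pics = (means[means["category"] == "brand"]
              .nsmallest(10, "score")["picture"].tolist())
```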
Dot-probe task
To measure attentional bias toward social network stimuli, a modified version of the standard dot-probe task (Miller and Fillmore, 2010) was employed. The task involves the presentation of 10 pairs of social network/brand visual stimuli, each presented four times based on the four possible stimulus/probe combinations (the position of the stimulus on the left or the right and the position of the probe on the left or the right), thus obtaining 40 test trials. There were also 40 filler trials, which consisted of 10 pairs of neutral pictures (national flags), each presented four times. We included the filler trials in this task to reduce possible habituation to stimuli that might occur if all trials contained images related to the brands. The 40 filler trials were randomly intermixed among the 40 test trials, for a total of 80 trials. The task was divided into two blocks: the first block with 10 practice trials (for which geometric-shaped stimuli were employed to avoid a possible familiarization effect with stimuli used for task trials) and the second block with 80 task trials (40 test trials and 40 filler trials), randomly sampled without replacement. After presenting the instructions, participants were presented with a fixation cross (+) at the center of the screen (500 ms), followed by the presentation of a pair of stimuli (social network and brand pictures for test trials, two flag pictures for filler trials) shown for 1,000 ms. The position of the pictures was randomly chosen to be either on the left or on the right of the fixation cross. After that, the two stimuli disappeared, and a probe (X) appeared in the position of one of the two objects (the duration of the probe was 1,000 ms). Participants were asked to press one key (A) if the probe was on the left and another key (L) if the probe was on the right (see Figure 1). The task was administered on a 15.6-inch screen, and pictures were presented in a box of 6 × 7 cm (visual angle = 5.72° × 6.67°, calculated using a viewing distance of 60 cm) to the left and right sides of the centered fixation cross, with a distance of 10 cm between the two. Attentional bias is determined as the difference in reaction times between congruent trials (trials in which the probe replaces the target stimulus, here the social network picture) and incongruent trials (the probe replaces the brand picture). For individuals whose attention is systematically drawn to the social network stimuli, reaction times are expected to be shorter (i.e., faster) for trials where the probe replaces the social network picture compared to trials where the probe replaces the brand picture.
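To make the definition concrete, a per-participant AB score can be computed as the mean reaction-time difference between incongruent and congruent test trials, with positive values indicating attention drawn toward the social network picture. A minimal sketch, with hypothetical file and column names:

```python
import pandas as pd

# Hypothetical layout: one row per test trial.
trials = pd.read_csv("dotprobe_trials.csv")  # columns: participant, trial_type, rt_ms

# Mean RT per participant for each trial type (pivot_table averages by default).
mean_rt = trials.pivot_table(index="participant", columns="trial_type", values="rt_ms")

# AB > 0 means faster responses when the probe replaced the social network logo.
mean_rt["ab_score"] = mean_rt["incongruent"] - mean_rt["congruent"]
print(mean_rt["ab_score"].describe())
```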
Due to a technical error during task administration, which prevented responses to the dot-probe task from being recorded, one participant was removed from the sample. The final sample included 69 participants (N = 13 men, mean age = 19.42 years ± 1.55 SD). Good accuracy was found for all trial types: accuracy when the probe replaced the target stimuli (i.e., social network logos) was 92.7%, and accuracy when the probe replaced a neutral stimulus from either the test or filler trials (i.e., brand logos and national flags) was 91.1%. The overall accuracy was 92.9%. Before calculating attentional bias, some data filtering was performed.
Trials with incorrect responses were not included in the dataset, and reaction times shorter than 250 ms or longer than 1,000 ms were excluded. As a result, 89.3% of the original data were included in the following analyses. Each participant's mean reaction time to probes was calculated per trial type (congruent versus incongruent). When considering test trials (no filler trials) in the whole sample, no significant RT difference was found between probes that replaced the target stimuli of social network logos (congruent trials, M = 360.94 ± 61.44 SD) and probes that replaced neutral images of brand logos (incongruent trials, M = 368.32 ± 68.32 SD), t(68) = −0.912, p > 0.05. Pearson correlation coefficients were computed to assess the linear relationship between AB score, IAT score, and boredom susceptibility as obtained via the BSSS. The results suggest significant positive correlations between all three variables (see Table 1 for details).
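The filtering rules and the paired comparison just described take only a few lines; the CSV layout and the `correct` flag are hypothetical, and SciPy's paired t-test stands in for whatever software was actually used.

```python
import pandas as pd
from scipy import stats

trials = pd.read_csv("dotprobe_trials.csv")  # columns: participant, trial_type, rt_ms, correct

# Keep correct responses with latencies between 250 and 1,000 ms.
clean = trials[(trials["correct"] == 1) & trials["rt_ms"].between(250, 1000)]
rt = clean.pivot_table(index="participant", columns="trial_type", values="rt_ms")

# Paired comparison of congruent vs. incongruent mean RTs across participants.
t_stat, p_value = stats.ttest_rel(rt["congruent"], rt["incongruent"])
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```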
Also, an independent-samples t-test was performed to assess differences in IAT scores between participants who showed an AB (AB > 0) and participants who did not show an AB toward social network stimuli (AB ≤ 0). The results indicated a significant difference in IAT scores, with significantly higher internet addiction levels for participants who showed AB (N = 33, M = 43.48, SD = 9.81) than for participants who did not (N = 36, M = 35.94, SD = 7.44), t(67) = 3.61, p = 0.001, thus suggesting a significant contribution of an altered selective attentive process in internet addiction (see Figure 2A).
To test the hypothesis that being more prone to boredom may increase the risk for internet addiction both directly and indirectly, through the intervention of an altered attentional engagement mechanism toward addiction-related stimuli, a mediation model was performed. As a first step, simple linear regression was used to test whether boredom susceptibility significantly predicted the IAT score. The fitted regression model was: IAT = 30.98 + 1.33 × (boredom susceptibility). The overall regression model was statistically significant, R² = 0.061, F(1, 67) = 4.36, p = 0.04. Given the predictive role of boredom susceptibility on IAT score, a simple mediation analysis was conducted using the SPSS (IBM SPSS, v. 22) PROCESS macro, applying Model 4 and bootstrapping with 5,000 resamples to estimate indirect effects (Hayes and Preacher, 2013). This model is designed to test a situation in which the relationship between an outcome variable (IAT score) and a predictor variable (boredom susceptibility) can be explained by their relationship with a third variable (AB), named a mediator (Field, 2013).
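For readers working outside SPSS, an equivalent simple mediation with a bootstrapped indirect effect can be run with the `pingouin` package; the DataFrame and column names below are hypothetical stand-ins for the study's variables.

```python
import pandas as pd
import pingouin as pg

df = pd.read_csv("participants.csv")  # hypothetical columns: bs_bsss, ab_score, iat

# X = boredom susceptibility, M = attentional bias, Y = internet addiction score;
# 5,000 bootstrap resamples to mirror the analysis described above.
results = pg.mediation_analysis(data=df, x="bs_bsss", m="ab_score", y="iat",
                                n_boot=5000, seed=42)
print(results)  # paths X->M and M->Y, total, direct, and indirect effects
```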
The kappa-squared (κ²) value was calculated to measure the size of the indirect effect: a value around 0.25 indicates a large effect, a value around 0.09 indicates a medium effect, and a small effect is expected to be around 0.01 (Field, 2013). For the proposed mediation model, κ² = 0.08 was computed (see Figure 3), indicating a full mediation effect of attentional bias toward social network stimuli on the relationship between trait boredom and internet addiction level.
Discussion
In the current study, attentional bias toward social networks has been identified as a mediator in the relation between trait boredom and internet addiction, suggesting that when someone chronically experiences boredom, their visual attention is more likely to be drawn to social media-related cues, increasing their risk of developing internet addiction. Indeed, mere exposure to addictive stimuli works as a factor that increases the risk of engaging in addictive behaviors. This is likely because social networks provide an easy form of entertainment and distraction from boredom, which can lead to a cycle of seeking out more and more online activities as a means of escape.
Our results are in line with a study by Al-Saggaf et al. (2019), which found that internet addiction, fear of missing out, and self-control were all related to trait boredom, with boredom proneness emerging as a significant positive predictor of internet addiction. Our results are also in line with those obtained by Zhao et al. (2022), which suggest that problematic use of social media is associated with a higher attentional bias toward social media, and both are associated with a higher experience of negative emotions (anxiety, depression, social fear, and loneliness), even if the emotions examined were not limited to boredom (Zhao et al., 2022).
Similarly, evidence from another study indicated that boredom proneness in adolescents was linked to a wide range of risky behaviors, including internet addiction, binge drinking, problem gambling, and sexual activity during free time (Biolcati et al., 2018). The authors concluded that boredom proneness could be a significant risk factor for problem behaviors in adolescents and could be an important factor to consider when designing interventions to reduce risk by introducing new practices to manage free time, thereby targeting at least the frequency of getting bored among the three factors proposed as core components of trait boredom. While our results corroborate the existing literature in supporting the idea that trait boredom may be a crucial element in defining a risk profile for internet addiction, they also add a new element to our understanding of the dynamic characteristics of this disorder. In particular, the evidence that the predictive role of trait boredom is fully mediated by attentional bias toward disorder-relevant stimuli leads to at least two considerations. First, the idea that a stable individual trait's influence on dysfunctional behaviors can be minimized by a less stable, treatable cognitive characteristic is itself encouraging and promising for the investigation of intervention protocols for this disorder. Second, and more in need of further exploration, is the idea that intervention protocols for reducing internet addiction should not focus exclusively on personality traits and affect modification. Often, the structure of these interventions is strongly focused on the reduction of non-engagement and trait boredom through involvement in stimulating activities, particularly during free/leisure time (Waterschoot et al., 2021). However, our findings suggest that this may not be sufficiently effective if it is not accompanied by the modification of cognitive alterations, such as those affecting the attentional system, associated with the disorder itself. Taken together, and from a cognitive-behavioral perspective, most of the currently available interventions seem to focus on modulating behavioral aspects (e.g., associated with motor activation or avoidance reduction), while less attention has been paid to managing cognitive aspects. In this sense, an involuntary alteration of attentional focusing patterns toward addictive stimuli may be interpreted as a dysfunctional coping strategy aimed at managing boredom when the perception of this emotion exceeds the threshold of tolerance. Therefore, it would be useful to further investigate this relationship and to test the modification of this coping strategy, based on voluntary alteration of attentional focus, as a possible therapeutic intervention.
The current study presents some limitations. First, since the instrument used in our study to measure the risk of internet addiction was developed several years ago, future studies should test the validity of this model using a more recent instrument for the assessment of internet addiction. Moreover, our results would be further strengthened by an objective measurement of internet addiction, since our internet addiction data, as self-reports, reflect the subjective perception of the participants. Future studies should consider testing the model using a different type of measurement, such as hours spent on the internet. Second, clinical interpretation of the current results should take into account that no participant in the sample showed a severe risk of internet addiction (IAT > 80) and only a small portion (approximately 13%) presented a moderate risk (IAT = 50–79). Another limitation concerns the measurement of boredom susceptibility by means of a factor based on only two items, which may not have captured all relevant aspects of trait boredom.
Moreover, future studies should broaden the investigation of the role of attentional bias as expressed through other sensory channels (such as the acoustic one) and multisensory attentional distortion, as a possible differential involvement of attentional distortion across sensory modalities might shape and help define subsequent intervention design and testing. The role of cognitive functioning and processing in domains other than attention (such as memory, reasoning, or consciousness) should also be taken into consideration when evaluating internet addiction. Finally, it would be particularly interesting to explore whether the evidence obtained in the current study also applies to older populations (middle-aged and older adults), for which much less evidence is available in the literature on the prevalence and development of internet addiction.
Altogether, our results suggest that to reduce the risk of developing internet addiction, it is important to look for ways to cope with boredom other than social media, such as engaging in meaningful activities. Moreover, it is crucial to promote deeper integration of the available knowledge on attentive processing of addiction-related information.
FIGURE 1: Dot-probe task.
FIGURE 2: (A) Boredom susceptibility and (B) internet addiction for participants with and without attentional bias toward SN stimuli. Error bars, 95% CI. *p < 0.01.
FIGURE 3: Mediation model. Significant p-values in bold. | 2023-09-03T15:04:27.560Z | 2023-09-01T00:00:00.000 | {
"year": 2023,
"sha1": "17b0a007f21c79eb826ee258180746dc8058fb5c",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fnhum.2023.1179142/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d5de4c7eaeced02b01dd891023d4b7977627bf54",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": []
} |