id | source | version | text | added | created | metadata
|---|---|---|---|---|---|---|
220852054 | pes2o/s2orc | v3-fos-license | Colorectal Cancer and Bone Tissue: Fantastic Relations and Where to Find Them
Colorectal cancer (CRC) is the third most common cancer worldwide. There is a need for the early diagnosis of CRC to achieve a better prognostic outcome. It is, therefore, crucial to understand CRC pathogenesis in all its aspects. In many cases, one of the main causes of cancer-related deaths is the presence of metastases. In this context, an often overlooked aspect is metastatic tropism, since CRC, like other cancers, is more prone to metastasize to some organs rather than others. Beyond the liver and lung, and differently from other types of cancers, an unusual site of CRC metastases is the bone. However, it may assume a crucial role in the development and the outcome of the disease. Therefore, this review aims to discuss the complex relations between bone markers and CRC pathogenesis, suggesting the use of these molecules as potential targets for therapeutic purposes. Different osteogenic molecules, some of which are growth factors implicated in the different osteogenic pathways, have also been shown to be involved in CRC progression. Some of them are oncogenes, while others are oncosuppressors, and in a future perspective, some of them may represent new potential CRC biomarkers.
Introduction
Colorectal cancer (CRC) is widespread across the world; it represents one of the most common cancers and is among the leading causes of tumor death. Although the etiology of CRC relies on genetic causes, other factors (e.g., family history, inflammatory bowel disease, sex, smoking, folate intake, high intake of fats, alcohol, red and processed meats, and sugars) can often actively contribute to its onset [1,2]. Moreover, in 70% of cases, CRC develops from previous neoplasms, such as colorectal adenomatous polyps [3]. The ability of healthy colon epithelial cells to transform into neoplastic cells through the adenoma-carcinoma sequence has been largely described [4]. This sequence is regulated by oncogenes and oncosuppressors, which are subject to mutations and dysregulations that favor tumor development. High-throughput techniques now allow phenomena of genomic instability to be quickly associated with the processes promoting cancer development. These advances point to the possibility of early diagnosis and related therapeutic intervention [5].
A large part of the mortality rate from cancer, as well as tumor recurrence, is due to the staging and the presence or absence of metastases, where the tumor has metastasized, and whether there are micro- or macro-metastases. The process of metastasis formation is characterized by several steps, all fundamental and essential to each other. These steps are not yet fully understood, nor are the molecular pathways that may regulate their occurrence and which genes are expressed and which are not. It is necessary that epithelial cells, following genetic mutations, become the trigger of what then evolves into carcinoma in situ. Later, some cancer cells detach from the primary mass to spread and place themselves in a distant site in the body where they form a metastasis [6]. This process is possible due to the occurrence of a process known as the "epithelial-mesenchymal transition" (EMT).
In this sequence of events, epithelial cells lose their different types of cell-cell and cell-extracellular matrix (ECM) junctions, as well as their apical-basal polarity, while acquiring an invasive, migratory capacity and secreting multiple components of the ECM [7]. Consequently, these cells are termed circulating tumor cells (CTCs), which invade the ECM and enter the vascular system. Thanks to the blood flow, CTCs can reach a more distant site, where inputs of the "mesenchymal-epithelial transition" (MET) allow them to acquire the capacity to perform extravasation and spread in the parenchyma. Following this inversion of the EMT process, cancer cells acquire epithelial properties again and, first of all, the high proliferative rate needed to create metastases [8].
Each tumor has preferential sites in which it produces metastases, the so-called metastatic tropism [9]. Cancer formation and progression cannot be detached from cancer stem cells (CSCs). CSCs are fundamental in different aspects of tumorigenesis, such as tumor transformation, progression, therapeutic resistance and metastatic tropism and, consequently, in the formation of metastases [10,11]. Therefore, a greater understanding of these mechanisms is crucial. The injection of allograft-derived pancreatic cancer tumor stem cells into wild-type mice [9] demonstrated the production of metastases only in the liver, or in the lung and liver, depending on whether the cell pool was inoculated by intrasplenic injection or into the caudal vein, respectively. It was also shown that the size of the metastatic masses is larger when they form in the liver than in the lung. Overall, these findings support how metastatic tropism is affected by the presence of direct blood flow that, starting from the inoculation site, can reach distant organs. Following this event, the implantation of CSCs and the production of metastases is then influenced by the microenvironment of the host organ, which may be more or less suitable.
As for CRC, after total surgical removal of the primary tumor mass, recurrences in the form of metastases can occur preferentially in the liver, lungs, lymph nodes, peritoneum and bone [12] (Figure 1). In this context, the colon and bone tissue, apparently so distant, have something in common. A disease in one of these apparatuses may well affect the physiological state of the other. They are more related than one can imagine. This review aims to describe what is known in the literature, reporting the state of the art on this topic.

Figure 1. Colorectal cancer and its metastatic tropism. Primary tumor cells can be subjected to epithelial-mesenchymal transition (EMT), in order to generate mesenchymal cells with more motility and invasiveness. These mesenchymal cells enter the bloodstream, becoming circulating cancer cells (intravasation). Through the blood flow and under cellular signals, these cells reach distant sites where they metastasize. At this point, the circulating cancer cells come out from the blood stream (extravasation) and undergo an inverse transformation, namely mesenchymal-epithelial transition (MET). Metastases are formed in preferential sites (metastatic tropism), such as liver, lung or bone.
CRC and Bone Metastases
Compared to liver and lung metastases, bone metastases in CRC occur in only 10-15% of cases [13]. In such patients, five-year survival is less than 5% [14]. The diagnostic picture of these patients is very often characterized by skeletal-related events (SREs), which make the clinical course of the disease worse. SREs can consist of the weakening of the bone structure, at both the trabecular and cortical level, and bone pain, as well as a higher probability of fractures. These pathological events worsen the patient's survival and quality of life [15,16]. In addition, gender and age are among the factors related to poor survival: Babu et al. presented a clinical study in which CRC patients with bone metastases were male and young. However, whether sex affects the prognosis of these subjects needs to be investigated in more depth [17].
Santini et al. [18] collected the clinical data of a cohort of Italian CRC patients with different skeletal problems and bone metastases. According to their findings, the bones most affected by CRC metastases were the spine (65% of cases), hip/pelvis (34% of cases), long bones (26% of cases) and other bone sites (17% of cases). These percentages highlight the need for the early diagnosis of bone problems related to CRC and, therefore, for an equally early intervention to improve and extend the patient's survival. To perform a timely diagnosis of bone metastases, a scoring technique has been assessed using different clinical factors, such as tumor localization, lymph node metastases and, finally, the presence of metachronous lung metastases as a third risk factor. This scoring technique can help clinicians immediately identify the CRC patients most at risk for the development of bone metastases, making it possible to intervene directly with suitable therapies and relieve bone metastasis-related SREs [19,20].
Baek et al. [21] reported that only 1.1% of 5479 CRC patients showed CRC-related bone metastases. Most of these patients were at a late stage of cancer at the time of the CRC diagnosis. Bone metastases were already present at diagnosis in half of them, while the other half of the patients developed bone metastases during the course of the disease. As expected, and independently of the presence of metastases in other organs, the presence of bone metastases is also associated with different SREs, a situation that makes the patients' remaining survival painful.
In CRC, bone metastases usually develop later than those in other organs or tissues, such as liver and lung metastases, and there is a preferential link between bone and lung metastases. The prognosis is more severe in cases in which the tumor metastasizes to several sites simultaneously [22]. Bone metastases, perhaps more than others, are highly debilitating because of the various bone-related clinical pictures that they entail. Therefore, it would be helpful to be able to diagnose these metastases sooner after their onset. Studies on this topic were performed by evaluating rectal and colon cancer cases separately. Recently, Zhenghong et al. [22] reported a higher percentage of bone metastases in rectal cancer patients than in colon cancer patients. This finding may depend on the broader vascularization of the rectum compared to the colon [23,24].
CRC and Bone Marrow
Several studies have investigated the interactions between colorectal cancer and bone marrow (BM). Taketo et al. reported that the loss of the oncosuppressor SMAD4 is synonymous with CRC advancement. The authors noted, in both in vitro and in vivo experiments, that the loss of SMAD4 implies the loss of the block on expression of the gene C-C motif chemokine ligand 15 (CCL15) [25,26]. In this circumstance, CCL15 is expressed by cancer cells and induces the recruitment of CCR1+ myeloid cells from the BM. C-C chemokine receptor type 1 (CCR1)+ cells have the characteristic of expressing and secreting matrix metalloproteinase 9 (MMP9), which is involved in tumor invasiveness by promoting tumor-stromal interactions. The analysis of human liver metastases related to CRC has shown that CCL15 expression, linked to a higher content of CCR1+ cells, is associated with lower patient survival with respect to CCL15-negative liver metastases [26,27]. SMAD4 and CCL15 are inversely correlated, since SMAD4 negatively regulates the CCL15 promoter, causing the inhibition of CCL15 gene expression. Moreover, inhibitors of the CCL15-CCR1 axis have been suggested as potential therapeutic agents [26].
The role of BM-derived CCR1+ myeloid cells in CRC pathogenesis was also investigated by other research groups. In this regard, Kiyasu et al. very recently reported that the depletion of CCR1 induced a reduction in CRC growth. In particular, after reconstituting sub-lethally irradiated wild-type mice with the BM of wild-type or CCR1−/− mice, they implanted colorectal cancer cells in these mouse models [28]. They noted that mice with CCR1 cell depletion showed a reduction in tumor growth and liver metastases with respect to CRC mouse models with wild-type BM. The depletion of CCR1+ myeloid cells, induced genetically or by using an anti-CCR1 antibody, caused a suppression of CRC development, indicating CCR1 as a potential therapeutic target [28].
BM metastases, although rare, can characterize CRC tumorigenesis due to the high vascularization of the bone marrow. Furthermore, the formation of BM metastases is promoted by the slowness of the bone marrow bloodstream, which helps the deposition of metastatic cells, and by the presence of several growth factors secreted following interactions between tumor cells and the BM stroma [29]. The occurrence of these conditions creates the right environment for tumor development. Metastases in BM often go unnoticed when they are mild, because they are not yet detectable with the most common imaging techniques, or they are detected late, when they are well extended and cause severe pain or osteolytic fractures [29]. BM metastases are commonly observed in different solid tumors, such as breast, lung and prostate cancer, and, rarely, in CRC patients [30]. Chuwa et al. very recently described a case report of a CRC patient with BM metastases [31]. In particular, this patient presented disseminated carcinomatosis of bone marrow (DCBM). DCBM was diagnosed by analysis of a BM biopsy, since the patient presented with persistent pancytopenia. The BM biopsy analysis showed the infiltration of non-hematopoietic malignant cells and BM necrosis, pivotal features of DCBM [31]. Micro-metastasis of the BM is related to a poor prognosis [31,32].
An important role in CRC tumorigenesis is played by mesenchymal stem cells (MSCs). These cells, derived from the BM, secrete growth factors, cytokines and chemokines into the stroma of developing tumors [33]. Nishikawa et al. reported that MSCs promote CRC progression through C-C chemokine receptor type 5 (CCR5) ligands, such as C-C motif chemokine ligand 3 (CCL3), CCL4 and CCL5. These ligands bind the receptor CCR5, expressed by CRC cells [34]. The authors also observed that high serum levels of CCR5 ligands are related to a poor prognosis in CRC patients; therefore, CCR5 ligands could have value as predictive biomarkers. As previously reported by other groups, the inhibition of CCR5, and consequently a reduction in MSC-CRC cell interactions, corresponds to a reduction in tumor growth [34][35][36].
Although numerous studies to date indicate that CRC can present with bone-related symptoms (i.e., osteolytic lesions, skeletal-related events, etc.) due to bone metastases, it remains not fully clarified how CRC cells interact with bone cells. The result of this interaction is an imbalance between the functional cells within the bone, i.e., osteoblasts and osteoclasts, usually in favor of the latter, resulting in the formation of osteolytic metastases due to a preeminent osteoclastogenesis [37]. In this process, chemokines play a relevant role, mediating the interaction between cancer and host cells. Different chemokines are implicated in the chemoattraction of CRC cells to bone tissue, promoting cancer cell metastasis. The metastatic tropism is due to the interactions between ligands present on cancer cells and their specific receptors present on the cells of certain organs, or vice versa. Gong et al. [38] very recently showed the relevance of CCL3, expressed by BM-derived monocytes, in osteoclastogenesis in CRC bone metastases. The authors reported that CRC cell-derived epidermal growth factor (EGF) activates BM-derived monocytes and stimulates their high CCL3 expression. CCL3 promotes osteoclast maturation and, consequently, osteoclastogenesis [38]. Another interaction implicated in cancer cell recruitment has been demonstrated between CXCR4, expressed on CRC cells, and CXCL12, located on BM-derived cells. Furthermore, Itatani et al. [12] described in detail other interactions existing between CRC cells and other myeloid cells.
Several proteins are differently involved in the relations between bone tissue and CRC by promoting tumor cell invasion and increasing the activity of other molecules with possible interferences in osteoinductive processes. A series of molecules, which are involved in these processes to varying degrees, is addressed below (Table 1).

Table 1. Molecular factors and their mechanisms of action in bone tissue and in Colorectal Cancer (CRC).

Molecular Factor | Mechanism of Action in Bone Tissue
---|---
BMP9 | Stimulation of the production of bone tissue
Bone Morphogenetic Proteins (BMPs)
Although an interrelationship between bone tissue and a distant tumor can represent a common event, the association between bone and CRC is rather rare. Clinical cases of heterotopic ossification have been reported in some tumors, and this process might also happen in some cases of CRC. The mechanism behind this pathological process is not well understood. However, immunohistochemical analyses have revealed the involvement of proteins specific to bone tissue cells. The most accredited theory proposes a mechanism similar to the EMT, in which mesenchymal cells differentiate into osteoprogenitor cells in response to molecular inputs secreted by cancer cells. These inputs consist of proteins secreted by cancer cells, the bone morphogenetic proteins (BMPs), of which there are 24 different subtypes according to their amino acid sequences [39].
In the case of heterotopic ossification, the most common subtypes found include BMP2, BMP5, BMP6 and BMP9. The latter can be considered as one of the major osteoinductive factors [40]. All these proteins are essential for bone differentiation and maintaining the balance between tissue formation and erosion; osteoblasts, osteoclasts and chondrocytes produce them. BMPs are also implicated in the development of extraskeletal organs and neoplastic cells.
Thanks to molecular investigations on the colon that identified the expression of osteogenic proteins, Noh et al. [41] performed the analysis of a clinical case of heterotopic ossification in a CRC patient. The authors established that some cancer cells could express osteoblast phenotypical markers. These cells, with an osteoblast-like phenotype, secrete osteoinductive molecules, such as BMP9, osteocalcin and osteopontin. The release of these factors induces the transformation of other peritumoral cells, eventually leading to heterotopic ossification of the tissue.
BMP9
Bone morphogenetic protein 9 (BMP9) is one of the bone morphogenetic proteins. These molecules are part of the transforming growth factor-β (TGF-β) superfamily [42]. BMP9 and the others are implicated in various physiological processes, such as proliferation, migration, adhesion and apoptosis [42]. Therefore, their dysregulation can lead to the aberrant functionality of downstream signal pathways, eventually creating a protumoral context. This event occurs at the initiation of several neoplasms, also including CRC [43].
Among the BMPs, BMP9 has been proven to be the one with the greatest osteoinductive effect [44]. BMP9 has two modes of action. The first is the canonical BMP/Smad pathway: after the binding of BMP9 to its type I or type II BMPR receptor, Smad 1/5/8 are activated by phosphorylation; Smad 1/5/8 then bind Smad 4, and this molecular complex moves to the nucleus, where it activates other targets. The second mode of action of BMP9 is the non-canonical BMP/Smad pathway, which involves the p38 MAPK and PI3K/AKT pathways [45,46].
The action of BMP9 is currently highly debated, as this protein does not seem to show the same behavior in all tumors. BMP9 has been reported to have a protumoral action in liver sarcoma, ovarian sarcoma and osteosarcoma [47,48], while showing an anticancer activity in breast and gastric cancer [49,50]. As for CRC, Yuan et al. [51] evaluated the action of resveratrol both in vitro on LoVo cells and in vivo by inducing colon tumorigenesis in a mouse model. The findings from that study revealed that resveratrol acts through BMP9, which shows an antitumor and proapoptotic tendency. Furthermore, with the use of inhibitors of p38 and of the BMPR receptor, they demonstrated that the action of BMP9 in CRC is exerted through its binding to the receptor and the intracellular activation of the p38 MAPK pathway, which is also a fundamental pathway in BMP9-induced osteogenesis.
The involvement of BMP9 in colonic tumorigenesis has also been highlighted by an interesting and recent study on the potential beneficial anti-neoplastic effects of a natural component against CRC [52]. This study demonstrated that evodiamine (Evo), a quinolone alkaloid extracted from the traditional herbal medicine Evodia rutaecarpa, had an antitumor activity, and the authors tried to elucidate its molecular mechanisms. By treating a colon cancer cell line, HCT116 cells, with Evo, an increase in BMP9 expression was observed. The authors evaluated whether the increase in the expression of BMP9, with its antitumor and anti-proliferative effects, might involve hypoxia-inducible factor α (HIF-α). HIF-α is a proangiogenic factor that stimulates the formation of new vessels, an essential step for tumor progression. By treating HCT116 cells with Evo and using these cells to induce cancer in a mouse model following injection into the hip, the mechanism of the anticancer effect of Evo and BMP9 in CRC was investigated. In experiments where BMP9 was overexpressed or silenced in HCT116 cells, the antitumor action of Evo, its upregulation of HIF-α and its activation of the oncosuppressor p53 increased with BMP9 overexpression and decreased with BMP9 silencing [52].
BMP5
Bone morphogenetic protein 5 (BMP5) also belongs to the BMP superfamily, a subgroup of the TGFβ family. The homonymous gene encodes BMP5 as a pre-protein, which is enzymatically processed to give protein subunits arranged into homodimers and has a role in bone and cartilage development.
Like other BMPs, BMP5 is implicated in several malignancies, including CRC. Chen et al. [53] found a reduction in BMP5 expression at both the gene and protein levels in CRC, and this reduction correlated with shorter patient survival. Furthermore, genomic sequencing analyses clarified that, in some cases, these are loss-of-function mutations. This evidence suggests that BMP5 has an anticancer action and, therefore, that its decline has a role in tumor transformation, as well as in EMT and in the loss of epithelial markers [54].
Studies involving the transfection of colon carcinoma cells with a BMP5-expressing viral vector demonstrated that these cells overexpress a cell cycle regulator and tumor suppressor, cyclin-dependent kinase inhibitor 1C (CDKN1C), compared to cells in which BMP5 is downregulated. CDKN1C induces cell cycle arrest in the G1 phase. Furthermore, the cellular overexpression of BMP5 also leads to a higher expression of epithelial markers and a loss of mesenchymal markers [53].
With the advent of the microRNA era, the link between miR-32 and BMP5 has been studied in more detail. miR-32 is a short strand of RNA of only 20-25 nucleotides. miRNAs negatively regulate gene expression and are involved in the regulation of physiological processes, but their dysregulation often underlies pathological processes, including CRC.
Careful and extensive bioinformatic analyses have highlighted the involvement of dozens of miRNAs in CRC patients. Among them, one of the most involved is miR-32 [55], which exhibits different functions depending on the pathology, behaving as either a tumor suppressor or an oncogene. In CRC, it has been observed that dysregulated miR-32 acts with protumoral, pro-proliferative and invasive effects. Hence, miR-32 dysregulation is associated with poor patient survival. The authors identified BMP5 as a target of miR-32 by using expression and clinical data from The Cancer Genome Atlas (TCGA). They found that miR-32 expression levels are inversely correlated with BMP5 expression levels and that this correlation is stronger in advanced CRC [56].
Osteoprotegerin
Osteoprotegerin (OPG) belongs to the tumor necrosis factor receptor (TNFR) family and, as the name implies, is a soluble protein that protects bone tissue from the erosive action of osteoclasts. It is secreted by osteoblasts. OPG is a decoy receptor for the receptor activator of nuclear factor kappa-B ligand (RANKL), the ligand of the RANK receptor; in this way, OPG prevents the RANKL-RANK ligand-receptor interaction, avoiding the maturation of osteoclasts, on whose surface RANK is present, and, therefore, their activation and erosion of the bone tissue. Hence, OPG acts as a negative regulator of bone turnover [57,58].
OPG is also expressed in several cancers, including CRC. In vitro studies show that colon cancer cells, such as HT-29 and SW480, express OPG at both the gene and protein levels. This result has been confirmed by in vivo studies in mouse models of colon tumors obtained after injection of the above-mentioned cell lines. In these models, the expression of OPG was found in tumors by using immunohistochemical analysis carried out on fixed and paraffin-embedded tumor masses [59,60]. The role of OPG was also investigated in the specific tumor context of CRC. OPG is also a decoy receptor for TNF-related apoptosis-inducing ligand (TRAIL), which is expressed by immune cells and induces apoptosis in cancer cells. The binding of OPG to TRAIL therefore prevents TRAIL from binding membrane death receptors, such as death receptors 4 and 5 (DR4-DR5), allowing cancer cells to escape TRAIL-induced apoptosis [60,61].
In vitro studies on colon carcinoma cells have allowed this mode of action of OPG to be observed. In cells where OPG is silenced at the gene level by the use of siRNA, or neutralized at the protein level through the use of neutralizing antibodies in culture, TRAIL-induced apoptosis increases [59,62]. By using ELISA, high levels of OPG have been found in the culture medium of colon carcinoma cell lines.
Clinical studies have been conducted to measure serum OPG levels in patients with CRC. In these studies, OPG levels correlated with the staging of the tumor; interestingly, the late stages of CRC corresponded to high serum OPG values [63,64]. Since the aggressiveness of CRC, as of all tumors in general, is also due to the resistance of cancer cells to apoptosis, one possible therapy is the exogenous administration of proapoptotic factors, such as TRAIL [65]. Tsukamoto et al. [66] demonstrated for the first time that OPG gene and protein expression increased in metastatic CRC cases compared to patients who had not yet developed metastases. However, CRC, especially when it is at an advanced stage, seems to be resistant to TRAIL, highlighting the problem of how to remedy this pathological condition. As a consequence, OPG is now considered a potential prognostic marker and therapeutic target: the evaluation of its serum levels could allow the stage of the pathology to be assessed, and OPG itself could represent the object of targeted therapy [66].
The pro-tumor action of OPG in CRC was not confirmed by Kim et al. [67]. The authors reported that OPG gene and protein expression levels were decreased in colon cancer cells in comparison to normal colon epithelial cells. Moreover, the downregulation of OPG was first hypothesized and then confirmed to depend on the degree of hypermethylation of the OPG gene promoter: when CRC cells were treated with a demethylating agent, OPG gene expression was recovered in comparison to the same cells before treatment. Therefore, OPG is described, rather, as an oncosuppressor in CRC, whose reduced expression may be considered an index of the worsening of the clinical picture in CRC patients. According to the authors, these data, which contrast with previously published papers, may be explained by the fact that a comparison with normal colon epithelial cells had not previously been performed. Since the available clinical studies have not been performed on a sufficiently large number of patients, there is a need for both further laboratory and clinical research. Based on the available data, OPG is undoubtedly actively involved in CRC tumorigenesis [67].
Osteopontin
Osteopontin (OPN) is a protein involved in bone remodeling and represents the major organic, non-collagenous component of the extracellular bone matrix. It is expressed by the precursors of osteoblasts and osteoclasts, as well as by mature osteoclasts, osteocytes and osteoblasts. As the name implies, OPN acts as a bridge between the osteoclasts and the inorganic part of the bone, the hydroxyapatite crystals, through its binding to the αvβ3 and CD44 receptors [68,69]. OPN is involved not only in bone turnover, but also in other physiological and pathological processes, including tumorigenesis.
OPN is a potential tumor marker in CRC. In fact, OPN gene and protein expression levels are higher in colon cancer cell lines than in normal colon epithelial cells, as well as in CRC tissue samples compared to non-tumor samples. The expression of OPN in CRC also correlates with the staging of the tumor, and, therefore, even with the survival of the patient, in whom high expression of OPN is synonymous with a poor prognosis [70].
The overexpression of OPN in colon cancer cells induces faster cell proliferation, with a higher migration capacity than in cells transfected with the empty vector. These tumor-promoting activities were confirmed in vivo by inoculating mouse models with OPN-overexpressing cells; under such conditions, larger tumor masses and metastases develop compared to controls. Immunohistochemical analyses with CD31 antibodies have shown higher angiogenesis in OPN-overexpressing tumor masses than in those obtained by inoculating mouse models with colon cancer cells transfected with an empty vector. Overall, these results support OPN as a CRC tumorigenesis factor that increases the metastatic potential of the tumor [71].
The mechanism underlying the involvement of OPN in tumorigenesis has been clarified by using a murine colon cancer cell line, CT-26, which is highly invasive, has metastatic potential and expresses OPN at high levels. CT-26 cells silenced for OPN were inoculated into immunosuppressed mouse models. Cells with silenced OPN showed a reduced formation of metastases compared to control cells, due to the reduction in motility and cell invasiveness. The reduced cell invasiveness is explained by a decreased expression of matrix metalloproteases, specifically MMP-2 and MMP-9 [72,73]. These data were also confirmed by experiments with siRNA-OPN: gene silencing of OPN reduced the proliferation, adhesion and migration of another human colon carcinoma cell line, LoVo. These results were also associated with a lessening of the angiogenic process in CRC [74]. Gene analyses have made it possible to identify recurrent single nucleotide polymorphisms (SNPs) associated with a greater tendency for tumor development. Data analysis on Chinese CRC patients revealed a higher risk of developing CRC in the presence of the A allele of SNP rs9138 in the 3'UTR region of the OPN gene and the C allele of SNP rs1126616 in exon 7 of the OPN gene. However, no statistically significant relationship between the patients' genotype and serum OPN levels was found [75]. On the contrary, Kamal et al. [76] found contrasting data by carrying out a similar study on Egyptian patients. This finding is not so surprising, since different ethnic groups may show varied polymorphisms, and the same allele of a SNP can give different results in diverse populations.
The aggressiveness and invasiveness of cancer cells correlate with a lower state of differentiation and with stem characteristics. The association between the protumor properties of OPN in CRC and cancer cell stemness was found by Ng et al. [77]. After inducing the overexpression of OPN in the DLD-1 cell line, the authors reported that these cells show high expression levels of the stem cell markers SOX2 and OCT4. Furthermore, when these DLD1-OPN cells were treated with oxaliplatin, they showed resistance to apoptosis compared to control DLD-1 cells. These data were then confirmed in vivo in CRC tissues and patients. Positivity for SOX2 expression was found in OPN-positive CRC tissues, demonstrating that these cancer cells show stem properties. Oxaliplatin-treated CRC patients who were resistant to the chemotherapy drug showed higher OPN expression levels than drug-sensitive CRC patients. Therefore, OPN could even serve as a reference value for identifying a CRC patient's response to oxaliplatin treatment [77].
Lastly, the possible mechanisms by which OPN promotes tumorigenesis in CRC include the inhibition of autophagy, which appears to be dysregulated in this neoplasm, through alterations in the p38-MAPK pathway [78]. OPN can inhibit the autophagy process through the p38-MAPK pathway, as demonstrated by an in vitro study on the HCT116 cell line. The administration of exogenous OPN to these cells induced augmented p38-MAPK phosphorylation, along with a reduction in the gene and protein expression of molecules involved in the formation of the autophagosome (such as Beclin1, Atg4b, Bnip3 and Vps34) [79].
Bone Sialoprotein
Bone sialoprotein (BSP) is the structural glycoprotein of the bone matrix that defines the initial process of bone formation and constitutes 15% of the non-collagen proteins of this tissue. It is expressed by different cells of the bone tissue and the cartilage. Like OPN, BSP has an important domain for binding with αvβ3 integrin, which mediates the interaction of osteoblasts and osteoclasts with the bone matrix during bone resorption. In particular, it has been shown that BSP regulates the processes of deposition and mineralization of the bone, but it is also involved in the opposite process, bone resorption [80]. Some findings also suggest the expression of BSP in various neoplasms and non-tumoral diseases. A study by Fedarko et al. [81] described the expression of BSP in the serum of patients with different cancers, including CRC samples, and reported that circulating BSP levels were significantly elevated in CRC patients.
Tartrate-Resistant Acid Phosphatase
Tartrate-resistant acid phosphatase (TRAP) is a glycosylated metalloprotein with a fundamental role in many biological processes of skeletal development. TRAP is highly expressed in osteoclasts [82]. It is capable of degrading phosphoproteins of the bone matrix, including OPN and BSP; OPN and BSP, even when only partially dephosphorylated, are unable to bind osteoclasts. Therefore, it has been hypothesized that TRAP is secreted by the osteoclast and dephosphorylates osteopontin, thus allowing osteoclast migration and the occurrence of further resorption [83]. In addition to these functions, TRAP, which is also expressed by activated macrophages, seems to have a role in innate immunity. A study performed on a cohort of CRC patients by Nagorsen et al. [84] found that high expression of TRAP is associated with longer survival in CRC patients and a reduction in mortality. This finding was independent of age, since both young and older patients showed comparable percentages. There are two subclasses of macrophages, the anticancer M1 and the protumoral M2. Although in some pathological contexts these two subclasses and their respective roles are clear, in CRC opinions remain controversial. In fact, in addition to the known role of M1 macrophages, an improvement in prognosis in CRC is also reported following the in situ recruitment of M2 macrophages.
How et al. [85] investigated which macrophage subclass expresses TRAP in CRC by using double-labeling immunohistochemical analyses of CRC tissue samples. Double positivity for TRAP and CD68, a macrophage marker, revealed that macrophages express TRAP. Furthermore, double positivity for TRAP and CD163, a marker of the M2 subclass, showed that both M2 and non-M2 macrophages express TRAP. These results provide evidence that TRAP macrophage expression is associated with improved prognosis and imply that TRAP is a potential biomarker in CRC.
Runt-Related Transcription Factor 2
Runt-related transcription factor 2 (RUNX2), belonging to the RUNX heterodimeric transcription factor family, was the first osteoblast-specific transcription factor to be identified and is still considered the master gene for osteoblastogenesis. Mice knocked out for the RUNX2 gene have a cartilaginous skeleton and a complete absence of osteoblasts, irrefutably demonstrating the role of RUNX2 in osteoblastic differentiation [86]. Several pieces of evidence report a role of RUNX2 in tumorigenesis in various organs. As for CRC, Sase et al. [87] assessed the expression levels of RUNX2 in CRC tissue samples, reporting their positivity compared to normal epithelial cells and relating this result to poor patient prognosis. Moreover, in in vitro analyses of colon cancer cells, i.e., SW480 and DLD1, RUNX2 gene silencing experiments showed a reduction in the proliferation, migration and invasion of cells compared to control cells. Previous studies had evaluated the association between the estrogen receptor ERβ and RUNX2 [88], and, based on these, the authors conducted further experiments to clarify the concept. Blocking the expression of ERβ by gene silencing, or its activity with a protein-level inhibitor, reduced RUNX2 levels in vitro, as well as the proliferation of these colon cancer cells. These results make it clear that RUNX2 is an important prognostic factor for CRC and seems to be regulated, at least partially, by ERβ [87].
Transforming Growth Factor β1
During the resorption process, many factors are released from the bone matrix that direct the migration of mesenchymal stem cells (MSCs) to the newly eroded surface. Among these factors is transforming growth factor (TGF)-β1, one of the most abundant cytokines in the matrix. TGF-β1 is a cytokine involved in the regulation of numerous cell functions, ranging from growth to apoptosis. It has been shown that this factor can regulate the proliferation and differentiation of osteoprogenitor cells [89], but the possible action of TGF-β1 has also been evaluated in colon cancer cells [90]. TGF-β1 inhibits the proliferation of HT-29 cells, which show resistance to chemotherapy-induced apoptosis. Moreover, TGF-β1 weakens the expression of cyclooxygenase 2 (COX-2), a well-known inflammation-promoting enzyme that is co-opted in the tumor context. This effect is reversed by treating HT-29 cells with an ERK inhibitor, confirming that ERK mediates the effect of TGF-β1 on COX-2. To further clarify this mechanism, the authors assessed whether ERK activation was direct or indirect by evaluating the involvement of reactive oxygen species (ROS). HT-29 cells were treated with both TGF-β1 and an antioxidant substance. The findings highlighted a loss of ERK activation by TGF-β1, suggesting that TGF-β1 inhibits tumor growth in HT-29 cells by inducing apoptosis through ROS production, and that this event is upstream of ERK activation [90].
Conclusions
Colorectal cancer metastasis is a complex process with many molecular components that act as oncogenes and oncosuppressors. Numerous clinical studies have clarified that bone biomarkers are important players in CRC, as some of them correlate with cancer development and prognosis.
In this review, we pointed out that numerous osteogenic molecules regulate many pathological aspects of CRC, including the initiation of inflammation, tumoral cell transformation, metastatic capacity acquisition, epithelial-mesenchymal transition, intravasation, extravasation, mesenchymal-epithelial transition and metastasis formation.
After elucidating the molecular mechanisms that support the actions of bone biomarkers in CRC pathogenesis, new bone molecule-based therapies may be developed. Interestingly, the manipulation of endogenous bone biomarkers by administering siRNA inhibitors could be useful in modulating the expression of downstream pathways. To date, no therapies targeting these molecules have been developed to treat CRC in human clinical trials. Despite this, the use of these bone molecular factors as therapeutic targets is very promising, since they are able to regulate the course of the neoplasm.
In conclusion, although the intestinal tract and bone tissue seem to be so far from each other in terms of anatomy, embryology and physiology, they are more related than one can imagine. Several relations have been demonstrated between these two organs, implicating different molecules. The study of their molecular relations opens new horizons for diagnosis and therapies for CRC patients. | 2020-07-30T02:04:46.304Z | 2020-07-24T00:00:00.000 | {
"year": 2020,
"sha1": "0653792185c4073bf8d48ac9c561efb62fac8133",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6694/12/8/2029/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6386b1956e99f9970da50fc892148b8d7610f4a3",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
55745280 | pes2o/s2orc | v3-fos-license | A Photochemical Modelling Approach to Investigate O3 Sensitivity to NOx and VOCs in the Urban Atmosphere of Delhi
Ambient air pollutant data were used to evaluate whether O3 formation at a specific site (Sirifort) in Delhi was limited by volatile organic compounds (VOCs) or oxides of nitrogen (NOx). For this purpose, a photochemical model, OZIPR (Ozone Isopleth Plotting Research), based on a Lagrangian trajectory model, was applied to 9 ozone episodes that occurred at Sirifort from August to October 2006. Emissions data were estimated using an area-source box model. The results show that the predictions of peak O3 concentration agreed reasonably well with the observed data. O3-isopleth plots clearly reveal that O3 formation is more sensitive to VOC emissions for a lower VOC/NOx ratio, while for a higher VOC/NOx ratio, O3 formation is more sensitive to NOx emissions. However, for the purpose of practical O3 control applications at the observation site, it is concluded that VOC emissions should be reduced while keeping a lower VOC/NOx ratio.
INTRODUCTION
Tropospheric O3, a well-known secondary pollutant, is formed and sustained in the atmosphere through a chain of complex reactions taking place among NOx and VOCs in the presence of sunlight. To prepare a control strategy for O3, it is crucial to investigate O3 behavior in relation to its precursors (NOx and VOCs), which are mostly primary pollutants and which come from identifiable specific sources, such as vehicular traffic. This has led various researchers to employ different approaches in order to quantify this relationship.

For the Greater Athens Area, using an extended database of air pollution and meteorological parameters in an Urban Airshed Model (UAM), Ziomas et al. (1998) concluded that ozone abatement strategy should focus mostly on controlling VOC emissions rather than controlling NOx. Blanchard and Stoeckenius (2001) compared O3 predictions from six photochemical air-quality simulation models. For all simulations, peak ozone values increased following NOx control in 95% (median over all simulations) of the high-ozone (> 80 ppbV hourly mixing ratio in the base case) grid cells having mean afternoon O3/NOz ratios less than 5:1 and O3/NOy less than 4:1. Peak ozone levels decreased in response to NOx reductions in 95% (median over all simulations) of the grid cells having peak hourly ozone mixing ratios greater than 80 ppbV and where mean afternoon O3/NOz exceeded 10:1 and O3/NOy was greater than 8:1. Arbilla et al. (2002) used an empirical kinetic modeling approach (EKMA) in order to simulate ozone concentrations for an urban downtown area with high vehicular traffic. The agreement between experimental and simulated results was quite good; the simulated ozone peak was obtained at 3:15 p.m. (23.0 ppb). A sensitivity-uncertainty analysis was performed and hypothetical scenarios were designed to illustrate the predictive potential of air quality models. Stein et al. (2005) examined airborne measurements of sulphur dioxide (SO2), total reactive oxides of nitrogen (NOy), and O3 taken from an instrumented aircraft to assess the governing photochemical processes of ozone formation. The sensitivity of ozone to changes in its primary sources was examined by simulating scenarios with varying rates of NOx and VOC emissions. The study of Stein et al. (2003) shows that, for that particular case, the measured and modeled upwind NOx sources are more effective than VOC emissions for lowering O3. Cohan et al. (2006) applied nested grids of 36-, 12- and 4-km resolution to model an air pollution episode in Georgia, USA. A direct sensitivity analysis method, used to compute the O3/NOy ratios, seems to identify the photochemical regime governing the response of ozone to emissions of its precursors: nitrogen oxides (NOx) and volatile organic compounds (VOCs). All three grids predict that ozone production is limited primarily by the availability of NOx, and yield similar predictions of average ozone sensitivity to both regional and local emissions of NOx. The present study employs the Lagrangian trajectory model (Seinfeld and Pandis, 1988; Jang, 1999) because it allows us to study the response of the photochemistry in isolation
from transport, which complicates the analysis in an Eulerian grid model (Arbilla et al., 2002). In this context, the Ozone Isopleth Plotting Research (OZIPR) model (Gery and Crouse, 1990) was used.

OZIPR

The OZIPR simulates the complex chemical and physical processes of the lower atmosphere through the use of a Lagrangian trajectory model. The physical representation is a well-mixed column of air extending from the ground to the top of the mixed layer. This ideal air column moves with the wind (along the wind trajectory), but cannot expand horizontally. Emissions from the surface are included as the air column passes over different emission sources, and air from above the column is mixed in as the inversion rises during the day. Very complex chemical mechanisms may be input into OZIPR to describe the chemical processes that occur within the modeled air mass. OZIPR performs a specified set of simulations to calculate ozone levels at fixed intervals. This allows for the plotting of fixed ozone concentration lines (isolines) as a function of initial precursors.

INPUT REQUIREMENTS OF OZIPR

(i) Sunlight Intensity: The OZIPR program uses a city's latitude, longitude, time zone, and day of the year being modelled to generate the appropriate diurnal pattern of photolytic reaction rates. Usually no changes need be made for this set of model inputs.
(ii) Dilution: In the OZIPR model, dilution occurs as a result of the rise in atmospheric mixing height that typically occurs between early morning and mid-afternoon. The mixing height can be viewed as the top of a surface-based layer of air which is well mixed due to mechanical and thermal turbulence. Specific inputs to OZIPR include the early morning mixing height, the maximum afternoon mixing height, the time at which the mixing height rise begins, and the time at which the maximum mixing height is finally attained. Procedures for estimating the early morning and maximum afternoon mixing heights from available radiosonde measurements are outlined in EPA (1989); these were implemented using MATLAB® program codes in the present study. Radiosonde measurements pertaining to Safdarjang, New Delhi, measured with the help of the Rutherford Appleton Laboratory, Chilton, and compiled by the British Atmospheric Data Centre (BADC, 2006), were used in this study.
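As one concrete possibility for this step, the sketch below estimates the maximum afternoon mixing height with a simple parcel (Holzworth-type) method, i.e., the lowest level at which a dry adiabat through the afternoon maximum surface temperature becomes cooler than the morning sounding. This is a common textbook procedure and is shown here only as an assumption of how such an estimate can be coded; it is not necessarily the exact EPA (1989) routine used in the study, and the sounding values are invented for illustration.

```python
DRY_ADIABATIC_LAPSE = 0.0098  # K per metre

def max_mixing_height(surface_t_max_k, sounding):
    """Parcel-method estimate of the maximum afternoon mixing height.

    surface_t_max_k : afternoon maximum surface temperature (K)
    sounding        : list of (height_m_agl, temperature_k) from the morning
                      radiosonde, ordered by increasing height from the surface.
    Returns the lowest sounding level at which the dry adiabat through the
    surface maximum temperature is no warmer than the environment.
    """
    for height, env_t in sounding:
        parcel_t = surface_t_max_k - DRY_ADIABATIC_LAPSE * height
        if parcel_t <= env_t:
            return height
    return sounding[-1][0]  # parcel stays warmer up to the top of the sounding

# Hypothetical morning sounding and afternoon maximum temperature (illustrative only)
sounding = [(0, 300.0), (250, 299.5), (500, 298.9), (1000, 297.0), (1500, 296.5), (2000, 296.8)]
print(max_mixing_height(307.0, sounding))  # -> 1500 (m above ground) for these values
```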
(iii) Post-0800 Emissions: Post-0800 emissions refer to emissions occurring along the trajectory subsequent to the start of the model simulation (usually 08:00 AM). The actual inputs are expressed as emission densities (kg/km²-hr) of NMOC, NOx, and carbon monoxide (CO) that should be added each hour to account for the effect of fresh precursor emissions.

In the present study, an Area-Source Box Model (Masters, 1998) was used to estimate the mass emission rate (kg/km²-hr) from the available ambient concentration data (in µg/m³). Fig. 1 shows the map of New Delhi; a 10 km² enclosed square area around the observation site (Sirifort) is shown by a grey square on the map. An air parcel box was assumed with this area as its base, its height given by the mixing height determined by the prevailing meteorological conditions, and air was assumed to be uniformly mixed within the box. The study makes use of the steady-state solution of the area-source model to estimate the mass emission rate from the ambient pollutant concentration, given by Eq. (1):

Q = (C × u × H)/L  (1)

where Q = mass emission rate per unit area (mg/m²-s); u = average wind speed against one edge of the box (m/s); H = mixing height (m); L = length of the airshed (m); and C = pollutant concentration in the airshed (mg/m³). For this assumed box model, the hourly mass emission rates were estimated by using Eq. (1).
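A minimal sketch of this emission-rate estimate is given below. The function name and the example wind speed, mixing height and concentration are illustrative assumptions; the 3162 m airshed length corresponds to the side of a 10 km² square like the one assumed around the monitoring site, and the unit conversion to kg/km²-hr follows directly from the definitions above.

```python
def box_model_emission_rate(c_ug_m3, u_m_s, h_m, l_m):
    """Steady-state area-source box model, Eq. (1): Q = C*u*H/L.

    c_ug_m3 : ambient pollutant concentration (ug/m^3)
    u_m_s   : average wind speed against one edge of the box (m/s)
    h_m     : mixing height (m)
    l_m     : length of the airshed (m)
    Returns the mass emission rate per unit area in kg/(km^2*hr).
    """
    q_ug_m2_s = c_ug_m3 * u_m_s * h_m / l_m            # ug/(m^2*s)
    # 1 ug/(m^2*s) = 1e-9 kg/(m^2*s) * 1e6 m^2/km^2 * 3600 s/hr = 3.6 kg/(km^2*hr)
    return q_ug_m2_s * 3.6

# Hypothetical inputs: C = 50 ug/m^3, u = 2 m/s, H = 500 m, L = 3162 m (10 km^2 square)
print(round(box_model_emission_rate(50.0, 2.0, 500.0, 3162.0), 1))  # ~56.9 kg/(km^2*hr)
```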
(iv) Ozone Transport: The two possible mechanisms by which ozone is transported into an urban area are: (1) advection of ozone along the earth's surface, and (2) advection of ozone aloft, typically at night and during early morning hours, with downward mixing when the mixing layer increases later in the day. Ozone transported at the surface is subject to surface reactions and scavenging by other species (e.g., NO) emitted during the night. As a result of night-time atmospheric stability, ozone transported aloft does not come into contact with scavengers emitted during the night; thus, overnight advection of ozone aloft is the more significant mechanism of transport from one urban area to another (EPA, 1989). This study mainly simulated the model for daytime only, when the ozone aloft mechanism is less significant, so ignoring ozone aloft values does not change the final result appreciably. This was checked against data for some of the well-tested US cities provided by EPA (2004); the change in the simulated result was not more than ±5%. Hence, only surface ozone values were used in the present study, partly due to the lack of ozone aloft data.
(v) Precursor Transport: Just as for ozone, precursor pollutants (NOx, VOCs) could be transported in both the surface layer and aloft. However, outside of urban areas, the surface layer is expected to be very shallow. Thus, long-range transport of precursors in the surface layer may not be significant. Again, in the present study, for the reasons mentioned above, only surface precursor values have been used.
This study selects the CB-4 mechanism, as it uses a highly condensed method to represent the reactions of VOCs, with the goal of predicting ozone from ambient mixtures as accurately as possible. It has been evaluated against a large number of environmental chamber experiments (Gery et al., 1988).
For the purpose of carbon-fractionation, annual mean concentration data of VOCs for residential areas in Delhi pertaining to Srivastava (2005) have been employed.In the Srivastava (2005) study, the samples were collected in 2001 and analyzed to estimate a number of VOCs.Although there is a significant time gap between Srivastava (2005) and the present study, it can be a good representative for the purpose of estimating carbon fractions as per EPA (1989) recommendation.We have also compared this carbon-fraction estimation with the available CPCB hourly VOCs data at Sirifort from August to October 2006, which consists of only 5 VOCs.Within the range of ±0.05, o-Xylene, m,p-Xylene, Toluene, Benzene and Ethyl-benzene show an agreement with the carbon-fraction estimation based on Srivastava (2005).The small discrepancy shown here may also be due to the lack of other VOC data at Sirifort.Hence, the present study preferably uses carbon-fraction estimation based on Srivastava (2005) which is also more representative of VOC variability in Delhi, owing to its extensive broad-based sampling, monitoring and analysis using GC-MS techniques.
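To make the carbon-fraction step concrete, the sketch below computes the fraction of total carbon contributed by each measured VOC species from its mixing ratio and carbon number. The species names match those monitored at Sirifort, but the mixing-ratio values and the helper structure are illustrative assumptions, not the study's data.

```python
# Hypothetical VOC mixing ratios (ppb) and carbon numbers; not the monitored values.
species = {
    "benzene":      {"ppb": 2.0, "n_carbon": 6},
    "toluene":      {"ppb": 3.5, "n_carbon": 7},
    "ethylbenzene": {"ppb": 0.8, "n_carbon": 8},
    "o-xylene":     {"ppb": 1.0, "n_carbon": 8},
    "m,p-xylene":   {"ppb": 1.7, "n_carbon": 8},
}

# Carbon contribution of each species in ppbC = mixing ratio * number of carbon atoms.
ppbc = {name: s["ppb"] * s["n_carbon"] for name, s in species.items()}
total_ppbc = sum(ppbc.values())

# Carbon fraction of each species relative to the total measured VOC carbon.
carbon_fraction = {name: round(value / total_ppbc, 3) for name, value in ppbc.items()}
print(carbon_fraction)
```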
(vii) Temperature: Hourly temperature data must be utilized in OZIPR. The use of hourly temperatures allows reaction rates to be increased or decreased according to the hourly temperature. In the present work, hourly temperature data pertaining to Safdarjang, New Delhi, kindly provided by Nautica Editrice Srl (1995-2006), were used.
(viii) Water Vapor: Ozone predictions are also sensitive to the amount of atmospheric moisture content.OZIPR estimates atmospheric moisture content using relative humidity values and ambient pressure level.
Hourly values of relative humidity were used for Safdarjang, New Delhi (provided by Nautica Editrice Srl, 1995-2006).
(ix) Biogenic Emissions: OZIPR also provides an option for biogenic emissions, such as isoprene, α-pinene, monoterpenes, and unknowns. EPA has prepared a computer program to estimate biogenic emission rates on a county basis for the USA. This program estimates biogenic emissions on the basis of day-specific meteorological parameters. However, in the absence of such data and facilities, EPA (1989) recommends simply setting them to zero, which was followed in the present case.
OZONE EPISODES, NOx, VOCs AND METEOROLOGICAL CONDITIONS
A high ozone episode is a condition in which the O3 concentration exceeds a certain threshold, thereby posing a threat to human health. The World Health Organization has prescribed a standard of 100 µg/m³ (8-hour average) for ambient O3 concentration (WHO, 2006). In the present study, conditions were considered high ozone episodes when the ambient O3 concentration repeatedly exceeded the 100 µg/m³ level (Fig. 2). In Fig. 2, encircled conditions depict the ozone episodes that occurred at Sirifort, New Delhi on various occasions. A total of 9 episodes from August to October 2006 were identified for the purpose of the present study. The corresponding NOx and VOCs concentrations are shown in Figs. 3a and 3b. The observed meteorological conditions during the study period, such as temperature, relative humidity and wind speed, are shown in Figs. 4, 5 and 6.

MODEL AND SCENARIO CONDITIONS

The models were evaluated by comparing the model-predicted peak O3 with the observed peak O3 concentrations. The observed vs. simulated peak O3 concentrations for all the ozone episodes taken together are shown in Fig. 7. In general, peak O3 concentration has either been over- or under-predicted by the model. However, the mean absolute percentage error (MAPE) of observed vs. simulated peak O3 concentration varied from 15.2% on one occasion to 26.7% on another. The MAPE for all the ozone episodes taken together is 21.3%. These MAPE values are well within the limit of 30% prescribed by EPA (1989) for OZIPR. Hence, the models can be adjudged sufficient to proceed with control estimate calculations.
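For reference, the error metric used for this comparison can be computed as sketched below; the function and the example peak values are illustrative only and are not the study's observations.

```python
def mape(observed, simulated):
    """Mean absolute percentage error (%) between observed and simulated peak values."""
    pairs = list(zip(observed, simulated))
    return 100.0 * sum(abs((obs - sim) / obs) for obs, sim in pairs) / len(pairs)

# Hypothetical peak O3 values (ppm), for illustration only.
observed_peaks = [0.12, 0.10, 0.15]
simulated_peaks = [0.10, 0.12, 0.13]
print(round(mape(observed_peaks, simulated_peaks), 1))  # compare against the 30% limit used above
```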
Each O3-isopleth diagram can be read in terms of two regions: a region that is VOC-limited, where VOC reductions are more effective for reducing O3, and a region that is NOx-limited, where NOx reductions are more effective for reducing O3. Fig. 8 shows that for a NOx value above 0.04 ppm, O3 production is more sensitive to the amount of VOCs than to the amount of emitted NOx (VOC-limited regime), while for a NOx value below 0.04 ppm, O3 production is more sensitive to the amount of NOx emission than to the amount of emitted VOCs (NOx-limited regime). An examination of Fig. 9 reveals that O3 formation is more sensitive to NOx for a higher VOC/NOx ratio, while for a lower VOC/NOx ratio, O3 formation is more sensitive to VOCs. Fig. 10 also exhibits similar characteristics, with the difference that the isopleths are relatively closer, indicating a greater sensitivity. In Fig. 11, the O3-isopleths are much closer, and for a NOx value above 0.04 ppm, O3 production is more sensitive to the amount of VOCs than to the amount of emitted NOx (VOC-limited regime), while for a NOx value below 0.04 ppm, O3 production is more sensitive to the amount of NOx than to the amount of emitted VOCs (NOx-limited regime). The O3-isopleths in Figs. 12, 13 and 14 show a marked similarity; the isopleths are shifted towards the relatively higher VOC range, and in these cases they lie within a limited range of NOx concentration (up to 0.10 or 0.15 ppm). Figs. 15 and 16 also show a greater sensitivity of O3 formation to VOCs for a lower VOC/NOx ratio, while for a higher VOC/NOx ratio, O3 formation is more sensitive to NOx.

In all the cases except Fig. 8, it is remarkable to note that O3 formation is relatively insensitive to VOCs for a very high NOx concentration (say, > 0.26-0.27 ppm).
Normally, ambient NO x concentration in Delhi varies within the range of 0.01-0.15ppm and very rarely exceeds 0.20 ppm (www.cpcb.nic.in).Therefore, the study of O 3 sensitivity to NO x , as well as to VOCs is important while delineating the conditions for controlling tropospheric ozone in the ambient environment of Delhi.In the present emission scenario, the analysis of the observed ozone episodes reveals that O 3 formation is more sensitive to VOC for lower VOC/NO x ratio, while for higher VOC/NO x ratio it is more sensitive to NO x ; although the range and threshold of NO x or VOC concentration (for O 3 sensitivity to VOC/NO x ratio) may vary from one episode to the other.These ranges and thresholds seem to depend upon the meteorological conditions, as one may notice that O 3 -isopleths of nearby days are more similar to each other than days which are further apart.For instance, O 3 -isopleths for 25 Sept,29 Sept and 4 Oct (Figs. 12,13 and 14) are very similar in nature, as well as in terms of range and threshold of NO x and VOCs concentration (for O 3 sensitivity to VOC/ NO x ratio).In the same way, O 3 -isopleths for 17 Oct and 21 Oct (Fig. 15 and 16) are similar in these characteristic terms.The typical behavior of O 3 -isopleths in Fig. 8 (13 Aug) shows that for lower VOC/NO x , O3 formation is greater, while a similar amount of O3-formation required much higher VOC/NO x ratio on the other days.This may be due to the fact that the early morning of 13 th August witnessed several thunderstorms (Nautica Editrice Srl, 1995-2006) and consequently higher O 3 concentrations at the start of the day.This might be the reason that even for low VOC/NO x ratio, O 3 -production seems to be higher on that day.Figs. 8 to 16 can be useful in the regulatory control of ozone.If VOC/NO x ratio is relatively high (> 1) and NO x -emission is within a certain range, the higher the VOC concentration, the higher the O 3 production.
If, for a similar VOC/NOx ratio, the NOx emission is below the lower threshold value of the range, the higher the NOx emission, the higher the O3 production. On the contrary, if the VOC/NOx ratio is relatively low (< 1) and the NOx emission is above the lower threshold value of the range, O3 production is sensitive to VOC concentrations, while for a similar VOC/NOx ratio and a NOx emission below the lower threshold value of the range, O3 production is more sensitive to the NOx emission. Therefore, the most effective way to reduce the O3 levels (or at least not to increase them) would be to reduce VOC emissions while keeping a relatively low VOC/NOx ratio.
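The control rule summarized in the two preceding paragraphs can be expressed as a simple threshold test. The sketch below is only an illustration of that reading of the isopleths; the 0.04 ppm NOx threshold is the value quoted for Figs. 8 and 11, and in practice the thresholds shift from episode to episode.

def o3_sensitivity(nox_ppm, nox_threshold=0.04):
    """Classify the O3 production regime from the emitted NOx level.

    Above the threshold, O3 responds mainly to VOC changes (VOC-limited regime);
    below it, O3 responds mainly to NOx changes (NOx-limited regime).
    """
    return "VOC-limited" if nox_ppm >= nox_threshold else "NOx-limited"

for nox in (0.02, 0.06, 0.12):
    print(nox, "ppm ->", o3_sensitivity(nox))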
It is worth mentioning here that studies on O3, NOx and VOC variation in the ambient urban environment of Delhi have so far been rather limited (e.g., Varshney and Aggarwal, 1992; Singh et al., 1997; Padhy and Varshney, 2000; Srivastava, 2005; Chelani and Devotta, 2006). Varshney and Aggarwal (1992) and Singh et al. (1997) deal with the seasonal variation of O3 in Delhi's atmosphere, while the studies of Padhy and Varshney (2000) and Srivastava (2005) pertain to VOC variability. Chelani and Devotta (2006) forecast NO2 concentrations using a hybrid model. However, no reported studies so far deal with the interaction of O3, NOx and VOCs, or with O3 sensitivity to NOx and VOCs, in the context of Delhi. We expect this study to provide an insight into the nature of ozone episodes, their sensitivity to NOx and VOCs, and possible control measures in the urban atmosphere of Delhi.
CONCLUSIONS
A photochemical modeling approach was employed to examine the relationship between O3 levels and the concentrations of NOx and VOCs, aimed at suggesting an ozone abatement strategy for an urban area of Delhi. Ozone formation was found to be more sensitive to VOC emissions for a lower VOC/NOx ratio. For a higher VOC/NOx ratio, O3 formation is more sensitive to NOx emissions. However, for practical O3 control applications, it is concluded that VOC emissions should be reduced while keeping a lower VOC/NOx ratio in order to reduce the ambient O3 levels at the observation site.
In Fig. 2, the encircled conditions depict the ozone episodes that occurred at Sirifort, New Delhi on various occasions. A total of 9 episodes from August to October 2006 were identified for the purpose of the present study. The corresponding NOx and VOC concentrations are shown in Figs. 3a and 3b. The observed meteorological conditions during the study period, such as temperature, relative humidity, and wind speed, are shown in Figs. 4, 5 and 6.
Fig. 2 .
Fig. 2. Hourly average concentration of O3 during the period of 3 August to 21 October 2006 at Sirifort.
Fig. 3 .
Fig. 3. Hourly average concentration of (a) NOx, and (b) VOCs, during the period of 3 August to 21 October 2006 at Sirifort.
Fig. 10
Fig. 10 also exhibits similar characteristics, with the difference that the isopleths are relatively closer, indicating a greater sensitivity. | 2018-12-07T12:33:50.958Z | 2008-06-01T00:00:00.000 | {
"year": 2008,
"sha1": "853af81cc08849ddade6dc4265d73c2aac17583c",
"oa_license": "CCBY",
"oa_url": "https://aaqr.org/articles/aaqr-07-09-oa-0037.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "853af81cc08849ddade6dc4265d73c2aac17583c",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
240951507 | pes2o/s2orc | v3-fos-license | Hypothyroidism-Related Cardiac Tamponade
Thyroid dysfunction is a common incidental finding among healthy individuals. It can affect various organs of the body, including the heart. Among many other cardiac complications, it can lead to pericardial effusion by increasing the permeability of the pericardial membrane to albumin, which results in an exudative pericardial effusion. In hypothyroidism, the fluid collection process occurs over a period of months, giving the pericardial membrane enough time to stretch and accommodate the fluid without causing any symptoms. Eventually, the pericardial membrane stretches to its maximum capacity and has no room to accommodate any more fluid, resulting in cardiac tamponade. Patients with hypothyroidism-related cardiac tamponade usually remain asymptomatic or present with atypical findings such as bradycardia and normal or even high blood pressure, and the diagnosis comes to light only when patients present to the hospital with hemodynamic instability. In these cases, echocardiography successfully detects a large pericardial effusion with collapsed cardiac chambers. In treating hypothyroidism-related cardiac tamponade, addressing the underlying condition has been very successful in the majority of asymptomatic patients, but pericardiocentesis is required in emergencies to relieve the symptoms of patients presenting with hemodynamic instability. We believe hypothyroidism-related cardiac tamponade is a preventable condition if detected and treated in outpatient settings by family physicians. This will prevent the occurrence of various complications arising from hypothyroidism, including pericardial effusion, and will lead to a better quality of life among patients, with the added benefit of a reduced health care burden due to a reduced frequency of hospital admissions of acutely ill patients.
Introduction And Background
It is suggested that about 4% to 10% of the general population is affected by hypothyroidism [1]. Low thyroid hormone can affect almost every organ system of the body, leading to various clinical presentations ( Figure 1). In the cardiovascular system (CVS), thyroid hormone plays a vital role in its development and function, mainly mediated by Triiodothyronine (T3). T3 causes increased tissue oxygen consumption (tissue thermogenesis), increased force of systolic contraction, and diastolic relaxation, along with an overall reduction in vascular resistance [2]. A lack of T3 in hypothyroidism leads to cardiovascular manifestations such as diastolic hypertension, sinus bradycardia, pericarditis, dyslipidemia, and pericardial effusion [3]. On physical examination, cardiovascular symptoms such as dyspnea, muffled heart sounds, non-pitting or pitting edema, and decreased exercise tolerance may be noted [4].
FIGURE 1: Clinical manifestations of Hypothyroidism
The structure that lines the heart and the proximal vessels is known as the pericardium. It is made up of two relatively avascular layers, the parietal and visceral pericardium, separated by a space that contains up to 50 ml of fluid, known as the pericardial fluid [5]. Pericardial effusion is a condition that develops when there is increased accumulation of fluid in the pericardial cavity, which can occur from various underlying pathologies such as a tumor, uremia, trauma, cardiac surgery, and hypothyroidism. In hypothyroidism, the incidence of pericardial effusion is about 3% to 6% in mild cases as compared to about 30% to 80% in severe cases [6]. Typically, the pericardium is impermeable to proteins, whereas in hypothyroidism there is a rise in pericardial permeability and reduced lymphatic drainage of albumin, which causes leakage of proteins into the pericardial space and accumulation of fluid there. Thus, the idea of an exudative pericardial effusion is supported [5]. As the fluid accumulation is very slow, the heart has enough time to stretch and adapt to the change, allowing significant fluid accumulation without any hemodynamic compromise [7]. Although a larger volume of fluid can be accommodated in the pericardial space due to slow collection, at some volume the pressure-volume curve shows a steep rise in pressure, leading to tamponade physiology. Typical acute cardiac tamponade is a clinical diagnosis with signs of hypotension, jugular vein distention, and muffled heart sounds, known as Beck's triad, usually accompanied by tachycardia. Contrary to this, hypothyroid patients present with bradycardia and normal or even high blood pressure. Pericardiocentesis is only required in hemodynamically unstable patients. However, hypothyroidism-related pericardial effusion may have echocardiographic evidence of tamponade [8,9]. This article will review the pathophysiology of cardiac tamponade in patients suffering from hypothyroidism and how it differs from cardiac tamponade occurring from other etiologies. We will also discuss the importance of early diagnosis and treatment of hypothyroidism to prevent life-threatening complications that may arise from severe disease.
Pathophysiology of hypothyroidism associated pericardial effusion
Physiologically, pericardial fluid is formed by ultrafiltration that occurs at the pericardial capillaries. Normally, the hydrostatic pressure is higher at the arteriolar end than at the venular end, and the colloid osmotic pressure created by the plasma proteins is essentially the same at both ends. Thus, most of the fluid gets reabsorbed at the venous end, and some of the retained fluid is drained out via the lymphatics [5]. Typically, the pericardial space contains about 10 ml to 50 ml of pericardial fluid that acts as a lubricant between the pericardial layers. Any underlying condition can end up causing inflammation inside the pericardial cavity, leading to increased fluid accumulation inside the cavity and thus causing pericardial effusion [10]. In hypothyroidism, there is a low plasma volume, high vascular permeability, lower synthesis and catabolism rates of albumin, and a prolonged passage time through the extravascular spaces, causing an increased albumin mass in the extravascular space [7]. This high vascular permeability to albumin increases the interstitial colloid osmotic pressure. As the colloid osmotic pressure gradient between the interstitial and the intravascular space falls, fluid return to the capillaries at the venous end is reduced. In addition, poor albumin drainage into the lymphatics worsens the process, causing fluid retention inside the pericardial cavity [5] (Figure 2).
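The capillary fluid balance described above is conventionally summarized by the Starling filtration relation. The source does not write the equation out, so the standard textbook form is added here only to make the pressure terms explicit:

J_v = K_f\left[(P_c - P_i) - \sigma(\pi_c - \pi_i)\right]

where J_v is the net filtration rate, K_f the filtration coefficient, P_c and P_i the capillary and interstitial hydrostatic pressures, \pi_c and \pi_i the corresponding colloid osmotic pressures, and \sigma the protein reflection coefficient. Leakage of albumin into the interstitium raises the interstitial colloid osmotic pressure and shrinks the effective osmotic gradient, so less fluid is reabsorbed at the venous end; together with reduced lymphatic drainage, this favors slow fluid accumulation in the pericardial space.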
Clinical presentation of hypothyroidism associated pericardial effusion
Hypothyroidism can cause various cardiac manifestations, such as bradycardia, atrial fibrillation, diastolic hypertension, varying degrees of atrioventricular (AV) block, a prolonged QT interval leading to torsades de pointes, accelerated coronary artery disease, and pericardial effusion [7]. Clinically, hypothyroidism-associated pericardial effusion can vary from being asymptomatic to presenting as cardiac tamponade, in which patients present with hemodynamic compromise [11]. Pericardial effusion can be classified based on onset (acute, subacute, and chronic), composition (transudative or exudative), and size, i.e., mild (<10 mm), moderate (10-20 mm), or large (>20 mm) [10]. The severity of pericardial effusion depends on the severity of the disease. Thus, pericardial effusion is frequently found in myxedema, an advanced stage of hypothyroidism, and is rarely related to mild disease [12]. The severity of the pericardial effusion also depends on the speed of fluid accumulation in the pericardial sac. If it collects over a short period, such as after trauma, the clinical presentation is dramatic, i.e., even small amounts of blood over a short period can cause high pressure inside the pericardial cavity, causing hemodynamic instability. A slow accumulation of fluid, as in hypothyroidism, allows enough time for the pericardial membrane to stretch and accommodate the fluid. Thus, it takes weeks to months before symptoms start to appear, making cardiac tamponade a rare presentation in hypothyroidism [10]. The literature suggests that about half of all patients with a chronic large pericardial effusion are asymptomatic [11]. When present, symptoms may include dyspnea (61.1%), cough (25%), and chest pain (13.9%) in the setting of clinical features of severe hypothyroidism, including lethargy, facial swelling, dry skin with non-pitting edema, and delayed relaxation of deep tendon reflexes. To evaluate these patients, a chest X-ray and an electrocardiogram (ECG) are obtained. The chest X-ray shows an enlarged cardiac silhouette, and the ECG shows changes such as low voltage, sinus bradycardia, flattened or inverted T waves, and a prolonged QTc, exposing the patient to a high risk of developing torsades de pointes, ventricular tachycardia, and, rarely, AV block. Abnormalities in the chest X-ray and the ECG are further investigated by an echocardiogram [8,10]. As per the guidelines, pericardial effusion should be managed by treating the underlying etiology as much as possible [10].
Clinical presentation of hypothyroidism associated cardiac tamponade
Pericardial tamponade, also known as cardiac tamponade, is a condition in which an abnormally large amount of fluid accumulates in the pericardial cavity, causing increased intrapericardial pressure, which is then transmitted to the cardiac chambers, thus reducing cardiac filling. Although the pericardium has some degree of elasticity and can accommodate some fluid, as in pericardial effusion, without clinical symptoms, once this limit is reached it starts to compress all the cardiac chambers. Since the right atrium is thin-walled, it is not only the most vulnerable to compression by the pericardial fluid, but its increased pressure also affects the veno-atrial gradient that determines cardiac filling [13]. This causes hemodynamic compromise, creating a shock-like state in the body. Pericardial tamponade is a clinical diagnosis that can be recognized by three signs, known as Beck's triad: hypotension, jugular venous distention, and muffled heart sounds, along with pulsus paradoxus. On investigation, the ECG shows low-voltage QRS complexes and electrical alternans due to the damping effect of the pericardial fluid and the swinging of the heart inside the fluid. Echocardiography is considered the single most useful diagnostic tool to identify the size and location of the effusion and the degree of hemodynamic compromise. Cardiac tamponade is a medical emergency that needs quick removal of fluid from the pericardial cavity, which is done with pericardiocentesis [14] (Figure 3).
FIGURE 3: Clinical features of Hypothyroidism-related cardiac tamponade versus typical cardiac tamponade
On the contrary, cardiac tamponade may present differently in patients with a history of hypothyroidism. To analyze the management of hypothyroidism-related cardiac tamponade, we collected case reports of eight different patients. The patients in Table 1 were treated medically with levothyroxine. On the other hand, the patients in Table 2 required pericardiocentesis. Baldwin et al. [8] compared three unique cases of cardiac tamponade in patients with a history of hypothyroidism. All of them were successfully treated with levothyroxine without invasive interventions. Chou et al. [15] reported a patient with a history of laryngectomy who was suspected of having hypothyroidism due to his history of neck surgery. Once the diagnosis was confirmed, his massive pericardial effusion was successfully treated medically with oral thyroxine. Although hypothyroidism-related cardiac tamponade can be managed medically by treating the underlying hypothyroidism, pericardiocentesis may be required in some cases. Wang et al. in 2010 studied 36 patients in total, of whom only eight (22.2%) had both clinical and echocardiographic findings of tamponade and were treated by pericardiocentesis [16]. We describe four cases of hypothyroidism-related cardiac tamponade that required pericardiocentesis in order to stabilize the patient. Once stabilized, the patients were managed medically with levothyroxine to prevent recurrence of the pericardial effusion (Table 2). Once hemodynamic stability is obtained, the patient should be further investigated to look for the underlying cause and prevent recurrence. In these cases, the pericardial fluid analysis showed yellow fluid without any bacteria or malignant cells [9,12,17,18]. Alexander first described this in 1919 as a pericardial effusion with a "gold-paint" appearance due to the presence of cholesterol in the fluid, with no bacteria in it [19].
Hypothyroidism presenting with hypertensive emergency and intracranial hemorrhage
Hypothyroidism may also present as a hypertensive crisis, which is an infrequent complication but can happen. Chui et al. [2] reported a 20-year-old male patient who presented with a one-month history of intermittent blurred vision, which was getting worse. His vitals were as follows: BP 224/140 mm Hg, HR 62 beats per minute. On cardiovascular (CVS) examination, the JVP was not elevated, and no pulsus paradoxus or cardiac murmurs were present. Neurological examination showed bitemporal hemianopsia, and fundoscopic examination of the eyes showed papilledema and hypertensive retinopathy. On investigation, neuroimaging and ECG findings were unremarkable. An echocardiogram showed a large circumferential pericardial effusion. Laboratory work showed hypothyroidism, and he was started on high-dose thyroid replacement and antihypertensive agents for his hypertension. However, he became short of breath, and pericardiocentesis was performed to relieve his symptoms. After he became stable, he was discharged on oral thyroid replacement therapy. At the follow-up appointment, his high blood pressure had resolved, with normalization of the echocardiogram findings. Hwang et al. [20] reported a 46-year-old female who presented to the hospital with left arm weakness and dysarthria that had started 30 minutes earlier. On general examination, the patient had a puffy face and generalized edema. The cardiovascular examination was unremarkable. Her vitals were as follows: BP 213/124 mm Hg, HR 60 beats per minute. Neurological examination showed dysarthria and weakness of the left arm of motor grade 1-2. On investigation, cranial CT showed an intracranial hemorrhage. The chest X-ray showed a "water bottle" sign, indicating cardiomegaly, and the transthoracic echocardiogram showed a large amount of pericardial fluid. She was being managed for her high blood pressure to prevent worsening neurological symptoms, but she developed shortness of breath and reduced awareness. Urgent pericardiocentesis was performed to relieve her symptoms. On further investigation, her TSH was elevated. She was started on Synthroid along with antihypertensive agents. On follow-up after six months, her dose of antihypertensive medication was reduced owing to normalization of her thyroid hormone level. It was concluded that severe hypothyroidism may present with hypertensive crisis and hemorrhage, and that these life-threatening situations can be well prevented by managing the patient's underlying medical condition.
In this review article, we studied eight patients thoroughly from the literature to draw our conclusions on how life-threatening cardiac tamponade can be prevented in patients with hypothyroidism. To check the accuracy of this study, we encourage future researchers to conduct larger observational studies and to develop recommendations for the regular follow-up of hypothyroidism patients, as well as strategies to create awareness about the nature of the condition among patients.
Conclusions
As hypothyroidism can cause symptoms in almost all body systems, patients may present with vague symptoms, because of which the diagnosis may be missed in many patients. Additionally, there are no recommendations for screening the general population for thyroid dysfunction, which together contribute to delayed diagnosis. It is evident from the above-mentioned examples that patients in whom the diagnosis of hypothyroidism was missed ended up needing pericardiocentesis in the hospital. It should also be noted that thyroid dysfunction can present as a worsening of a patient's chronic condition, such as diabetes, chronic heart failure, depression, or fatigue. Thus, thyroid function should be tested as part of such patients' routine blood work. Once diagnosed, patients should be regularly followed up by their family physician during treatment and thereafter to confirm that they remain euthyroid. The frequency of follow-up may vary depending on various factors such as the severity of disease, the presence of multiple comorbidities, and compliance with medication, in which case patients should be seen more frequently and assessed for referral to a specialist. Emphasis should also be placed on educating patients about the nature of their disease and the complications that may arise if it is left untreated. In this way, a family physician can play a huge role in significantly reducing hypothyroidism-related cardiovascular complications and can help prevent hospital admissions among these patients.
Conflicts of interest:
In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work. | 2021-10-15T15:51:21.182Z | 2021-10-01T00:00:00.000 | {
"year": 2021,
"sha1": "5b3fea34aa000fb12f94fe700886f5a27496c7d1",
"oa_license": "CCBY",
"oa_url": "https://www.cureus.com/articles/63248-hypothyroidism-related-cardiac-tamponade.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "886ce97284346d1a2aa043c4da3c1987b37e6ae3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
134916683 | pes2o/s2orc | v3-fos-license | Distribution and Characteristics of Nanotubular Halloysites in the Thach Khoan Area, Phu Tho, Vietnam
Two types of halloysite collected from the upper (UPS) and lower (LOS) zones of a weathered pegmatite profile in the Thach Khoan area, Phu Tho, were characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM), transmission electron microscopy (TEM), Fourier transform infrared spectroscopy (FT-IR), thermal analysis (TG and DTG), and N2 adsorption-desorption isotherms. XRD analysis showed that halloysite and kaolinite coexist in the <2 μm size fractions of the samples. Semi-quantitative analysis by XRD after formamide (FA) treatment indicated that the halloysite contents are approximately 81% and 93% in the UPS and LOS samples, respectively. The results of the SEM and TEM analyses showed that the short halloysite type is mainly distributed in the upper zone, while the long halloysite type occurs primarily in the lower zone of the weathered pegmatite profile. Short halloysite lengths of 250 to 750 nm are the most common, accounting for 47.2% of the halloysites in the UPS sample, whereas long halloysites with lengths of 750-1250 nm are dominant in the LOS sample, at 69.9%. In addition, short halloysites with an outer diameter of >100 nm constitute 79.1% of the halloysites in the UPS sample, while long halloysites with an outer diameter of 50-100 nm make up 74.2% of the halloysites in the LOS sample. The specific surface areas are 15.7434 and 22.0211 m2/g, and the average pore sizes are 18.9837 and 17.0281 nm, for the UPS and LOS samples, respectively. The analysis implies that although they formed under the same geographical and climatic conditions, halloysites at different depths in the weathered pegmatite profile may have different morphological and other properties.
Introduction
Halloysite was originally described as a 1:1 layered aluminosilicate mineral of the kaolin group by Berthier [1]. The chemical composition and structure of halloysite are similar to those of the other kaolin-group minerals (kaolinite, nacrite, and dickite), but the unit layers of halloysite are separated by a monolayer of water molecules [2-4]. Halloysite appears mainly in two different polymorphs, with the chemical formula Al2Si2O5(OH)4·2H2O when fully hydrated and Al2Si2O5(OH)4 when dehydrated [5].
Halloysite can be found in a variety of particle morphologies, such as short-tubular, long-tubular, spheroidal, and platy shapes [6]. However, the nanotubular morphology is the most common shape of halloysite. The tubular shape can be considered as rolled kaolin sheets with an inner diameter of about 10-30 nm, an outer diameter of 30-50 nm, and a length of 100-2000 nm [7,8]. The interior surface of halloysite is composed of siloxane (Si-O-Si) groups, while the exterior is a gibbsite-like array of aluminol (Al-OH) groups [9,10].
Halloysite deposits have been discovered and exploited in different countries, such as New Zealand, the United States, Australia, China, Brazil, and Turkey [11]. This mineral can be formed both by the weathering of igneous rocks and by their hydrothermal alteration [12-19]. For instance, the Matauri Bay (New Zealand) halloysite deposit was formed by low-temperature hydrothermal alteration of rhyolite and dacite volcanic rocks [20]. The large mass of halloysite at the Dragon Mine (UT, USA) was formed by irregular replacement of Early Paleozoic dolomite rock in contact with hydrothermal fluids channeled along the Dragon Fissure Zone [21]. Halloysite at Te Puke is a weathering product of rhyolite and andesite volcanic rocks in the Bay of Plenty, New Zealand [22]. The above literature and other studies have shown that halloysites from different areas also have different morphological and physicochemical properties [7,23-25].
In recent years, owing to its superior properties, such as its tubular structure, non-toxicity, large surface area, high mechanical strength, and lower cost compared to carbon nanotubes, halloysite has attracted considerable attention from scientists, and many new possibilities for its application have emerged [8,26-34]. However, in many cases, differences in the morphology, size, and other properties of halloysites may have certain impacts on their applicability in practice. For instance, Makaremi et al. [35] used two different types of halloysite nanotubes to improve the properties of apple pectin bionanocomposites as potential films for food packaging applications. The results indicated that the short halloysites, with 50-3000 nm length and 50-200 nm outer diameter, had a better ability for the encapsulation of salicylic acid into their lumen, while the long halloysites, with 200-30,000 nm length and 40-55 nm outer diameter, made the encapsulation process more difficult. Zheng and Ni [36] prepared an efficient flame-resistant composite using pentaerythritol-loaded halloysites for a UV-curable epoxy resin. In that study, the halloysites had lengths of 300-1000 nm, outer diameters of 50-70 nm, and a BET surface area of 36.40 m2/g. The obtained composite showed low moisture absorption and good stability of its mechanical properties. Pasbakhsh et al. [37] studied the properties of several halloysites from around the world and suggested orientations for their applications. For example, long-tubular halloysites with 200-5000 nm length and 40-55 nm outer diameter are very suitable for use both as an additive and as a carrier, while halloysite tubes showing a wide variation in size may be well suited as microfiber fillers. Thus, it can be seen that studying the properties of halloysites from different deposits, or even within a single deposit, is necessary before using them for different applications.
This study aims to examine the distribution and characteristics of two types of halloysite nanotubes from a weathered pegmatite profile in the Thach Khoan area, Phu Tho Province. Different characteristics of these halloysites were determined using X-ray diffraction (XRD), scanning electron microscopy-energy dispersive X-ray spectroscopy (SEM-EDS), transmission electron microscopy (TEM), Fourier transform infrared spectroscopy (FT-IR), thermal analysis (TG), and N2 adsorption-desorption isotherms. The results show that halloysites from different depths of the weathered pegmatite in the study area have different morphological properties. This information is useful for understanding the distribution and characteristics of the halloysites in the deposit and for exploiting and using these nanotubular minerals effectively.
General Geological Setting of Study Area
The study area has many pegmatite bodies of different sizes related to the Late Paleozoic Tan Phuong granite Complex [38]. The rocks surrounding the pegmatite bodies belong to the metamorphic Thach Khoan formation of Proterozoic age (Figure 1). The composition of this formation consists mainly of mica quartz schist, mica schist, staurolite-bearing quartz, disten, sillimanite, and garnet. The pegmatite bodies have a strike of 60° N-80° W, dipping to the southwest with a slope of 50°-80°. They vary from several hundred up to thousands of meters in length and from tens to hundreds of meters in width. All pegmatite bodies have a similar weathering profile, with an upper brown-yellow zone (15-20 m), a middle pink zone (5-10 m), and a lower white to light orange zone (5-15 m).
Samples
A typical outcrop about 40 m high, located at 21°11′31″ N and 105°15′07″ E, was prepared for sampling. For comparison purposes, two samples were collected separately. The first sample, called the UPS sample, was taken from the upper zone, and the second, the LOS sample, from the lower zone of the weathered pegmatite profile. The samples were taken from the top down, perpendicular to the weathering layers. The separated samples were mixed homogeneously before being used for the further steps.
The bulk samples were first dispersed in deionized water by repeated ultrasonic vibration. A portion of the <2 µm clay fraction was obtained using the decantation method. The clay fractions were then freeze-dried and examined by the different analyses.
Characterization
X-ray diffraction (XRD) patterns of the samples were collected using a D8-Advance Bruker diffractometer (Bruker Corporation, Billerica, MA, USA) with CuKα radiation (λ = 1.5406 Å) generated at 40 kV and 40 mA. The data were collected in the Bragg angle (2θ) range of 3°-70° with a scanning speed of 2° min−1. Minerals were identified using the Evaluation 10.0 software with the database (PDF-2 2004) provided by the International Centre for Diffraction Data. Formamide (FA) treatment was used to estimate the contents of halloysite and kaolinite in the samples [22].
The Fourier transform infrared (FT-IR) spectra for each sample were obtained in transmission mode on pellets containing a pressed mixture of approximately 1.0 mg of the sample and 100 mg of KBr. The IR spectra were recorded in the range from 4000 to 400 cm−1 with a resolution of 2 cm−1 (Shimadzu IR Prestige-21 spectrometer, Kyoto, Japan).
Scanning electron microscopy (SEM) coupled with energy dispersive X-ray spectroscopy (EDS) (Quanta 450, FEI Company, Hillsboro, OR, USA) was initially used to analyze the morphology of the minerals and the elements present in the samples. Transmission electron microscopy (TEM) images were obtained with a JEM 1010 operated at an accelerating voltage of 200 kV. The samples were suspended in ethanol, dropped onto 200 mesh copper grids covered with amorphous Formvar carbon, and allowed to evaporate.
Thermogravimetric analyses (TG) were carried out on a SETARAM instrument (Caluire-et-Cuire, France). Approximately 2-3 mg of each sample was heated from 50 to 1050 °C in a platinum crucible at a heating rate of 10 °C min−1, under an atmosphere of high-purity N2.
The specific surface area of the samples was measured from N2 gas adsorption at 77 K using a TriStar 3000 (Micromeritics Corp., Norcross, GA, USA). Surface areas were calculated from the linear part of the Brunauer-Emmett-Teller (BET) plot. The N2 isotherms and the Barrett-Joyner-Halenda (BJH) method were used to calculate the pore size distributions of the halloysites.
XRD Analysis
XRD patterns of the <2 µm size fractions of the UPS and LOS samples in the natural (untreated) state are presented in Figure 2. The results indicate that minerals of the kaolin group coexist in the samples. The basal reflections of 10 Å halloysite were recorded at peaks of 10.0, 4.44, 3.36 and 2.56 Å (Al2Si2O5(OH)4·2H2O, hexagonal structure, PDF No. 00-29-1489). The peaks at 7.38 and 3.60 Å refer to kaolinite (Al2Si2O5(OH)4, with a triclinic structure, PDF No. 01-089-6538). For the measurement of the halloysite and kaolinite contents, the formamide (FA) treatment was applied to the samples following Churchman et al. [22]. The percentage of halloysite, a, in a sample was defined by the equation a = I10/(I10 + I7) × 100, where I7 and I10 denote the heights of the peaks near 7 and 10 Å in the XRD patterns, respectively. Figure 3 shows the XRD results of the UPS and LOS samples with size fractions <2 µm after formamide treatment.
It can be seen that the intensities of the 10 Å peak are significantly higher than those of the 7 Å peak in both the UPS and LOS samples. The estimated percentages of halloysite in the samples from Figure 3 are approximately 81% and 93% for the UPS and LOS samples, respectively.
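A small numerical sketch of this estimate is given below. The relation a = I10/(I10 + I7) × 100 is the form implied by the text, and the peak heights used here are hypothetical, chosen only so that the two examples land near the reported 81% and 93%.

def halloysite_percent(i7, i10):
    """Halloysite share (in %) of the kaolin-group minerals, from the heights of
    the ~7 A (kaolinite) and ~10 A (expanded halloysite) peaks after formamide."""
    return 100.0 * i10 / (i10 + i7)

# Hypothetical peak heights (arbitrary counts), not the measured values:
print(round(halloysite_percent(i7=190, i10=810), 1))   # ~81.0, UPS-like
print(round(halloysite_percent(i7=70, i10=930), 1))    # ~93.0, LOS-like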
One of the models for the transformation from halloysite to kaolinite during the weathering of crystalline rocks is based on a dissolution and recrystallization mechanism [39,40]. According to this model, halloysite is thermodynamically less stable than kaolinite under the prevailing weathering conditions. Thus, halloysite forms early in the weathering profile and is later dissolved, and kaolinite eventually crystallizes under suitable weathering conditions, which depend on factors such as time and the activity of the water table in each weathering zone. In this study, the finding that the halloysite content in the lower zone (93%) is higher than that in the upper zone (81%) of the weathered pegmatite profile is consistent with the results previously reported by Inoue et al. [41].
FT-IR Analysis
Figure 4 shows the FT-IR spectra of the two samples (UPS and LOS). It can be seen that the IR spectra of the samples are quite similar and both indicate the presence of kaolin minerals [42]. The absorption bands at 3696 and 3620 cm−1 are assigned to the stretching vibrations of the inner-surface O-H groups. The absorption at 1640 cm−1 is assigned to the interlayer water [43]. The intensity of this absorption band increases as the interlayer water content increases, which may reflect the higher halloysite content in the LOS sample.
Electron Microscopy Analysis
The scanning electron microscopy (SEM) images and EDS data of the samples with the size fraction <2 µm are shown in Figure 5. The rod-shaped minerals are interwoven and overlap each other as matrices. From these images, it can be seen that two types of halloysite may be present in the samples: short halloysites in the upper zone (UPS sample) and long halloysites in the lower zone (LOS sample) of the weathered pegmatite profile in the study area. The EDS spectra show the main elements Al, Si, and O, consistent with the halloysite chemical formula (Al2Si2O5(OH)4·2H2O). The transmission electron micrographs in Figure 6 also clearly display the tubular morphology of these minerals. At the same magnification, the lengths of the halloysites in the UPS sample are generally shorter than those in the LOS sample (Figure 6A,B). Close-up views of these minerals are presented in Figure 6A1,B1. The distributions of the lengths and outer diameters of these halloysites, measured from the TEM images, are presented in Figures 7 and 8, respectively. The results show that the short halloysites in the UPS sample are distributed mainly in the length range from 250 to 750 nm, accounting for 47.2% of the halloysites in the sample. Meanwhile, long halloysites with lengths of 750 to 1250 nm are dominant in the LOS sample, at 69.9% (Figure 7). In addition, short halloysites with an outer diameter of >100 nm constitute 79.1% of the halloysites in the UPS sample, while long halloysites with an outer diameter of 50-100 nm make up 74.2% of the halloysites in the LOS sample (Figure 8). This difference in the size of the halloysites between the weathering zones may be due to the structure of the early-formed halloysites in the upper zone being partially replaced by new, small kaolinite crystals [41].
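The percentages quoted above are simple bin fractions over the measured tube dimensions. A minimal sketch of that bookkeeping, with a made-up measurement array standing in for the TEM data, is:

import numpy as np

def fraction_in_range(values, low, high):
    """Percentage of measurements falling in [low, high), in the input units."""
    v = np.asarray(values, dtype=float)
    return 100.0 * np.mean((v >= low) & (v < high))

# Hypothetical tube lengths (nm) measured from TEM images:
lengths_ups = np.array([300, 420, 510, 680, 900, 1100, 260, 730])
print(fraction_in_range(lengths_ups, 250, 750))   # share of 250-750 nm tubes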
Thermal Analysis
Figure 9 presents the weight loss traces, i.e., the thermogravimetry (TG) and derivative thermogravimetry (DTG) curves, of the UPS and LOS samples. As can be seen, two main mass loss steps were identified in the TG curves. First, the endothermic peaks at 91.4 °C with a mass loss of 0.6% and at 87.5 °C with a mass loss of 9.5% are ascribed to the removal of physisorbed water in the UPS and LOS samples, respectively. The second endothermic peaks, at 510.4 °C (mass loss of 12.2%) for UPS and at 516.0 °C (mass loss of 13.0%) for LOS, are due to the dehydroxylation of the structural aluminol groups in halloysite. The TG (DTG) curves from these thermal analyses are in agreement with the previous literature [45-47]. The difference in the halloysite percentages of the samples (81% and 93% halloysite for the UPS and LOS samples, respectively) may be one of the reasons for the differences in the temperatures of the endothermic peaks and the corresponding weight losses [19].
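Since the DTG curve is simply the temperature derivative of the TG mass signal, the reported peak temperatures can be located numerically once a TG trace is available. A generic sketch, using a synthetic trace rather than the measured data, is:

import numpy as np

def dtg_peak_temperature(temperature_c, mass_percent):
    """Temperature at which the mass-loss rate |d(mass)/dT| is largest."""
    t = np.asarray(temperature_c, dtype=float)
    m = np.asarray(mass_percent, dtype=float)
    dtg = np.gradient(m, t)
    return t[np.argmax(np.abs(dtg))]

# Synthetic TG trace with a single dehydroxylation-like step near 510 degrees C:
T = np.linspace(50, 1050, 501)
mass = 100 - 12.0 / (1 + np.exp(-(T - 510) / 20))
print(dtg_peak_temperature(T, mass))   # ~510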
Surface Area and Pore Size
The nitrogen adsorption-desorption isotherms of the UPS and LOS samples are displayed in Figure 10. These isotherms are of type II with an H3 hysteresis loop over the relative pressure (P/P0) range, and this type of isotherm is a typical characteristic of mesoporous structures [48]. The values of the specific surface areas (SBET) and the cumulative specific surfaces obtained from the isotherms for both adsorption (Sads) and desorption (Sdes) of nitrogen on each sample are summarized in Table 1. The surface areas determined for the UPS and LOS samples are SBET of 15.7434 and 22.0211 m2/g, respectively. The halloysites in the LOS sample have a higher SBET value than those in the UPS sample because the halloysites in the LOS sample have a longer and thinner cylindrical structure [37,49]. The average pore sizes for the UPS and LOS samples are 18.9837 and 17.0281 nm, respectively.
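For readers unfamiliar with how SBET is obtained from an isotherm, the sketch below applies the linearized BET equation to a few hypothetical adsorption points in the usual 0.05-0.30 relative-pressure window; it is a generic illustration, not a recalculation of the values in Table 1.

import numpy as np

N_A = 6.022e23        # molecules per mole
SIGMA_N2 = 0.162e-18  # cross-sectional area of an adsorbed N2 molecule, m2
V_MOLAR = 22414.0     # molar volume of gas at STP, cm3/mol

def bet_surface_area(p_rel, q_ads_cm3g):
    """BET specific surface area (m2/g) from relative pressures and the adsorbed
    N2 quantity in cm3(STP)/g, via the linearized BET plot."""
    p = np.asarray(p_rel, dtype=float)
    q = np.asarray(q_ads_cm3g, dtype=float)
    y = 1.0 / (q * (1.0 / p - 1.0))                  # BET transform
    slope, intercept = np.polyfit(p, y, 1)
    q_monolayer = 1.0 / (slope + intercept)          # monolayer capacity, cm3(STP)/g
    return q_monolayer / V_MOLAR * N_A * SIGMA_N2    # m2/g

# Hypothetical isotherm points (relative pressure, adsorbed volume):
p_rel = [0.05, 0.10, 0.15, 0.20, 0.25, 0.30]
q_ads = [3.4, 3.9, 4.3, 4.7, 5.1, 5.5]
print(round(bet_surface_area(p_rel, q_ads), 1))      # about 17 m2/g for these points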
The pore size distributions using the Barrett-Joyner-Halenda (BHJ) theory for the UPS and LOS samples are presented in Figure 11.From Figure 11, the narrow peaks centered at 4.3 nm are signed to internal/surface pores, including spaces between the overlaps of folded halloysite sheets in the samples [39].The stronger intensity of this peak in the LOS sample indicates that halloysite formed a more concentrated and uniform pore size distribution.The peaks at 9.2, 10.7, and 13.4 nm are identified as the lumens of halloysites in the samples and are agreeable with measurements on TEM images.The low discrepancy between S ads and S BET indicates that these samples likely contain mainly cylindrical pores of varying radius and slit-shaped pores are the dominant shape in both samples [39,48,49].
The pore size distributions calculated using the Barrett-Joyner-Halenda (BJH) theory for the UPS and LOS samples are presented in Figure 11. The narrow peaks centered at 4.3 nm are assigned to internal/surface pores, including the spaces between the overlaps of folded halloysite sheets in the samples [39]. The stronger intensity of this peak in the LOS sample indicates that its halloysite has a more concentrated and uniform pore size distribution. The peaks at 9.2, 10.7, and 13.4 nm are identified as the lumens of the halloysites in the samples and agree with the measurements on the TEM images.
Conclusions
In conclusion, two main types of halloysite were formed in the weathered pegmatite profile in the Thach Khoan area, Phu Tho, in the northern part of Vietnam. XRD, SEM-EDS, TEM, FT-IR, TG, and N2 adsorption-desorption isotherm analyses were used to characterize these halloysites. The results showed that the short halloysite type is mainly distributed in the upper zone, while the long halloysite type is found in the lower zone of the weathered pegmatite profile. The short halloysites have lengths ranging mainly from 250 to 750 nm, an outer diameter of >100 nm (79.1%), a specific surface area of 15.7434 m2/g, and an average pore size of 18.9837 nm. Meanwhile, lengths ranging mainly from 750 to 1250 nm (69.9%), an outer diameter of 50-100 nm (74.2%), a specific surface area of 22.0211 m2/g, and an average pore size of 17.0281 nm are the properties of the long halloysites. XRD after formamide (FA) treatment indicated that the halloysite contents are approximately 81% and 93% for the upper and lower zones of the weathered pegmatite profile, respectively. The results provide useful information for understanding the distribution and characteristics of the different halloysites in the deposit and for exploiting and using these nanotubular minerals effectively.
Figure 1 .
Figure 1. Vietnam map and the site of the study area (A); geological legend (B) and geological map of the study area (C).
Figure 3 .
Figure 3. XRD results of the UPS and LOS samples after formamide treatment.
Figure 4 .
Figure 4. FT-IR graphs of the UPS and LOS samples.
Figure 5 .
Figure 5. SEM-EDS images of the UPS and LOS samples with the size fraction <2 µm. LOS sample (A) and UPS sample (B). O-oxygen; Al-aluminum; Si-silicon.
Figure 6 .
Figure 6. TEM images of the UPS (A,A1) and LOS (B,B1) samples. The scale bar in the (A,B) images represents a length of 500 nm and in the (A1,B1) images 100 nm.
Figure 7 .
Figure 7. The distributions of the lengths of halloysite from TEM for the UPS and LOS samples. The unit is nanometers (nm).
Figure 8 .
Figure 8. The distributions of the outer diameters of halloysite from TEM for the UPS and LOS samples. The unit is nanometers (nm).
Figure 10 .
Figure 10. Nitrogen gas adsorption-desorption isotherms of the UPS and LOS samples.
Figure 11 .
Figure 11. BJH pore size distribution of the halloysite in the UPS and LOS samples.
Table 1 .
Surface area and pore size data of the UPS and LOS samples.
| 2019-04-27T01:21:05.327Z | 2018-07-08T00:00:00.000 | {
"year": 2018,
"sha1": "6266a87bb368991dff9d76c05b9b72239a379fc0",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2075-163X/8/7/290/pdf?version=1531016314",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "6266a87bb368991dff9d76c05b9b72239a379fc0",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
252584647 | pes2o/s2orc | v3-fos-license | Analysis of Weaving Design and Symbols of Traditional Weaving Cloth in Koting
This research aims to identify and analyze the meanings contained in the types of designs and symbols on weaving cloth, particularly the cloth used by women in Koting village. The research used qualitative methods. The data collection techniques used in this research are observation, interviews, and documentation study. The results of the analysis identified several types of weaving, with design types such as geometric designs, nature designs, and combination designs. In addition, several types of symbols were found, such as lines, shapes, and colors. The researcher concludes that the overall meaning contained in the weaving cloth is the self-esteem of a woman and a reflection of her goodness, such as loving, protecting, and respecting within the family and beyond.
INTRODUCTION
Indonesia is a country that has a diversity culture of high value which has been passed down over generations as a reflection of the nation's culture. (Azzizah, 2015) pointed out that Indonesia has 300 ethnic groups. This is greatly influenced by the geographical location which is an archipelagic country so that each region has a different culture, as well as the weaving culture.
Fister, cited in (Pattinama, 2011), notes that Indonesia is recognized as one of the largest weaving-producing countries in the world, especially in terms of ornamental diversity. Weaving, or ikat weaving, is part of the cultural diversity that must be preserved because it can enrich the nation's character with various designs and patterns. Ikat weaving has developed within each tribe in Indonesia from generation to generation as an activity that preserves traditional arts (Panta, 2022). This is supported by (Moalosi, R., Popovic, V. and Hickling-Hudson, 2006): a woven fabric becomes the cultural identity of a society and serves as socio-cultural interaction within a local context. In addition, weaving is not only an artistic difference influenced by cultural differences; it embodies valuable concepts, beliefs, customs, rituals, habits, and ideas shaped into a weaving design (Temesgen, 2018). Hence, it is crystal clear that weaving is an inseparable component of societal life.
There are various kinds of weaving culture in Indonesia, especially in Sikka society. Sikka is a regency on Flores island that is rich in weaving designs. If re-examined in terms of motifs, techniques, manufacturing processes and origins, ikat weaving can be considered to hold deep values and meanings for the community, including spiritual, political, and socio-economic values (Elvida, 2015). Each area in Sikka regency has different designs and symbols. One of the areas in Sikka that still preserves the weaving culture is Koting village. This ancestral culture is still maintained and developed by the women of Koting village. Weaving is an original cultural work of Koting village, long carried out by the skilled hands of women using traditional tools. The weaving process takes a long time, passing through the stages of making threads, tying the threads to form designs, coloring, and weaving to produce a piece of cloth. This weaving produces very diverse designs of attractive lines, colors, symbols and decorations, each of which has its own meaning and is difficult to interpret.
However, as time goes by, the meaning of each weaving cloth has slowly begun to disappear. Today's Koting community and younger generation only know how to use these weaving cloths, without knowing the types and meanings of the designs and symbols on each cloth.
Based on the background above, the researcher intends to identify the types and meanings of weaving designs and symbols in Koting village, especially in the weaving worn by women, called utan, in order to increase the community's and younger generation's knowledge of the meaning of weaving designs.
RESEARCH METHODS
This research used a qualitative method, applying observation and interviews as data collection techniques. Qualitative methods are often used to capture the experiences and lived meanings of the subject's everyday world (Brinkmann, S., & Kvale, 2015). During observation, the researcher examined the process of making weaving cloth, starting from spinning thread, tying the threads to form designs and symbols, coloring, and weaving into a finished cloth. In the interviews, the researcher collected information about the types and meanings of the weaving designs and symbols from the leader and members of the weaving craftsmen group in Koting village. Through documentation study, the researcher obtained several pictures of weaving cloths.
RESULTS OF RESEARCH AND DISCUSSION
The results are presented according to the following aspects: the types and meanings of weaving designs and symbols in Koting village, especially in the weaving worn by women, called utan. The researcher found two types of design and three types of symbols. The weaving designs found in utan are (a) natural designs, such as plants and animals, and (b) combination designs, which apply geometric and natural elements such as plants, animals, and rhombuses. The types of symbols applied to utan weaving are lines, shapes, and colors.
Natural Design
Natural designs are designs strongly influenced by shapes that exist in nature. Their manifestations can take the form of animals (fauna), plants (flora), humans, flowers, mountains, clouds, stars and so on (Alamsyah, 2019). This research found 5 classifications of natural weaving design in utan, namely: (1) Pedan Puhun (Pineapple Flower)
Figure 1. Pedan Puhun Design
It consists of two words, namely 'Pedan' (pineapple) and 'Puhun' (flower), and carries a meaning of strength and courage that always reflects kindness, especially toward family and others. The making of this design was inspired by the pineapple plants found around people's lives. Because the pineapple plant always stands upright and strong and can grow in any season, people believe that every woman who wears this weaving will appear as someone with a leadership spirit who always adapts to other people. This weaving has been used, then as now, by all women from children to the elderly as daily clothes, traditional clothes, church clothes and party clothes. The pedan puhun weaving design contains several types of symbols, such as Pedan Wuan (pineapple fruit):
Figure 2. Symbol of Pedan Wuan
This symbol reflects goodness such as humility, courtesy, and not hurting others. This is because the taste of pineapple is considered sweet, so the people of Koting believe that the pineapple symbol reflects goodness, like a sweet pineapple fruit.
Figure 3. Symbol of Pedan Ubun
This symbol means being a strong leader who always nurtures and protects his family. This is because the people of Koting see the pineapple crown always standing upright on top of the fruit, so people believe that this symbol reflects a spirit of leadership.
Figure 4. Symbol of Pedan Roun
This symbol means protecting, respecting, loving and caring for each other within a family and beyond. It is drawn from the long, thorny leaves that represent the spirit of defending oneself from any security disturbance, wherever it comes from, so people believe that this symbol reflects protection and mutual love. (2) Dala Mawarane (Morning Star): this design consists of two words, namely 'Dala' (star) and 'Mawarane' (morning). The public believes that wearing it always brings light or guidance and also repels disaster, meaning that people who wear this weaving will be safe from those with bad intentions toward them. This weaving is used, then as now, by all women from children to the elderly as daily clothes, traditional clothes, church clothes and others. The dala mawarane weaving design contains several types of symbols, such as Dala (star):
Figure 6. Symbol of Dala
This symbol means giving light to everyone, because the stars shine brightly at night. The people of Koting believe that placing this symbol on weaving brings light to themselves and others.
Figure 7. Symbol of Dala Telu
This symbol means the family unit consisting of husband, wife and children standing side by side. This is because the stars in the sky always shine together at night to illuminate the universe, so placing this symbol on weaving reflects a family that always stands side by side and works together.
Figure 8. Symbol of Line with a rhombus
This symbol means good relations within the family and with neighbors, and protection from calamity. The rhombus is generally found in weaving and is usually combined with other designs or symbols to make the weaving look attractive and beautiful, so its meaning here differs slightly from the general understanding above. The people of Koting believe that placing this symbol on weaving reflects good relationships and mutual protection between each other.
Figure 9. Manuwalu Design
The Manuwalu design carries the meaning of protection in family life, like parents protecting their children. This weaving is worn by all women, from children to parents, as daily clothes, traditional clothes, church clothes and others. The Manuwalu weaving design contains several types of symbols, such as Manu Inan (hens):
Figure 10. Symbol of Manu Inan
This symbol means parents who act as protectors. The chicken symbolizes awareness, because it always crows every morning, awakening humans; it also symbolizes life and protective leaders. Related to human life, it is like a mother and father caring for their children with love, the same as a parent who pays attention to and takes good care of her children.
Figure 11. Symbol of manu anak walu
This symbol means the children in a family. It comes from chickens, which always take care of their young, just like humans. In a family there are parents who have children, and these children have the right to receive love from their parents.
The Koja wulet design symbolizes having many descendants. In the past, people believed that if a bride wore this weaving, she would be blessed with many offspring, and each offspring would give birth to new offspring, like a bunch of walnuts. This weaving was originally used by adult women who were ready to get married, but with the changing times it is now used by all women from children to the elderly, whether as daily clothes, traditional clothes, church clothes, party clothes or others. The Koja wulet weaving design contains several types of symbols, such as:
Figure 13. Symbol of Line
This symbol represents family relationships that are always harmonious and loving. This can be seen from the line that is always connected to the triangles and never breaks.
Figure 14. Symbol of Rhombus with many fillings
A rhombus with many fillings symbolizes many descendants or children. The making of this symbol on the weaving was inspired by a large cluster of walnuts, reflecting many descendants.
Figure 15. Rempe Sikka Design
The Rempe Sikka design, meaning sunflower, carries the meaning of loyalty, obedience and harmony in a family. It is inspired by the beautiful sunflower, which always faithfully faces the sun. This weaving was originally used by adult women who were ready to get married, but with the changing times it is now used by all women from children to the elderly, whether as daily clothes, traditional clothes, church clothes, party clothes or others. The Rempe Sikka weaving design contains several types of symbols, such as:
Figure 16. Symbol of big circle
This symbol depicts the sun shining brightly, symbolizing a life full of joy and happiness. This is because the large circle at the center of the sunflower seems to depict the shining sun. The yellow-orange color of the sunflower suggests vitality and energy, as well as fertility, happiness, health, wisdom, nourishment, light, and warmth.
Figure 17. Symbol of small flowers
This symbol represents togetherness and unity in a family, because within the diameter of the sunflower there are small flowers, which people consider to reflect the joy and togetherness of a family.
Combination Design
A combination design combines various designs, arranged in such a way as to add to the beauty of the cloth, for example animals combined with plant designs (Alamsyah, 2019). In this research, only one classification of combination weaving design was found in utan, called Jarang Atabian (Horse and Human).
Figure 18. Jarang Atabian Design
It consists of two words, namely 'Jarang' (horse) and 'Atabian' (human). The horse, as a vehicle carrying spirits to the afterlife, holds the philosophical meaning that human life cannot be separated from death, because life in this world is only temporary. Even so, humans will never become completely extinct: new life emerges after the old life ends. This weaving was worn by adult women and parents at times of death or mourning, but with the changing times it is now used by all women from children to the elderly, whether as daily clothes, traditional clothes, church clothes, party clothes or others. The Jarang Atabian weaving design contains several types of symbols, such as Jarang (horse):
Figure 19. Symbol of Jarang
This symbol means a vehicle for spirits to the afterlife; it is placed on the weaving because people consider horses to be human vehicles in both worldly life and death. The rhombus with a small horse signifies the human lineage, in which new life emerges after death. The rhombus is generally found in weaving and is usually combined with other designs or symbols to make the weaving look attractive and beautiful, so its meaning here differs slightly from the general understanding above. These symbols are placed on the weaving because people observe that every human who has died must have offspring, and these descendants are symbolized in the form of a small rhombus and a little horse.
CONCLUSION
Based on the results and discussion, a conclusion is proposed. Weaving cloth is a work of ancestral heritage art with very significant value for Sikka women, especially the women of Koting village. Several designs and symbols were found in the weaving cloths of women: two types of weaving design, namely natural designs (plants and animals) and combination designs (combinations of plants, animals, and rhombuses), and the types of symbols applied are lines, shapes, and colors. Based on the results of this research, there were 5 classifications of natural weaving design, namely pedan puhun, dala mawarane, manuwalu, koja wulet, and rempe sikka, while in combination weaving design there was only one classification, namely Jarang Atabian. Moreover, each design and symbol carries a valuable meaning. | 2022-09-29T15:21:43.120Z | 2022-08-20T00:00:00.000 | {
"year": 2022,
"sha1": "9cd261cbc39ff20cf90af4d8ebe8b391fe33d26c",
"oa_license": "CCBY",
"oa_url": "http://jurnal.unsyiah.ac.id/riwayat/article/download/27461/16293",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "35a35e6ccd281e4c5d8999905fe3d2d132853289",
"s2fieldsofstudy": [
"Art"
],
"extfieldsofstudy": []
} |
236203953 | pes2o/s2orc | v3-fos-license | Postural orientation, what to expect in youth athletes? A cohort study on data from the Malmö Youth Sport Study
Background Studies investigating postural orientation in uninjured youth athletes are scarce. Understanding how postural orientation during functional performance tests changes with age in uninjured athletes has the potential to enhance awareness of changes in performance after injury and to set realistic goals for injured athletes. Thus, the aim of this study was to explore postural orientation during functional tasks at early adolescence, and changes in postural orientation from early to middle adolescence, and relate this to sex, type of sport and right leg lean body mass (RLLBM). Methods In this cohort study 144 (38% female) youth athletes (mean age 13.5 years, SD 0.3) were included at baseline and 86 of these at follow-up 2 years later. Four functional performance tests were visually evaluated for Postural Orientation Errors (POEs) with an ordinal scale, ranging from 0 (good) to 2 (poor), yielding a maximum total POE score of 51; RLLBM was measured by dual energy X-ray absorptiometry. Results Improvements were observed in the total POE score from baseline to follow-up, median difference − 10 and − 7 (p < 0.001) for female and male athletes, respectively. At follow-up, female athletes had a lower total POE score (median 18) than males (median 24) (p = 0.01). There were no differences in POE scores between sports types (team, individual, aesthetic) (p = 0.20–0.98) and no relationship between total POE score and RLLBM (rs = 0.09, p = 0.42). Conclusions POEs appear to be quite common in a young athletic population, but improvements are achieved over time. At mid-adolescence, female athletes seem to have fewer POEs than males. Neither sport type nor RLLBM seems to influence postural orientation.
to girls who reach an earlier plateau [9]. Hence, an important factor to consider when examining youth athletes is their sex and age because this may influence their functional performance, including quality of movement.
Postural orientation is one aspect of movement quality, which is defined as the ability to maintain an appropriate relationship between the body segments and between the body and the environment when performing a task [12]. Postural orientation is one component that, together with postural stability, constitutes postural control [13]. Postural control has been noted to be important for performance in several sports [14][15][16]. For example, soccer players have been found to have better postural control, in terms of less postural sway, compared to participants involved in limited contact sport or no sport at all [14]. However, to our knowledge, research on the relationship between postural orientation and sport performance seems to be lacking.
For postural orientation of the lower extremity, the knee joint is often described as normal (aligned), varus, or valgus [17] where valgus has been highlighted as a factor associated with knee injury [18][19][20]. For example, some anterior cruciate ligament (ACL) tears in sports involve a noncontact mechanism, with the lower extremity displaying a dynamic knee valgus moment at the impact of injury [18]. Furthermore, patients with ACL injury seem to have different lower limb biomechanics [21], in addition to poorer trunk control [22] and poorer postural stability [13], compared with healthy controls. However, the interpretation of data from these studies in a clinical setting without any reference values from an uninjured population is challenging.
From a sport medical perspective, reference values from age-matched athletic controls without injury are important to identify any abnormal and/or impaired values when testing groups of patients. To our knowledge, there is only one study [23] investigating postural orientation during a functional performance task in healthy children and adolescents (from 9 to 16 years of age). In that study, no differences between sexes in absolute values were noted, but a different effect of age for boys and girls [23]. However, only one dynamic test, the single-limb mini squat, was performed to assess postural orientation and only the medio-lateral knee position was analyzed [23]. In addition, no data on sport participation were specified for the participants in the study; thus, it was unclear whether they were athletes. Studies investigating postural orientation in youth athletes without injury are therefore warranted.
Understanding how postural orientation during functional performance tests change with age in uninjured athletes has the potential to enhance awareness of changes in performance after injury and to set realistic goals for injured athletes. The knowledge from the present study could help sport physical therapists, coaches and/or athletic trainers when assessing functional performance in young athletes.
The aims of this study were to explore: 1) postural orientation during functional tasks at early adolescence (baseline assessment); 2) any changes in postural orientation from early to middle adolescence (baseline to follow-up 2 years later); 3) any sex differences in postural orientation; 4) differences in postural orientation between different sports; and 5) the relation between postural orientation and lean body mass, in youth athletes.
Study design
Data for this cohort study, which follows the STROBE statement [24], were collected during 2013-2017 as part of the Malmö Youth Sport Study (MYSS), described in detail in previous publications [25,26]. In summary, MYSS is an ongoing longitudinal cohort study, including boys and girls (later young men and women), investigating physiological, psychological and social factors associated with sports performance, academic success and long-term physical activity [25]. The participants in the MYSS project are young athletes attending a sport school in the southern part of Sweden, part of the Swedish National Sport Education program, aiming for an elite career. The selection criteria for acceptance to the school are sport merits, and one of the aims of the education is sport talent development. The school provides organized sport-specific training during school hours, allowing students to combine educational work with sports. The students practice their sport during school hours (≥450 min/week) in addition to regular training and competitions after school.
Participants and procedure
In this report, we included athletes who were involved in team sports (soccer, ice hockey, floorball and basketball), individual sports (swimming, athletics, tennis, badminton and squash), or aesthetic sports (diving, figure skating and artistic gymnastics). From the total cohort of the MYSS project (n = 156), 144 (38% girls) healthy adolescent athletes were included. Thirteen-year-old athletes were assessed at baseline, during the winter months (Table 1), and, of these, 86 were assessed approximately 2 years later in the same season and with the same test battery. Their mean (SD) age was 13.5 (0.3) years at baseline and 15.6 (0.3) years at follow-up.
Trained physiotherapists collected data, with video recording, for functional performance tests at the athletes' school. Anthropometrics (height, weight and total body lean body mass (TBLBM)) were measured in a laboratory setting on another occasion within 1-3 months of the functional performance test session. Athletes with any difficulty moving around on the day of testing, or reporting a lower extremity injury limiting the completion of the tasks, were excluded (n = 1).
Measurements of postural orientation errors
The athlete's performance during 4 functional performance tests, previously described by Nae et al. [27,28], was videotaped, using a digital camcorder (1920 × 1080 pixels; 30 Hz; Everio GZ-HM650BE; JVC, Yokohama, Japan) placed on a tripod in front of the athlete, perpendicular to the frontal plane, for later assessment of Postural Orientation Errors (POEs). To ensure that the whole movement was captured during testing, the camera was positioned 2-4 meters (m) in front of the athlete, in line with his or her waistline (approx. 1 m off the floor).
Athletes were instructed to wear shorts and a tight top. The first test (single-leg mini squat) was performed barefoot, whereas in the remaining tests athletic shoes were worn. All athletes performed the tests in the order that they are described below, starting on the right leg. Prior to each test, the test leader gave standardized instructions along with a visual demonstration of the test. The athlete was allowed 2-3 practice trials per test, or per side for the one-legged test, before initiating the testing. For the Drop-jump, practice trials were given until the athlete was familiarized with the procedure.
Single-leg mini squat
For the Single-leg mini squat (SLS), the athlete was standing with the arms alongside the body on one leg and with the second toe placed on a longitudinal line. The athlete was asked to bend his/her knee, without bending forward from the hip, until he/she no longer could see the line along the toes (corresponding to about 50 degrees of knee flexion), and then return to extension. The test was repeated 5 times on each leg and POEs were assessed during the entire movement, from starting position through return to this same position.
Forward lunge
For the Forward Lunge (FL), the athlete was standing with the arms alongside the body and with feet hip-width apart on the floor. The athlete took a long stride forward, about 1 m, flexed the knee to approximately 90°, and pushed back to the starting position by extending the front leg. The test was repeated 5 times on each leg and the front leg was assessed in the landing phase from initial contact until maximum flexion of the knee.
Drop-jump
The Drop-jump (DJ) test was performed with the athlete standing on a step board, approximately 30 cm high, with feet hip-width apart. The athlete dropped from the step-board with both feet leaving the box simultaneously, then performed a maximal vertical jump upon landing. Arm swing was allowed during the jump and the jump was repeated 3 times. The POEs were assessed during the first landing, from first contact with the floor to extended knees.
Single-leg hop for distance
For the Single-leg hop for distance (SLHD), the athlete was standing on one leg, with the other leg lifted from the floor by flexing the knee. The athlete jumped forward as far as possible, taking off and landing on the same foot with a safe and controlled landing, maintaining balance for 2 to 3 s. Arm swing was allowed during the jump. The test was repeated 3 times and POEs were assessed during landing, from first contact with the floor to extended knee.
Scoring of postural orientation errors
A trained physical therapist (SRA) observed and rated POEs from the video recordings according to a previously evaluated protocol [28]. POEs were assessed by evaluating 1) pronation of the foot (SLS only), 2) knee medial-to-foot position (KMFP), 3) femoral valgus, 4) deviation of the pelvis in any plane (lateral deviation, tilt and/or rotation of the pelvis) and 5) deviation of the trunk in any plane (forward, lateral and/or rotation), as described [28]. A 3-point ordinal scale, ranging from 0 to 2, was used for the evaluations, with 0 indicating good postural orientation (no signs of POEs), 1 fair (minimal signs of POEs), and 2 poor (clear signs of POEs). When the execution did not have any similarity to the intended test, a score of 3 was given, representing very poor postural orientation; thus, the maximum within-task POE score was given [28]. A POE was scored as fair or poor when it occurred at least 3 out of 5 times in the tasks performed with 5 repetitions and at least 2 out of 3 times in the tasks performed with 3 repetitions (Table 2). Both the within-task POE score (the sum of all POEs within a task) and the total POE score were converted to a percentage scale (0-100) and used in the analyses (Table 2).
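To illustrate the scoring arithmetic, the following Python sketch computes within-task and total POE scores on the 0-100 scale. The item lists and the maximum item score of 3 are inferred from the protocol above (maximum raw scores of 15 for the SLS, 12 for the other tasks, and 51 in total); the example ratings are invented, not study data.

TASK_ITEMS = {
    "SLS":  ["foot_pronation", "KMFP", "femoral_valgus", "pelvis", "trunk"],
    "FL":   ["KMFP", "femoral_valgus", "pelvis", "trunk"],
    "DJ":   ["KMFP", "femoral_valgus", "pelvis", "trunk"],
    "SLHD": ["KMFP", "femoral_valgus", "pelvis", "trunk"],
}
MAX_ITEM_SCORE = 3  # 0 good, 1 fair, 2 poor; 3 when the whole task is "very poor"

def within_task_score(task, ratings):
    # Sum of item ratings within one task, rescaled to 0-100
    max_raw = MAX_ITEM_SCORE * len(TASK_ITEMS[task])  # 15 for SLS, else 12
    return 100 * sum(ratings[i] for i in TASK_ITEMS[task]) / max_raw

def total_poe_score(all_ratings):
    # Raw sum over all 17 items, divided by the maximum of 51, times 100
    raw = sum(all_ratings[t][i] for t in TASK_ITEMS for i in TASK_ITEMS[t])
    return 100 * raw / 51

# Example: one athlete's invented ratings for the right leg
athlete = {
    "SLS":  {"foot_pronation": 1, "KMFP": 2, "femoral_valgus": 1, "pelvis": 0, "trunk": 1},
    "FL":   {"KMFP": 1, "femoral_valgus": 0, "pelvis": 1, "trunk": 0},
    "DJ":   {"KMFP": 0, "femoral_valgus": 0, "pelvis": 1, "trunk": 0},
    "SLHD": {"KMFP": 2, "femoral_valgus": 1, "pelvis": 1, "trunk": 1},
}
print(round(within_task_score("SLS", athlete["SLS"]), 1))  # 33.3
print(round(total_poe_score(athlete), 1))                  # 25.5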
Reliability analysis
Intra-rater and inter-rater reliability were evaluated from 10 athletes' video recordings for the within-task POE scores and the total POE score. Inter-rater reliability was assessed by two authors (SRA and JN) for each within-task POE score with Cohen's kappa [29,30] and showed substantial to almost perfect agreement (kappa values from 0.74 to 0.88, p ≤ 0.0001), according to Landis and Koch [31]. Intra-rater reliability, analyzed on two separate occasions within 2 weeks, for each within-task POE (assessed by the author SRA) was calculated with the intra-class correlation coefficient (ICC(2,1)), with the two-way random effect model (absolute agreement definition, 95% confidence intervals (CI)), and indicated excellent agreement (ICC(2,1) values from 0.824 to 0.98, p ≤ 0.002). Inter-rater reliability for the total POE score showed excellent agreement (Cohen's kappa value 0.875, p < 0.001). A Wilcoxon's rank test was also calculated, revealing no systematic difference between raters for the total POE score (p = 0.32). Intra-rater reliability for the total POE score, assessed with ICC(2,1), was 0.95 (CI: 0.82-0.99, p < 0.001).
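To make the agreement statistic concrete, below is a small, dependency-free Python sketch of Cohen's kappa for two raters on the same ordinal scores; the ratings are invented placeholders, not the study's data (the authors used standard statistical software).

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    # Observed agreement vs. agreement expected by chance
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    categories = set(freq_a) | set(freq_b)
    expected = sum(freq_a[c] * freq_b[c] for c in categories) / n**2
    return (observed - expected) / (1 - expected)

# Two raters scoring the same 10 video recordings (invented values)
a = [0, 1, 2, 1, 0, 2, 1, 1, 0, 2]
b = [0, 1, 2, 1, 1, 2, 1, 1, 0, 2]
print(round(cohens_kappa(a, b), 2))  # 0.85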
Anthropometry
Body height (cm) was measured with a Holtain Stadiometer (Holtain LTD, Pembrokeshire, UK) and body mass (kg) with an electric scale (Avery Berkel HL 120 Electric Scale, Avery Berkel, West Midlands, UK). Total body lean body mass (TBLBM) and right leg lean body mass (RLLBM) were measured by dual energy X-ray absorptiometry (DXA) (iDXA® version enCore 13.60, Lunar Corporation, Madison, WI, USA). When estimating TBLBM and RLLBM we used a total body scan and standard adult software. The measurements were done with the participants non-fasting, dressed in light clothes, with no shoes, and with the athletes in a supine position according to the standard procedure recommended by the manufacturer. Two trained research technicians performed all measurements and software analyses. All measurements were done within 1-3 months of the functional performance test session. The DXA apparatus was calibrated daily by use of a phantom. The coefficient of variation (CV%) for TBLBM was 0.6%.
Statistical methods
Statistics were calculated using IBM SPSS (IBM SPSS Statistics for Windows, Version 23.0. IBM, Armonk, NY). Descriptive data are presented with median and quartiles for categorical data, while means and standard deviation (SD) were used to describe continuous data. A small, likely non-clinically relevant, difference was found between the right (median 3, quartiles 1-4) and left legs (median 2, quartiles 2-3) in the DJ test (p = 0.002). No other differences were observed between the right and left legs; therefore, data were analyzed for the right leg only. For the drop-out analysis, demographic data (height and weight) and baseline screening results are presented for the drop-outs (those who did not attend at follow-up, n = 58) and the participants (included in the follow-up analysis, n = 86). Males and females were analyzed separately except for the comparison between sports types (team, individual, aesthetic sports). For the comparison of baseline vs follow-up, data were analyzed with the Wilcoxon's rank test for POEs and with the paired-sample t-test, with 95% CI, for RLLBM. The TBLBM value is expressed in kg and percentage of body weight, together with the value relative to body weight (rTBLBM). The Mann-Whitney U test was used for comparison between sexes, and the Kruskal-Wallis H for comparison between sports types. Differences between POE scores for the different tests were analyzed with the Friedman test and Wilcoxon's rank test. Fisher's exact test was used to assess any differences in the distribution of males and females between the groups of different sports types. The Spearman's correlation coefficient was used to analyze the association between changes in POEs (median difference baseline vs follow-up) and changes in RLLBM (mean difference baseline vs follow-up). The level of significance was set at p ≤ 0.05.
[Table 2 (fragment): within-task POE scores are calculated as (sum score/maximum score) x 100; e.g., single-leg hop for distance: (sum score/12) x 100; total POE score: (sum score/51) x 100. KMFP = knee medial to the foot position.]
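As an illustration of the main nonparametric comparisons described above, the following Python/SciPy sketch mirrors the SPSS analyses; all sample arrays are invented placeholders, not study data.

import numpy as np
from scipy import stats

poe_baseline = np.array([31, 28, 35, 24, 29, 33, 27, 30])
poe_followup = np.array([22, 20, 27, 18, 21, 25, 19, 24])

# Paired baseline vs follow-up comparison (Wilcoxon signed-rank test)
print(stats.wilcoxon(poe_baseline, poe_followup))

# Unpaired female vs male comparison (Mann-Whitney U test)
females = np.array([18, 16, 20, 22, 17])
males = np.array([24, 26, 21, 25, 23])
print(stats.mannwhitneyu(females, males, alternative="two-sided"))

# Association between change in POE score and change in RLLBM
d_poe = poe_followup - poe_baseline
d_rllbm = np.array([1.2, 0.8, 1.5, 0.9, 1.1, 1.4, 0.7, 1.3])
rho, p = stats.spearmanr(d_poe, d_rllbm)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")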
Results
Participants: drop-out analysis
A total of 144 athletes were screened at baseline and, of these, 86 athletes were included in the follow-up analysis. At baseline, the total POE score was 29 (q1-3 = 12) for the drop-outs (n = 58) and 31 (q1-3 = 13.5) for the follow-up participants (n = 86). Table 3 gives the baseline values for total POE score, weight, and height for female drop-outs (n = 20) and follow-up participants (n = 34), and for male drop-outs (n = 38) and follow-up participants (n = 52).
Baseline assessments of postural orientation errors
Within-task POE scores for each task and total POE score, at baseline, are given in Table 4 for female and male athletes, as well as POE scores according to sports type. Median POE scores in SLS and SLHD were significantly higher than in FL and DJ for both females and males (p < 0.001).
Changes in postural orientation errors over time
There were significant improvements in the total POE score between baseline and follow-up for both female (p < 0.0001) and male athletes (p < 0.0001) ( Table 5). There were also improvements in all tests (SLS, p = 0.001; FL, p < 0.001; DJ, p < 0.001; SLHD, p = 0.024) for females and in FL (p < 0.001), DJ (p < 0.001) and SLHD (p = 0.001) for males.
Sex differences
At baseline, no differences were found between males and females for any POE scores (p = 0.06-0.42). At follow-up, female athletes scored better in the SLS test (p = 0.004) and had lower total POE score than males (p = 0.01) ( Table 5).
Postural orientation errors in different sports type
POE scores according to sports type are presented in Table 6. There were no differences in sex distribution between the groups of different sports type (p = 0.40).
No differences in POE scores were found between sports types (team, individual, aesthetic) at baseline (p = 0.20-0.98), whereas aesthetic athletes performed significantly better in SLS at follow-up compared to team athletes (p = 0.02). All groups significantly improved their total POE score (p = 0.0001-0.04).
Discussion
The main observation in this study was that POEs seem to be quite common in early adolescent athletes. Both female and male athletes demonstrated rather high POE scores, indicating poor postural orientation, at age 13, with no differences between females and males (p ≥ 0.06). Thus, appropriate postural orientation may not be expected in this young population. When examined 2 years later, both female and male athletes had improved their total POE score between baseline and follow-up, but female athletes scored significantly better in the total POE score (p = 0.012). Neither sport type nor LBM was associated with POE scores. The total POE score (28 and 31 for females and males, respectively) at the age of 13, noted in the present study, is higher than POE scores noted in women (26) and men (20.5) with ACL injury (mean (SD) age 26.7 (6.5) years) [32]. However, in the study on ACL-injured participants, the test battery consisted of five functional performance tests and six segment-specific POEs, yielding a higher maximum total POE score of 73. The median POE scores for the FL and DJ tests were 25 each, and 33 for both the SLS and SLHD tests. This indicates that, at early adolescence, we can expect POEs to a fairly large extent in functional tasks, which is important knowledge for professions that examine athletes in different aspects of neuromuscular control. The POE scores for the SLS and the SLHD were higher than for the FL and DJ, indicating that the SLS and SLHD may better detect POEs in this young athletic population compared to FL and DJ. Further, the higher scores for the SLS and the SLHD suggest that unilateral tests are, not surprisingly, more demanding for maintaining appropriate postural orientation than two-legged tests. One advantage of single-leg tests is their ability to detect between-limb imbalances [33], and thus they might be more useful when aiming to detect differences in postural orientation errors between injured and non-injured limbs.
The significant improvements in the total POE score between baseline and follow-up could be related to natural neuromuscular improvements from early to mid-adolescence. In a previous study on youth tennis players, the authors found large age effects on neuromuscular lower-limb asymmetries (between-limb differences) [33]. However, contrary results have been found in other investigations [10,34]. In a study on elite male youth soccer players, the stage of maturation did not show any effect on the level of asymmetry, in functional performance tasks, in terms of landing force and between-limb difference [34]. In another study [10], investigating neuromuscular control in 1140 youth athletes, no age effects, from 9 to 17 years, were noted in limb alignment measured as medial knee displacement during a drop-jump. Although there was an overall performance enhancement in the current study, the median POE scores for the SLS (20 for females, 33 for males) and SLHD (29 for females, 33 for males) were still rather high at follow-up, suggesting that POEs are still present at mid-adolescence. Yet, there were relatively large improvements for FL and DJ for females (− 17 and − 16.5, respectively) and in FL for males (− 17) compared to the improvements noted in the SLS and SLHD tests (0 to − 8). Hence, at mid-adolescence, the use of single-leg tests might be more suitable to detect postural orientation errors. Taken together, improvements in the total POE score between baseline and follow-up were evident for both female and male athletes, suggesting some kind of maturity effect.
Table 4. Within-task POE score for the single-leg mini squat (SLS), forward lunge (FL), drop jump (DJ) and single leg hop for distance (SLHD), and total POE score, presented for the right leg, for female and male athletes and for different sports types (team, individual, aesthetic) at baseline (n = 144). Values are median (quartiles, minimum-maximum).
In the present study, we did not find any sex differences in POEs at the age of 13. Previous studies have reported that young girls seem to have better postural stability [35] and less body sway than boys [36]. However, these studies included younger non-athlete children, 8-12 and 3-6 years of age, respectively, and measured postural stability (measured as motion of the center of pressure) and not postural orientation.
In the present study, female athletes scored significantly better in total POE score than males at mid-adolescence. One possible explanation for the sex differences at mid-adolescence could be that females mature earlier than males [37] and thus have reached a more developed motor control system. However, we can only speculate regarding the impact of maturity level as no maturity data were collected. No differences in POE scores were found between sports types (team, individual, aesthetic) at baseline, whereas aesthetic athletes performed significantly better in SLS at follow-up compared to team athletes. In addition, there were no differences in sex distribution between groups, and all sport groups significantly improved in the total POE score. The aesthetic group of athletes included diving, artistic gymnastics and figure skating. It might be that these athletes improve functional performance, including awareness of body alignment, within their sports, as it has been demonstrated that sport skill has an impact on balance ability [38,39]. Nevertheless, due to the small sample size of aesthetic athletes (n = 6) in the present study, further studies are needed on the possible association between sport type and postural orientation.
There was a significant increase of 12 kg in TBLBM for the male athletes, from baseline to follow-up, corresponding to an increase in rTBLBM of 2%. Although female athletes' TBLBM also increased (4 kg), the rTBLBM had decreased by 3% from baseline. The fact that males obtain greater amounts of muscle mass, whereas females gain significantly more fat mass, has previously been reported [40]. In addition, whereas no sex differences in muscle strength seem to exist before the age of 14, male athletes have a much greater muscle strength development from age 14 to 17 compared to female athletes [10]. However, in our study, no strength data are available, and the development of muscle mass did not seem to influence postural orientation as no relationship was found between the total POE score and RLLBM. Thus, the improvement in POE scores, baseline to follow-up, may be affected by other factors than development of muscle mass, such as neural adaptations to training and natural maturity of the nervous system. Further studies may investigate if muscle strength and state of maturity influence postural orientation.
Strengths and limitations
This study is the first to provide values of postural orientation and changes over time in youth athletes, related to age, sex and sport type. Another strength is the use of a reliable and valid clinically applicable scoring protocol for assessing POEs. Yet, some limitations in our study need to be recognized. First, the values presented in the present study can only be applied to young athletes aged 13-16 years and cannot be generalized to the general population of the same age. In addition, only 86 athletes, of the 144 at baseline, were evaluated at the 2-year follow-up. We cannot exclude that the drop-outs could have affected the result, although we observed no clinically relevant differences between follow-up participants and drop-outs in baseline total POE score, or characteristics (weight, height). Another limitation is that no definite maturation evaluation, apart from age and TBLBM, or muscle strength assessment was performed. In addition, although injured athletes, at the time of testing, were excluded from participating, we had no data as to whether the included athletes had sustained any previous injuries. Although declared healthy, previous injury might affect physical performance long after onset [41]. Lastly, as there were few aesthetic athletes (n = 6) in the present study, a larger sample size is desirable in future investigations to explore any differences in POEs between types of sports.
Conclusion
Postural orientation errors appear to be quite common in a young athletic population, although improvements were noted from early to mid-adolescence, particularly among females. At early adolescence, there seems to be no sex differences in postural orientation, whereas female athletes may perform better in some functional tests at mid-adolescence. Further, differences between types of sports could not be demonstrated in the present study and the lack of relation between postural orientation and lean body mass indicates that the amount of muscle mass does not seem to influence postural orientation. | 2021-07-24T13:38:55.440Z | 2021-07-24T00:00:00.000 | {
"year": 2021,
"sha1": "43641f34f1bbb19303a3bdf43066bc2f4dbb807a",
"oa_license": "CCBY",
"oa_url": "https://bmcsportsscimedrehabil.biomedcentral.com/track/pdf/10.1186/s13102-021-00307-y",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "43641f34f1bbb19303a3bdf43066bc2f4dbb807a",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Medicine"
]
} |
252010745 | pes2o/s2orc | v3-fos-license | SQSTM1/p62 promotes miR-198 loading into extracellular vesicles and its autophagy-related secretion
MicroRNA dysregulation is a hallmark of hepatocellular carcinoma (HCC), leading to tumor growth and metastasis. Previous screening on patient specimens identified miR-198 as the most downregulated miRNA in HCC. Here, we show that miR-198 compensation leads to self-release into extracellular vesicles (EVs). Importantly, the vesicular secretion is mediated by an autophagy-related pathway, initiated by sequestration of p62/miR-198 complexes in autophagosome-associated vesicle fractions. miR-198 is selectively recognized and loaded by p62 into autophagosomal fractions, whereas mutated miR-198 forms neither induce autophagy nor interact with p62. Gain and loss of function experiments, using a CRISPR/Cas knockout (KO) and transgenic site-specific p62 mutants, identified p62 as an essential repressor of cellular miR-198 abundance. Notably, EVs harboring miR-198/p62 protein complexes can be taken up by cells in the close vicinity, leading to changes of gene expression in recipient cells. In conclusion, miR-198 enhances autophagy; conversely, the autophagic protein p62 reduces miR-198 levels by sorting it into the extracellular space. Graphical abstract: miR-198 is at first transcribed as a primary miRNA; after being processed into the single-stranded mature miR-198 form, it is transported into the cytoplasm ①. By interaction with the p62 protein, miR-198 conglomerates and forms a binding complex ②. Since the LC3 protein is an interaction partner of p62, miR-198 is included into autophagosomes ③. By fusion with multivesicular bodies (MVBs), the miR-198-binding complex is recruited into amphisomes ④, which quickly turn into secretory MVBs containing intraluminal vesicles ⑤. By fusion with the cell membrane, intraluminal vesicles are released into the extracellular space as EVs ⑥. Supplementary Information The online version contains supplementary material available at 10.1007/s13577-022-00765-7.
Introduction
Although macroautophagy (hereafter referred to as autophagy) is classically considered a degradative process [1], accumulating evidence has implicated its role in the secretion of pro-inflammatory cytokines [2], lysozyme release into the extracellular environment [3], and vesicle production [4]. These processes [2][3][4] are collectively termed autophagy-associated vesicle secretion, indicating an autophagy pathway that steadily maintains cell homeostasis and shapes the tissue microenvironment under both normal and pathological conditions. The concept of secretory autophagy has only recently been described and is not yet well understood. Most studies focus on the recruitment of cargos into autophagosomes, but only a limited number of them investigate autophagy-associated vesicle release [2,5,6]. Moreover, the evidence has mostly been presented as the effect of changes in autophagic activity, either by inhibition or by induction, on protein secretion [2,5,6]; however, none of the molecular mechanisms of cargo loading, vesicle release and uptake have been revealed to date.
As an autophagy scaffold protein, SQSTM1/p62 loads cargos into autophagosomes by interacting with the LC3 protein [7]. Under physiological conditions, p62 protein is barely detectable due to its rapid degradation in the lysosomal compartment during the autophagy process [8]. However, in hepatocellular carcinoma (HCC), p62 protein is highly enriched, implicating an impairment of degradative autophagy [9], the latter of which was shown to trigger miRNA dysregulation in Huntington disease [10]. In parallel, aberrant miRNA expression is one of the major characteristics of HCC, where tumor suppressor miRNAs are generally downregulated and oncogenic ones are upregulated [11].
MicroRNAs (miRNAs) are small, endogenous oligonucleotides, typically 18-24 nt long. They bind to the 3'-untranslated region (3'-UTR) of transcripts and repress the expression of target genes. Among the 80 screened miRNAs, the primate-specific miR-198 is the most downregulated in HCV-associated liver cancer and is classified as a potent tumor suppressor [12,13]. Of note, the expression of miR-198 gradually decreased with the progression of HCV-related liver diseases (including fibrosis, cirrhosis, dysplastic nodule and HCC) [12], and it was nearly undetectable in hepatoma cells, in contrast to its high enrichment in parenchymal liver tissues [13]. In parallel, miR-198 was released into the blood serum of HCV-infected patients at different stages [14], and it was also detected in the serum of patients with HCC or liver cirrhosis [15], pointing to a link between its cellular decrease and its release into the extracellular space. Importantly, EVs are the major vehicle for miRNA release [16][17][18]; however, few studies have addressed the mechanism of packaging miRNAs into vesicles.
Here, we describe an autophagy-associated vesicle release mechanism by which p62 selectively recruits miR-198 into autophagosome-derived vesicular fractions and secretes it into EVs. The vesicular miR-198 is taken up by recipient cells, in which it is functional, leading to altered expression of a targeted miR-198 sensor reporter. Furthermore, we demonstrate that p62 is a strong repressor of miR-198 accumulation, inversely regulating miR-198 levels. Importantly, we show that p62, or autophagy, massively antagonizes the suppression of HCC cell growth by miR-198. Thus, p62-dependent autophagy-mediated secretion emerges as an essential mechanism for miR-198 dysregulation.
Patient biopsies
All FFPE biopsy specimens, collected by the Institute of Pathology at the University Hospital of Cologne (Cologne, Germany), were utilized in accordance with the policy of the institutional review board of the hospital (18-052). Histology and immunohistology were performed on a blinded basis and confirmed by pathologists as described earlier [12].
Generation of miR-198 overexpression systems
For the Tet-On inducible expression system: HuH-7 and Hep3B cells were transfected with Jetprime transfection reagent (Polyplus, #114-15) with two plasmids harboring the Tet-On inducible expression system (Takara, #631,120). The pri-miR-198 encoding sequence was cloned downstream of the Tet-On promoter. The two plasmids were fused into one vector by molecular cloning. Cells were transfected with the fused plasmid encoding the Tet-On miR-198 expression cassette and then selected with G418 for 14 d. Monoclones were isolated by serial dilution in 96-well plates. Here, a mock plasmid without miR-198 expression was used as a control. For induction, stably transfected cells were treated with dox at a final concentration of 1 ng/ml.
For the pCMV stable expression system: cells were transfected with a plasmid carrying the miR-198 expression cassette under the CMV promoter and selected with G418 for 14 d.
Here, polyclonal cells were used for further experiments.
Generation of p62 knockout cell line by the CRISPR/Cas technology
For the generation of the p62 knockout cell line, single guide RNAs targeting the p62 locus were predicted using the CRISPOR online tool (http://crispor.tefor.net; Version 4.98) [19], ordered from Sigma-Aldrich, annealed, and cloned into pSpCas9(BB)-2A-puro (PX459) (Addgene, #62,988). The generated plasmids were transfected into HuH-7 cells using the Jetprime transfection reagent (Polyplus, #114-15) according to the manufacturer's instructions. As a negative control, the parental plasmid (without sgRNA) was used. Cells were treated with puromycin for 14 d and single clones were isolated by serial dilution in 96-well plates.
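As an illustration of the oligo design step, the Python sketch below converts a 20-nt protospacer into a pair of annealing oligos with the CACC/AAAC overhangs commonly used for BbsI-digested pSpCas9(BB) vectors such as PX459 (the widely used Zhang-lab convention). The guide sequence is a placeholder, not the sgRNA used in this study, and the overhang convention is an assumption about the cloning details.

COMP = str.maketrans("ACGT", "TGCA")

def px459_oligos(protospacer):
    # Forward oligo: CACC overhang + guide; reverse oligo: AAAC overhang
    # + reverse complement. Assumes the guide already starts with G, as
    # favored for U6-driven transcription.
    assert len(protospacer) == 20 and set(protospacer) <= set("ACGT")
    assert protospacer.startswith("G")
    fwd = "CACC" + protospacer
    rev = "AAAC" + protospacer.translate(COMP)[::-1]
    return fwd, rev

fwd, rev = px459_oligos("GTCATCGGGAGATAGTACCA")  # placeholder guide
print(fwd)  # CACCGTCATCGGGAGATAGTACCA
print(rev)  # AAACTGGTACTATCTCCCGATGAC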
RNA isolation
The isolation of total RNA from mammalian cells was performed using the TRIZOL method. Trizol (Sigma, #T9424) was used to lyse the cells; chloroform-isoamyl alcohol was added, and the mixture was incubated and centrifuged. The aqueous phase was transferred to a fresh tube and the RNA was precipitated using isopropanol. The RNA pellet was washed with ethanol, air-dried and resuspended in RNase-free water. RNA concentration was determined by A260 measurement using a spectrophotometer (NanoDrop, #ND-1000) and quality was measured by microcapillary electrophoresis (2100 BioAnalyser, Agilent Technologies).
For RNA isolation from supernatant or EVs, prior to isolation, 1 pmol/100 µl spike-in RNA was added directly to the sample and RNA was isolated using the TRIZOL method.
Quantitative real-time PCR (qPCR)
MicroRNA was analyzed by a two-step qPCR using the miScript Reverse Transcription Kit (Qiagen, #218,161), GoTaq® qPCR Master Mix (Promega, #A6001) and primer sets. Primers used for cDNA synthesis and real-time PCR were selected and purchased from the GeneGlobe Search Center (Qiagen). All steps were performed in triplicate and in agreement with the supplier's guidelines. After demonstrating that the primer sets exert equal and high efficiencies, relative expression was calculated by the ∆∆Ct method using the transcript levels of hypoxanthine-guanine phosphoribosyl transferase (HPRT) for normalization. Cellular miRNA levels were normalized using RNU6 as reference. For normalization of extracellular miRNA levels, SV40 miRNA and C. elegans miRNA (Qiagen) were added to samples and cell culture supernatants (2 pmol/200 µl) prior to the RNA isolation procedure.
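As a worked example of the ∆∆Ct step, the short Python sketch below computes a fold change from illustrative Ct values (not measured data), normalizing the target miRNA to a reference such as RNU6 or HPRT.

def delta_delta_ct(ct_target_sample, ct_ref_sample,
                   ct_target_control, ct_ref_control):
    # Fold change of target vs. reference, sample vs. control: 2^(-ddCt)
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** (-dd_ct)

# miR-198 (target) normalized to RNU6 (reference), dox-treated vs. mock
fold = delta_delta_ct(ct_target_sample=22.1, ct_ref_sample=18.0,
                      ct_target_control=27.6, ct_ref_control=18.2)
print(f"relative miR-198 expression: {fold:.1f}-fold")  # ~39-fold here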
Analysis of cell growth and proliferation
For analysis of cell proliferation, cells were plated in 96-well plates and treated with dox or transfected as indicated. An MTT test was performed to analyze cell growth using Cell Titer aqueous solution (Promega, #G3582). The Incucyte system (Sartorius, #SX5) was used to monitor cell proliferation over a duration of 3 d. Cell confluency was quantified every 20 min.
siRNA and miRNA mimic transfection
Cells were seeded in six-well plates in DMEM containing 10% FCS. Prior to transfection, confluency was kept at 30%. Cells were transfected with siRNA or miRNA mimics using Jetprime transfection reagent (Polyplus, #114-15) according to the manufacturer's instructions.
After transfection, cells were incubated with DMEM containing 10% FCS for 24 h and treated as indicated. Cells were lysed for RNA isolation using the TRIZOL method or in RIPA buffer for protein analysis.
Generation of p62 deletion constructs and site-directed mutagenesis
PCR was carried out using template plasmid DNA and the respective primers of the site-directed mutagenesis kit (NEB, #E0554), according to the manufacturer's instructions. Briefly, PCR was performed to generate the mutant plasmid. After digestion with KLD enzymes, it was transformed into competent E. coli cells. Antibiotic-selected colonies were picked and plasmids were extracted using the miniprep purification method. All miR-198 or p62 mutant plasmids were sequenced by the Sanger method and aligned with CLC Sequence Viewer v6.0.
Dual reporter luciferase assays
The psiCheck™-2 plasmid (Promega, #C8021) harbors two expression cassettes, driving expression of the Renilla and Firefly luciferase enzymes. Two miR-198-binding domains were cloned into the multicloning site (MCS) located between the Renilla luciferase gene and the poly A tail. Firefly luciferase expression serves as an internal transfection control.
HSC-T6 cells were plated in 12-well plates 1 day before transfection. After sensor plasmid transfection, cells were incubated for 6 h and the medium was changed to DMEM without FBS. Simultaneously, isolated vesicles were added to the growth medium for another 48 h of incubation. Cells were lysed, and Renilla and Firefly luciferase signals were analyzed with the dual luciferase reporter assay kit (Promega, #E1910) and a luminescence microplate reader (Berthold Technologies, #LB960) according to the manufacturer's instructions.
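To make the readout explicit, the Python sketch below normalizes Renilla (sensor) to Firefly (internal control) per well and compares EV-treated to control cells; all luminescence values are invented placeholders, not study data.

import statistics

def sensor_activity(renilla, firefly):
    # Per-well Renilla/Firefly ratios correct for transfection efficiency
    return [r / f for r, f in zip(renilla, firefly)]

# Recipient cells incubated with control EVs vs. miR-198-loaded EVs
control = sensor_activity(renilla=[9500, 10200, 9800],
                          firefly=[21000, 22500, 21300])
mir198_ev = sensor_activity(renilla=[4100, 3800, 4500],
                            firefly=[20800, 21900, 21500])

rel = statistics.mean(mir198_ev) / statistics.mean(control)
print(f"relative sensor activity: {rel:.2f}")  # < 1 indicates repression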
Immunofluorescence staining and immunohistochemistry
Cells were seeded on glass coverslips the day before being either treated with dox or transiently transfected with siRNA or plasmids. 24 h after treatment or transfection, cells were washed three times with PBS, fixed with methanol and permeabilized with 0.1% Triton X-100 in PBS for 10 min. After three washing steps with PBS, cells were blocked with 5% gelatin in PBS for 1 h at room temperature (RT). Cells were then stained with primary antibody in blocking solution at 4 °C overnight, followed by three washes with PBS. Cells were incubated with Alexa Fluor 594- or 488-conjugated secondary antibody at RT for 1 h and mounted with Mowiol medium containing DAPI (ROTH, #HP20.1). Cells were viewed under a confocal fluorescence microscope (Zeiss, #Meta 710).
Serial sections of FFPE liver biopsies from HCC patients were subjected to p62 immunohistochemistry using a p62 antibody (Santa Cruz, #sc-2438); immunodetection was developed using peroxidase-polymer-conjugated secondary antibodies and diaminobenzidine as substrate. The stained tissues were scanned with the Hamamatsu NanoZoomer Digital Pathology system.
RNA immunoprecipitation (RIP)
Cells were harvested in lysis buffer (25 mM Tris (pH 8), 150 mM NaCl, 5 mM EDTA and 0.5% NP-40) supplemented with protease inhibitor cocktails (Sigma, #4,693,159,001) and 1 mM PMSF (Sigma, #329-98-6) and washed with ice-cold PBS. For antibody conjugation, antibodies were coupled to Dynabeads by end-over-end rotation at RT, according to the manufacturer's instructions. Cell lysates were centrifuged for 15 min at 4 °C at 14,000 × g. The cleared supernatants were collected and protein concentrations were determined using the BCA protein assay kit (Thermo Fischer, #23,227). 300 μg of cell lysate was pre-cleared for 1 h with end-over-end rotation with 3 μg rabbit IgG (Cell Signaling Technology, #2729) conjugated to protein G-coupled Dynabeads (Thermo Fischer, #10003D) at 4 °C. Samples were placed on a magnetic stand, and the supernatants were collected and subsequently incubated with antibody-conjugated Dynabeads on the rotator at 4 °C for 4 h. After the 4 h incubation, the samples were placed on the magnet, washed three times with lysis buffer and another three times with ice-cold PBS. Samples were collected and directly mixed with TRIZOL for RNA isolation.
Western blot analysis
SDS-polyacrylamide gel electrophoresis (SDS-PAGE) was performed using the Bio-Rad Mini protein gel system. Briefly, protein samples were mixed with 4 × Laemmli buffer (Biorad, #1,610,747), denatured and directly loaded onto the gels. After electrophoresis and transfer, the membranes were incubated in blocking solution. After incubation with primary and secondary antibodies, membranes were developed using Pierce™ ECL Western Blotting Substrate (Thermo Scientific, #32,109) according to the manufacturer's instructions.
Vesicle preparations, negative staining and electron microscopy
Conditioned medium was collected from cell cultures. After serial centrifugation at 500 × g, 3000 × g and 12,000 × g, the medium was filtered through a 0.8 µm membrane. EVs were isolated by ultracentrifugation (Beckmann, #L8M) at 4 °C and 100,000 × g for 14 h. After washing with PBS, vesicle pellets were resuspended in PBS for immunoblotting and RNA isolation.
Alternatively, the prefiltered medium was subjected to EV isolation using the exoEasy Maxi kit (Qiagen, #76,064) following the manufacturer's instructions. The isolated vesicles were subsequently used for NTA vesicle tracking, miRNA analysis and negative staining followed by electron microscopy imaging. 20 µl of vesicle suspension was placed on carbon-coated grids and blotted on filter paper after washing with an aqueous solution of 2% uranyl acetate. Stained vesicles were viewed using an electron microscope (Zeiss, #EM902A).
Statistical analysis
Statistical analysis was performed using SPSS software, version 17. Bar graphs show means ± SEM. Student's t test was used. P values are indicated in the figure legends; p < 0.05 was considered statistically significant. GraphPad Prism v9.0 was used to create plots. Images were processed with ImageJ and LAS X. BioRender (www.biorender.com) was used to illustrate signaling pathways in the graphical abstract.
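As a minimal sketch of this analysis (not the authors' SPSS/Prism scripts), the following Python snippet computes means ± SEM per group and a Student's t test; all numerical values are hypothetical placeholders, not study data.

```python
# Minimal sketch (not the authors' SPSS/Prism scripts) of the analysis
# described above: mean ± SEM per group and a Student's t test.
# All values below are hypothetical placeholders, not study data.
import numpy as np
from scipy import stats

control = np.array([1.0, 1.2, 0.9, 1.1])   # e.g. relative miR-198 level, Tet-On control
dox     = np.array([3.8, 4.5, 4.1, 3.9])   # e.g. after dox induction

for name, x in (("control", control), ("dox", dox)):
    sem = x.std(ddof=1) / np.sqrt(x.size)  # SEM, as plotted in the bar graphs
    print(f"{name}: {x.mean():.2f} ± {sem:.2f} (SEM)")

t, p = stats.ttest_ind(control, dox)       # Student's t test (equal variances)
print(f"t = {t:.2f}, p = {p:.4f}; significant at p < 0.05: {p < 0.05}")
```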
miR-198 is preferentially secreted into supernatant via extracellular vesicles (EVs)
To study miR-198 dysregulation in HCC [12,13,21], we established a stable Tet-On inducible miR-198 expression system in the hepatoma cell lines HuH-7 and Hep3B. In Tet-On miR-198 cells, miR-198 expression is induced by doxycycline (dox) treatment. We analyzed miR-198 expression levels at different time intervals. Interestingly, the robust increase of cellular miR-198 expression was followed by a rapid decrease (Fig. 1A). Noteworthy, the time course analysis of cellular and extracellular miR-198 levels revealed that the cellular decrease was accompanied by a prominent increase in the supernatant (Fig. 1B), indicating that miR-198 is released from liver cancer cells. Since miR-198 is a crucial tumor suppressor miRNA, we hypothesized that, in response to high transgenic expression, other tumor suppressor miRNAs would also be immediately eliminated via secretion by liver cancer cells. Therefore, we stably overexpressed the tumor suppressors miR-29a and miR-145 in comparison to miR-21, a major oncogenic miRNA promoting cancer progression. As shown in Fig. S1A, B, stable overexpression of these miRNAs in hepatoma HuH-7 cells resulted in neither autophagy activation nor vesicular secretion of either the oncogenic miR-21 or the tumor suppressor miRNAs.
Since miRNAs are universally present in vesicles secreted from cells [22], we postulated that miR-198 is released via EVs. First, we collected conditioned medium from Tet-On miR-198 cells. Next, EVs were isolated by a column-affinity method. The vesicular fractions were characterized by immunoblotting using antibodies that recognize the EV marker proteins CD63, TSG101 (data not shown) and HSP70, with β-actin used as a negative marker (Fig. 1C). Nanoparticle tracking analysis (NTA) (Fig. 1D) identified EV sizes between 100 and 200 nm. Quantitative PCR (qPCR) of the vesicular fraction showed a more than 50-fold increase of miR-198 release from Tet-On miR-198 HuH-7 cells (Fig. 1E). Consistently, this EV accumulation was also found for Hep3B cells (Fig. S1C, D). We calculated the copy numbers of vesicular miR-198 secreted from cells and found that around fourfold of the overexpressed miR-198 was released into EVs (data not shown). The analysis of supernatant miR-198, comprising vesicular and soluble levels, showed that more than 95% of the released miR-198 is enclosed in EVs (Fig. S1E). These data show that the cellular miR-198 decrease is mainly due to vesicle release.
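For orientation, the following sketch shows how such qPCR-derived quantities are commonly computed, assuming the standard 2^-ΔΔCt method; all Ct values and copy numbers are hypothetical, as the paper does not report its raw numbers.

```python
# Illustrative sketch of the two quantities reported in this paragraph,
# assuming the standard 2^-ΔΔCt method; all Ct values and copy numbers
# are hypothetical, not the measured data.
ct_mir_ev,   ct_ref_ev   = 22.0, 18.0   # vesicular fraction, dox-induced cells
ct_mir_ctrl, ct_ref_ctrl = 28.5, 18.8   # vesicular fraction, control cells

ddct = (ct_mir_ev - ct_ref_ev) - (ct_mir_ctrl - ct_ref_ctrl)
print(f"vesicular miR-198 fold change: {2 ** -ddct:.0f}x")   # >50-fold as in Fig. 1E

vesicular, soluble = 9.6e6, 4.0e5       # hypothetical supernatant copy numbers
print(f"fraction enclosed in EVs: {vesicular / (vesicular + soluble):.0%}")  # >95%
```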
Because miR-198 is a prominent tumor suppressor [13,21], we analyzed apoptotic effects by Annexin V staining. Stable Tet-On miR-198 HuH-7 cells were first treated with dox for 0 h, 8 h, 24 h, and 48 h, stained with Alexa 594 fluorochrome-conjugated Annexin V and subsequently analyzed by flow cytometry. Here, we observed no obvious change in apoptosis upon miR-198 increase (Fig. 1F). Additionally, we ensured that miR-198 levels in the transgenic Tet-On stable cell systems did not exceed the physiological expression of liver parenchymal cells obtained from a healthy liver (Fig. S1F). Therefore, our miR-198 Tet-On expression system mimics endogenous expression under physiological conditions.
Hence, miR-198 in liver cancer cells shows a preference to be packaged in EVs and released into extracellular space.
miR-198 enhances autophagic activity in liver cancer cells
Previous studies have correlated the autophagic process with vesicle secretion [2,23], and autophagy contributes to the secretion of IL-1β [24]. To analyze autophagic effects of miR-198, we performed immunoblotting using antibodies against the autophagy marker proteins LC3 and p62. Corresponding to the cellular miR-198 'increase and decrease' pattern (Fig. 1A, B), we observed elevated expression of both proteins, p62 and LC3, in particular of the mature, autophagosome membrane-bound LC3-II form, in the first 8 h in response to dox-induced miR-198 expression (Fig. 2A, B). Likewise, the high p62 and LC3 expression levels returned to their original levels at 48 h (Fig. 2A, B), implicating a correlation between miR-198 and autophagy.
To investigate this correlation, we silenced ATG5 and ATG7 expression by RNAi to impede autophagic activity. siRNA-mediated knockdown was validated at both the transcriptional level (Fig. S2A, B) and the protein level (Fig. 2C). Interestingly, ATG5 KD and ATG7 KD led not only to a five- to sixfold increase of cellular miR-198 (Fig. 2E) but also to a more than 50% reduction of both the EV amount (Fig. S2C) and the vesicular miR-198 level (Fig. S2D). Since ATG5 and ATG7 knockdown disrupts the synthesis of LC3 protein [25], a strong decrease of LC3 protein expression was found (Fig. 2C). Importantly, p62 protein expression was higher in miR-198 expressing cells compared with non-expressing cells (Fig. 2C, D). These data implicate that miR-198 enhances autophagy and, conversely, that autophagy contributes to miR-198 secretion.
To validate that miR-198 enhances autophagy, we used two autophagy inhibitors, chloroquine (CQ) and bafilomycin A1 (BAF), which prevent autophagosome maturation and p62 disposal [26]. We validated the autophagy inhibition by the increased expression of both LC3 and p62 proteins (Fig. 2F, G). Corresponding to ATG5/7 KD, BAF and CQ treatment caused miR-198 accumulation in Tet-On miR-198 cells (Fig. 2H), confirming that autophagy leads to miR-198 reduction. Of note, in miR-198 expressing cells BAF raised p62 protein levels ~sevenfold, whereas in non-expressing cells only a ~threefold increase was found (Fig. 2F, G). Furthermore, we repeated the treatment of the Tet-On control and miR-198 cells with dox and BAF in Hep3B cells and validated the increased expression of LC3 and p62 protein by miR-198 (Fig. S2E), confirming an enhanced autophagic flux triggered by miR-198. These data reinforce that miR-198 induces autophagy and that autophagy in turn reduces miR-198 levels.
miR-198 is recruited to autophagosome-derived vesicular fractions
The findings that miR-198 is secreted into EVs and enhances autophagy prompted us to hypothesize that miR-198 is enclosed in autophagosomes. We used a Cy3-conjugated miR-198 mimic to study intracellular tracking and EV loading. To disrupt the entrapment of the miRNA mimic in endosomes, we used a polyethylenimine-based transfection reagent [27], which belongs to the cationic polymers. Cationic polymers induce a proton sponge mechanism leading to efficient si/miRNA endosomal escape [28-31]. Indeed, after staining of the cellular vesicles with a lipophilic carbocyanine dye, we observed no co-localization of scramble (sc) RNA or miR-29a with intracellular vesicles, validating the endosomal escape and efficient cytoplasmic release (Fig. S3A). Subsequently, we used the miR-198-Cy3 mimic and performed vesicle staining and immunostaining using antibodies against autophagic proteins. Strikingly, we observed strong co-localization of miR-198 with both intracellular vesicles (Fig. S3A) and p62 protein (Fig. 3A, B), whereas no such co-localization was found for scRNA and miR-29a. This indicates that miR-198 is recruited into p62 protein-related vesicle fractions.
To confirm the recruitment into p62 protein vesicular fractions, we performed in situ hybridization (ISH) combined with p62 immunostaining to simultaneously detect endogenous miR-198 and p62 protein. MiR-198 was detected by its antisense LNA probes, and p62 protein was labeled by its antibody. Consistently, we detected strong co-localization of p62 protein and miR-198 (Fig. 3C, D). As an autophagosome membrane marker, LC3 protein was also found to co-localize with miR-198 (Fig. 3E, F), suggesting that miR-198 is encapsulated into autophagosome-derived vesicular fractions.
To validate miR-198 in the vesicular fractions, we utilized antibodies against vesicular marker proteins, TSG101 and CD63 [32]. Interestingly, the same co-localization was observed (Fig. 3G-J). Therefore, our data illustrate the recruitment of miR-198 into autophagosome-derived vesicular fractions.
miR-198 is selectively loaded by p62 protein into vesicle fractions
Since the co-localization with vesicular proteins was only found for miR-198, we postulated that miR-198 is selectively recruited to vesicle fractions. At first, we performed site-directed mutagenesis to mutate the seed region of miR-198 (Fig. 4A); however, the autophagic proteins (LC3 and p62) were not increased upon mutant expression (Fig. 4B), indicating that high levels of the mutant do not enhance autophagy. These data point to the selective induction of autophagic effects by miR-198.
In the next step, we investigated which autophagic protein decreases miR-198 levels. We first analyzed the most important autophagic proteins, p62 and LC3. Surprisingly, we found that p62 decreased miR-198 levels by nearly 90%, whereas LC3 protein did not (Fig. 4C). Interestingly, the same effect was also found in non-tumor HEK293 cells (Fig. S3B), revealing an important role of p62 protein in the miR-198 decrease. Subsequently, we knocked out the p62 genetic locus in HuH-7 cells using CRISPR/Cas9 technology. The lack of p62 protein was confirmed by immunoblotting (Fig. 4D). Notably, we detected a more than 60-fold increase of endogenous miR-198 levels after p62 KO (Fig. 4D). Therefore, we identified p62 protein as an essential repressor of miR-198 levels; specifically, an inverse correlation between p62 and miR-198 expression was found in vitro.
Furthermore, we attempted to validate this correlation in vivo. Because miR-198 expression is primate-specific [33], formalin-fixed and paraffin-embedded (FFPE) liver tissues were collected from patients at different stages of liver disease, including cirrhosis, dysplastic nodules and HCC. We performed p62 protein-based immunohistochemistry and analyzed the corresponding miR-198 expression by qPCR. In agreement with a previous study [34], p62 protein expression was markedly upregulated in HCC (Fig. 4F); however, miR-198 expression was reduced by nearly 90% compared to healthy controls (Fig. 4G). Therefore, we identified an inverse correlation between miR-198 and p62 protein expression in vivo.
It is plausible that p62 protein represses miR-198 levels through EV secretion. To investigate this repressive effect, we isolated EVs from p62 KO cells and analyzed vesicular miR-198 release. Remarkably, p62 KO decreased the amount of released EVs by nearly 50% (Fig. S3C) and led to a more than 90% reduction of miR-198 secretion (Fig. S3D). These results show that p62 protein contributes to vesicle secretion to reduce miR-198 expression levels.
As p62 protein dissipates miR-198 by vesicle secretion, we hypothesized that p62 loads miR-198 in a protein-RNA interaction manner. Following this idea, we analyzed RNA-protein interaction by RNA immunoprecipitation followed by quantitative PCR (RIP-qPCR); strikingly, we found a large amount of miR-198 molecules bound to p62 protein (Fig. 4H). LC3 protein is an interacting partner of p62 protein [7], as validated by co-immunoprecipitation (Fig. S4A). RIP-qPCR showed that less than 10% of the p62-bound miR-198 was detected in the LC3 precipitates (Fig. 4H). Importantly, we confirmed the selective binding of p62 protein to miRNAs by expressing miR-29a, miR-21 and the miR-198 mutant under the same conditions. Here, no significant enrichment was found (Fig. 4I and Fig. S4B). This corresponds to the finding that p62 protein fails to reduce both miR-21 and miR-29a levels (Fig. S4C, D). Moreover, the selective interaction of p62 with miR-198 could also be seen by p62 immunostaining, showing strong co-localization for miR-198 but not for scRNA and miR-29a (Fig. 3C, D). Taken together, these data reveal that p62 protein selectively loads miR-198 into autophagosome-derived vesicular fractions.
The PB1 domain drives p62 self-oligomerization, and p62 protein utilizes its UBA domain to interact with (poly)ubiquitinated protein complexes, its TB domain with TRAF6 protein, its KIR region with Keap1 protein, and its LIR region with LC3 protein. To study the functional roles of the different p62 domains, we constructed expression vectors encoding p62 mutant variants generated by site-directed mutagenesis. Neither point mutation nor deletion of the PB1 domain affected p62 protein binding with miR-198 (Fig. S5A). Importantly, we found impaired co-localization of miR-198 with the p62 protein mutants (including p62-UBAdel, p62-LIRmut, p62-KIRdel, p62-TBdel and p62-ZZdel) (Fig. 5B and S5A), and the lowest co-localization was seen for p62-TBdel (Fig. S3D), suggesting that the TB domain might be an indispensable region for the miR-198 interaction.

Fig. 2 Tet-On control and miR-198 stable HuH-7 cells were treated with dox for 0, 8, 24 and 48 h. Cells were harvested and cell lysates were subjected to immunoblotting for the expression of the autophagy markers p62 and LC3 (A). Quantitative analysis of p62 and LC3 protein expression in Tet-On miR-198 stable HuH-7 cells was performed using ImageJ (B). Furthermore, cells were first transfected with scRNA, siATG5 or siATG7; 24 h post-transfection, cells were treated with dox for another 24 h. Immunoblotting was performed to analyze p62, LC3, ATG5 and ATG7 protein expression (C), statistical analysis of p62 protein expression was performed with ImageJ (D), and miR-198 expression was analyzed by qPCR (E). Moreover, cells were treated with dox for 8 h and then subjected to BAF or CQ for another 16 h. Cells were harvested for both RNA isolation and protein analysis: miR-198 expression was analyzed by qPCR (F), p62 and LC3 protein expression by immunoblotting (G), and statistical analysis of p62 protein expression by ImageJ (H). *p < 0.05, **p < 0.001

Moreover, miR-198 binding with the different p62 mutant variants was studied by RIP-PCR. Here, we observed increased cellular miR-198 levels after LIR and UBA domain deletion (Fig. 5C); p62-LIRdel and p62-UBAdel proteins were still bound with miR-198, albeit their co-localization was mildly decreased (Fig. 5D). However, p62-TBdel protein overexpression elevated miR-198 expression (Fig. 5C), but we did not detect any miR-198 in the precipitated protein complex (Fig. 5D).
These data indicate that the miR-198 interaction with p62 protein is collectively dependent on the integrity of its functional domains.
The p62 protein loads miR-198 into secreted EVs
We have discovered that p62 protein dissipates miR-198 into the extracellular space; however, the p62 protein level was reduced when miR-198 was secreted (Fig. 2A, B), pointing to the possibility that p62 protein is co-secreted into EVs.
To study p62 protein secretion during miR-198 release, we stably expressed GFP-fused p62 protein in p62 KO hepatoma cells and used Cy3-conjugated miR-198 mimics. EVs were isolated from conditioned media, resuspended in fresh cell growth media and directly incubated with recipient cells (Fig. 6A). To best mimic the cell-cell interaction, we chose rat hepatic stellate cells (HSC-T6) as recipient cells, a precursor of liver cancer-associated fibroblasts (CAFs) demonstrated to exert protumorigenic effects via crosstalk with cancer cells [37]. Since miR-198 is not expressed in murine species [33], the absence of endogenous miR-198 in recipient HSC-T6 cells does not complicate the analysis of vesicular miR-198 uptake. Notably, we observed both the internalization and the co-localization of p62 protein and miR-198 in recipient cells (Fig. 6B). Consistently, p62 secretion into EVs was confirmed by Western blotting, where a large amount of p62 protein was detected in miR-198-enriched EVs (Fig. 6C). However, LC3 protein was not secreted, indicating that the release of the p62/miR-198 complex did not occur by direct expulsion of autophagosomes; rather, fusion of the autophagosome with multivesicular bodies (MVBs) is possible, as characterized by the CD63 and TSG101 proteins (Fig. 6C). Thus, we have discovered that p62 chaperones miR-198 secretion into EVs.
To validate the cargo sequestration in EVs, we studied the function of EV transfer, focusing on changes in gene expression of recipient cells after miR-198 intake. We applied a dual luciferase reporter system to detect miR-198 using a miR-198 sensor construct. To this end, miR-198-binding sequences were inserted into the 3´UTR of the Renilla luciferase gene, with Firefly luciferase as internal control. HSC-T6 cells were transfected with the miR-198 sensor vector and subsequently treated with EVs. Consistent with mimic miR-198 transfection used as positive control, we found that treatment with miR-198-enriched EVs strongly inhibited Renilla luciferase reporter expression (Fig. 6D), revealing that vesicular miR-198 binds its target genes and changes Renilla luciferase gene expression. In contrast, direct application of naked miR-198 mimic did not cause inhibition (Fig. 6D), ruling out cell uptake of soluble, unpackaged miR-198. This corroborates that miR-198 is enclosed in EVs and that vesicular miR-198 can be delivered to recipient cells.
Taken together, miR-198 secretion into EVs is chaperoned by p62 protein, and miR-198 remains functional after transfer.
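As a sketch of how such dual-luciferase sensor data are commonly normalized (hypothetical luminescence counts; the study reports only the resulting inhibition), the Renilla signal carrying the miR-198 binding sites is divided by the Firefly internal control per well and then expressed relative to the sensor-only condition:

```python
# Sketch of dual-luciferase sensor normalization with hypothetical counts:
# Renilla (miR-198 binding sites in its 3'UTR) is divided by the Firefly
# internal control per well, then expressed relative to "sensor only".
raw = {  # condition: (Renilla, Firefly) luminescence
    "sensor only":           (9.0e5, 3.0e5),
    "miR-198 mimic":         (2.1e5, 2.8e5),  # positive control
    "miR-198 EVs":           (2.6e5, 3.1e5),  # vesicle treatment
    "naked mimic in medium": (8.8e5, 3.0e5),  # negative control, no uptake
}

baseline = raw["sensor only"][0] / raw["sensor only"][1]
for cond, (ren, ff) in raw.items():
    print(f"{cond}: relative sensor activity = {(ren / ff) / baseline:.2f}")
```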
Autophagy inhibition enhances miR-198-mediated tumor suppression
MiR-198 is a potent tumor suppressor and inhibits cell growth [13,21]. We analyzed vesicular miR-198 function by treating recipient cells with EVs secreted from Tet-On stable control and miR-198 cells. Here, we observed an inhibition of more than 90% of cell growth after treatment with miR-198 EVs (Fig. 7A). However, in the donor cells that secrete miR-198, we observed neither inhibition of cell viability (Fig. 7B) nor suppressed cell migration (data not shown). Considering that autophagy downregulates miR-198 levels, it is conceivable that autophagy disrupts miR-198's function as a tumor suppressor. To follow this path, we treated the cells with dox to elevate miR-198 expression and simultaneously with siATG7 to inhibit autophagy. In the mock controls, no obvious growth inhibition was found; however, we detected a strong growth inhibition of Tet-On miR-198 cells over 2 days (Fig. 7C), indicating that autophagy is one barrier to miR-198-mediated growth inhibition.
To confirm the functional inhibitory role of autophagy on miR-198, we used the Incucyte system to analyze cell proliferation. Real-time monitoring of cell proliferation over 3 days confirmed that cell growth was not inhibited in response to dox-induced elevation of the tumor suppressor miR-198, similar to the mock control (Fig. 7D). However, in combination with p62 KO, miR-198 led to a rapid stagnation of cell growth (Fig. 7E). Notably, we observed partially restored cell proliferation upon p62 compensation (Fig. 7E), confirming that autophagy impedes the miR-198 effect on cell proliferation. Therefore, we conclude that autophagy strongly impairs the tumor suppressive function of miR-198.
Discussion
In this study, we provide compelling evidence that HCC cells control tumor suppressor miR-198 expression by autophagy-mediated vesicle release involving the autophagy receptor protein SQSTM1/p62. Thus, we can assign a mechanism-based function to autophagy-associated EV release for dissipating the tumor suppressor miR-198. Our data also show that disruption of autophagy greatly restores cellular miR-198 levels and enhances miR-198-mediated tumor suppression, which has strong implications for miRNA therapy against HCC.
We show that miR-198 in liver cancer cells is chaperoned by p62 protein and that this process is mediated by autophagy-associated EV release. Previous studies have shown that p62 protein is degraded by fusion of the autophagosome and lysosome [7,8]. However, our results reveal for the first time that the autophagy induced by miR-198 directs p62 protein into EVs, thereby providing a new avenue for cellular p62 protein disposal.
Our gain- and loss-of-function experiments demonstrate that p62 strongly represses miR-198 levels. This conclusion is corroborated by the findings that miR-198 expression is elevated by p62 knockout and mutation. Noteworthy, we show for the first time that p62 protein selectively controls tumor suppressor miR-198 levels, maintaining its low cellular presence via autophagy-associated secretion. It is possible that the high expression of p62 in tumor cells [34] triggers autophagy-associated vesicular miR-198 release. Indeed, after p62 knockout we observed increased intracellular miR-198 levels, decreased vesicular miR-198 levels and a lower number of released vesicles; in contrast, LC3 protein seems to inhibit miR-198 release from HCC cells. This discrepancy coincides with their opposite outcomes in patient survival and recurrence, where LC3 improves prognosis [38,39] but p62 exacerbates it [34]. This disparity was also found in our study, where LC3 protein showed less direct interaction with miR-198 than p62. Therefore, the autophagic secretion of miR-198 depends more on p62 than on LC3.
We found large amounts of miR-198 in p62 protein immunoprecipitates and identified their co-localization not only in liver cancer cells, but also in recipient cells treated with miR-198-enriched EVs. Since mutation of p62 protein only impaired, but did not block, the interaction with miR-198, miR-198 might not directly interact with p62 protein. Instead, we would envisage p62 as a 'magnet' capturing a protein complex in which miR-198 is conglomerated. Importantly, the role of p62 in complexing and sorting out superfluous miRNA seems to be miRNA-specific, because scRNA, miR-29a and the miR-198 mutant did not interact with p62 protein. Notably, the sequence motif of a miRNA was shown to determine its sorting into exosomes [40]. Consistently, when we mutated the miR-198 seed region, its efficient interaction with p62 protein was disrupted. Moreover, protein-protein interaction mechanisms could contribute to miR-198 recognition: mutation of the UBA, LIR and TB domains of p62 protein impaired its interaction with miR-198. The ZZ domain of p62 has been shown to bind small non-coding RNA [41], but in our study p62-ZZdel protein still interacted with miR-198; we therefore postulate that the interaction of small non-coding RNAs with p62 protein varies among different RNA species and is a highly selective process.
While our experiments explore autophagy, miRNA secretion, and vesicle uptake, we have not yet tested other non-coding RNAs and proteins that are secreted by autophagy. It will be informative to investigate whether p62-dependent autophagy also regulates other RNAs and proteins via vesicular secretion. It will also be relevant to explore the molecular mechanisms that inherently induce p62 overexpression, and the exact EV subsets for miRNA sequestration that are destined for delivery to specific cell types.
Here, we provide evidence that autophagy specifies a miRNA-protein complex into EVs. Previous work has focused mainly on interconnections between autophagy and cargo loading. For example, autophagy conducts the recruitment of the miRNA processing enzyme DICER and the major miRNA effector AGO2 as miRNA-free entities for degradation [42]. In addition, a miRNA itself, miR-224, was recruited to autophagosomes of HCC cells upon autophagy restoration [43]. Because autophagosomes can alternatively fuse with late endosomes, EV biogenesis and autophagy are proposed to be functionally connected [44]. However, until now, a limited number of studies have elucidated the vesicular secretion of autophagosome-loaded cargos. In our study, we delineate a process by which p62 protein directs miR-198 into autophagosomes, from which it is further secreted as EV-enclosed miRNA. The regulation of miRNA function by a protein, as seen for p62, could present a general principle of miRNA and protein control, complementing well-recognized forms of regulation such as post-translational modification and protein-protein interaction.

Fig. 6 p62 loads miR-198 into secreted EVs. A Workflow of p62 protein chaperoning miR-198 secretion in EVs: supernatants were collected from HuH-7 cells, transiently or stably transfected with miR-198, and centrifuged to eliminate dead cells and cell debris. The vesicles were isolated by ultracentrifugation. Pelleted vesicles were resuspended either in PBS for protein component analysis or in DMEM medium for vesicle uptake assays, analyzed by dual luciferase reporter assay and microscopic imaging. B Representative fluorescence confocal microscopy Z-stack images of recipient hepatic stellate cells (HSC-T6 cells). HuH-7 p62KO cells were co-transfected with miR-198-Cy3 and a p62-GFP encoding plasmid. 24 h after transfection, the medium was changed to fresh DMEM without FBS supplement and cells were incubated for another 48 h. EVs were isolated from the conditioned medium by ultracentrifugation. The pelleted EVs were treated with RNase A and resuspended in DMEM medium before treating HSC-T6 cells. 24 h after treatment, HSC-T6 cells were fixed with 4% formaldehyde, stained with DAPI and viewed under a confocal microscope. Z-stack images were acquired at different horizontal distances of the cells. Blue, DNA; red, miR-198; green, p62 protein. Scale bar: 15 µm. C Immunoblotting analysis of vesicle-entrapped proteins. Tet-On control and Tet-On miR-198 stable HuH-7 cells were treated with dox and incubated in DMEM medium (without FBS) for 48 h. Cell supernatants were collected and centrifuged to eliminate dead cells and cell debris. The vesicles were isolated by ultracentrifugation. Pelleted vesicles were resuspended in PBS for Western blot analysis using antibodies against p62, β-actin, LC3, TSG101 and CD63. D Gene expression analysis of vesicle uptake into hepatic stellate cells (HSC-T6 cells). Vesicle preparation: Tet-On control and Tet-On miR-198 stable HuH-7 cells were treated with dox and incubated in DMEM medium (without FBS) for 48 h; cell supernatants were collected and centrifuged to eliminate dead cells and cell debris. The vesicles were isolated by ultracentrifugation, resuspended in PBS and subjected to RNase A treatment. Finally, the vesicles were ultracentrifuged and resuspended in DMEM for further use. Treatment of recipient cells: HSC-T6 cells were transiently transfected with the miR-198 sensor plasmid. 24 h after transfection, the medium was changed to the vesicle-DMEM mixture (described above). Mimic miR-198 transfection into HSC-T6 cells was used as positive control; as negative control, miR-198 mimic was directly added into the culture medium. Cells were incubated for another 24 h. Finally, cells were harvested for measurement of Renilla and Firefly luciferase enzyme activity. NS: no statistical significance. **p < 0.001

To note, miRNA expression profiles of HCC tissue are distinct from those of non-tumor tissues; most anti-proliferative miRNAs are found downregulated. Tumor suppressor miR-198 was previously detected in exosomes secreted from T-lymphocytes [40]. However, mechanisms regarding vesicle release amongst different cell types are de facto diversified, and vesicular miRNA secretion from hepatoma cells remains to be investigated. Here, we have demonstrated that the autophagy receptor protein SQSTM1/p62 acts as a sponge carrier that absorbs tumor suppressor miRNAs, which are shed from the cell membrane in tumor-derived EVs. EVs from tumor cells are packaged with mRNAs and miRNAs [16] that markedly influence the tumor microenvironment or even enhance disease progression. We observed that the miR-198-p62 complex was loaded into EVs, secreted from hepatoma cells and transferable to hepatic stellate cells (HSCs), the precursors of cancer-associated fibroblasts (CAFs). Using luciferase reporters as sensors for miR-198, we show that the uptaken miR-198 is functional, inhibiting expression of the reporter which harbours the miR-198 target sequence. It has been shown that tumor-surrounding CAF cells undergo increased p62 protein levels [45], indicating the possibility of p62 influx into CAF cells. As well, miR-198 is detected in serum samples of HCC patients [15]. These data all point to the involvement of not only paracrine uptake but also endocrine stimulation for vesicle uptake in vivo.

Fig. 7 Tet-On control and miR-198 stable HuH-7 cells were treated with dox for 2 d, 4 d and 7 d and tested by the MTT assay. Cell viability was analyzed at 492 nm wavelength with a Multiscan Ascent photometer. NS: no statistical significance. Tet-On control and miR-198 stable HuH-7 cells were first transfected with scRNA or siATG7 and then treated with dox for 24 h. Cell viability (C) was analyzed using the same method as above. Values are presented as mean ± SEM. *p < 0.05. Tet-On control and miR-198 stable HuH-7 cells were treated with dox for 3 d and cell proliferation was analyzed with the Incucyte cell proliferation system (D). Tet-On control and miR-198 stable HuH-7 p62KO cells were treated with dox for 3 d and analyzed by the Incucyte system. For p62 compensation, cells were transfected with a plasmid encoding p62; 24 h after transfection, cells were treated with dox for 3 d and cell proliferation was analyzed by the Incucyte system (E). Values are presented as mean ± SEM. **p < 0.001, NS: no significance
Future work will be needed to study miRNAs embedded in EVs derived from hepatoma cells and to examine different cell types in the context of vesicle uptake in vivo, as tissues such as liver and spleen are composed of heterogeneous populations of cells that may differ in their capability to accept vesicles. Therefore, the identification of the potential recipient cell types of HCC EVs is a main research interest for understanding how liver cancer cells utilize self-secreted vesicles to grow and proliferate. Our study provides first evidence of the mechanism of miRNA-protein complex secretion and its uptake by recipient cells, which could represent a general principle supplementing the well-recognized autophagy-mediated elimination of proteins and tumor suppressor miRNAs. We predict that the secretory principle employed by p62 and miR-198 will be found to be more widespread in biology, especially among tumor cells.
"year": 2022,
"sha1": "74538440c80f51b8e4a604038a9d8ef54d731ed5",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s13577-022-00765-7.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "448119649d1b147c2af6fc218228b0caab71ecca",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Phase II Study of ENZAlutamide Combined With Hypofractionated Radiation Therapy (ENZART) for Localized Intermediate Risk Prostate Cancer
Background
Intermediate-risk prostate cancer (PCa) is usually treated with a combination of external beam radiation therapy (EBRT) and a short course of androgen deprivation therapy (ADT). ADT is associated with multiple side effects, including weight gain, loss of libido, and hot flashes. In contrast, antiandrogen monotherapy is generally better tolerated, in spite of higher rates of gynecomastia.
Objective
This study assessed the effectiveness of enzalutamide monotherapy combined with hypofractionated EBRT (Hypo-EBRT) for treating intermediate-risk prostate cancer.
Methods
This trial was a multicenter, open-label phase II study of 6 months of enzalutamide monotherapy combined with Hypo-EBRT for intermediate-risk prostate cancer. Hypo-EBRT was initiated 8-12 weeks after initiating enzalutamide. The primary endpoint was a PSA decline >80% measured at the 25th week of enzalutamide administration. Secondary endpoints included assessment of toxicity, changes in anthropomorphic body measurements, sexual hormones, and metabolic changes.
Results
Sixty-two patients were included in the study from January 2018 to February 2020. A PSA decline of >80% was observed in all evaluable patients at the end of enzalutamide treatment, and 92% achieved PSA values under 0.1 ng/ml. All patients remained in PSA response (>80% reduction of the initial values) 6 months after the end of enzalutamide treatment. The most frequent adverse events were hypertension, asthenia, and gynecomastia. There were no significant changes in bone density, body mass index (BMI), or patient-reported outcomes (PROs).
Conclusion
Enzalutamide monotherapy is very effective along with Hypo-EBRT in reducing PSA levels in patients with intermediate-risk prostate cancer. Longer follow-up is needed to confirm the potential use of this combination in future randomized trials.
INTRODUCTION
Radiation therapy (RT) is the standard treatment for localized prostate cancer patients (1). When external beam radiotherapy (EBRT) is used, conventionally fractionated external beam RT (cEBRT) with total escalated doses of 75.6-79.2 Gy (2) is usually prescribed.
Due to the favorable α/β ratio of prostate cancer compared with the surrounding normal tissues (3), the use of hypofractionated schedules is of interest. For patients, hypofractionated EBRT (Hypo-EBRT) is very convenient, as it reduces treatment time, improves access to treatment, and lowers treatment cost (4). In non-inferiority randomized trials, Hypo-EBRT administered over 4 to 5 weeks resulted in a disease control rate equivalent to that of escalated cEBRT administered over 8 weeks, with similar acute and late toxicity rates (5-7).
Androgen deprivation therapy (ADT) is usually combined as adjuvant treatment with EBRT in localized and locally advanced prostate cancer (8). Although it is effective in reducing tumor mass and prostate-specific antigen (PSA) levels (9), limitations to the use of adjuvant ADT in these localized tumors derive mainly from its short- and long-term adverse effects (AEs), which may worsen patient quality of life or be potentially harmful (10-12).
Antiandrogens are considered an alternative to ADT along with EBRT. The first-generation antiandrogen bicalutamide used in monotherapy along with cEBRT improves survival in prostate cancer patients in very unfavorable situations without causing the side effects induced by testosterone suppression (13-15).
Enzalutamide is a second-generation oral androgen receptor (AR) inhibitor (16) that, unlike classical antiandrogens, blocks different steps in the AR signaling pathway (17,18). In castration-resistant metastatic patients, enzalutamide resulted in better clinical outcomes and reduced toxicity compared with bicalutamide and ADT (19). Enzalutamide plus ADT is approved for treating adult men with castration-sensitive or castration-resistant metastatic prostate cancer (20-23).
The possibility of using enzalutamide as monotherapy has been extensively studied by Tombal et al. (24-26) as a first treatment in patients with localized and metastatic prostate cancer. They chose the PSA response (≥80% PSA decline from pretreatment levels) to assess the activity of enzalutamide, according to previous results from prospective studies with the LHRH antagonist degarelix (27). Enzalutamide has a better tolerance profile than LH-RH agonists in terms of body mass, lipid profile, and bone density. The quality of life of the patients did not change with treatment and, from the sexual perspective, the results were similar to those of bicalutamide. As testosterone levels remain elevated during enzalutamide treatment, sexual toxicity is lower than that observed with ADT, but there was a higher rate of breast-related disorders (24-26).
Therefore, enzalutamide monotherapy in men with previously untreated prostate cancer produces an adequate level of disease suppression, as measured by a long and sustained decrease in PSA, with less toxicity than LH-RH agonists (26).
Thus, if localized intermediate-risk prostate cancer is to be managed with a combination of radiotherapy and hormonal therapy (28), the possibility of improving the toxicity profile of this treatment by using enzalutamide monotherapy would be of great benefit to these patients with a good prognosis, who should not suffer bothersome undesirable effects.
Enzalutamide monotherapy radiosensitizes prostate cancer cells (29) by suppressing DNA repair mechanisms, mainly non-homologous end-joining repair mediated by DNA-PKcs proteins (30). This sensitizing effect was also demonstrated in androgen-sensitive and androgen-resistant prostate cancer cell lines, animal models, and xenografts of castration-resistant human prostate cancers (31). Enzalutamide provides stronger radiosensitization than ADT (32); furthermore, this effect is more relevant when doses higher than 2 Gy per fraction are used (29) and when enzalutamide is administered concurrently with RT (31). This improved effect of concomitant-adjuvant hormonal therapy with radiotherapy has also been observed for standard ADT in the clinical setting (33).
Therefore, if we consider the use of enzalutamide along with radiotherapy for localized prostate cancer, several questions still need to be answered. First, the immediate acute tumor response, estimated by PSA decline, of enzalutamide combined with the new standard of modern Hypo-EBRT is unknown; such a Hypo-EBRT schedule would favor the radiosensitization induced by enzalutamide and improve tumor response. Second, there is no evidence about the possibility of a durable PSA response after cessation of enzalutamide treatment. This issue is of particular interest, as it would encourage the development of future trials comparing standard ADT with enzalutamide monotherapy in this particular setting. Third, the toxicity of such a combination and the quality of life of prostate cancer patients are still unknown.
Based on these clinical and biological findings, we analyze for the first time the use of modern Hypo-EBRT along with concurrent enzalutamide monotherapy as treatment for localized intermediate-risk prostate cancer.
PATIENTS AND METHODS
This open-label, single-arm, phase 2 study was conducted across 8 recruiting sites in Spain. Patients were enrolled if they were aged 18 years or older; had histologically confirmed localized (after diagnostic work-up, namely pelvic MRI and/or abdominal CT scan and bone scan) intermediate-risk prostatic adenocarcinoma (defined as PSA 10-20 ng/ml and/or T2b-c and/or Gleason score 7; if all three factors were present, less than 50% of cores were required to be positive); and had an Eastern Cooperative Oncology Group (ECOG) score of 0-1, adequate renal/liver function, and normal blood counts.
Exclusion criteria included previous or current hormonal manipulation, prior treatment for prostate cancer, previous radiation therapy for a pelvic tumor, history of cancer in the last 5 years, history of seizure or treatment with antiepileptic drugs. The full inclusion/exclusion criteria are given in Supplementary Material Table 1.
All patients provided written informed consent. This study was conducted in accordance with the Declaration of Helsinki and the International Conference on Harmonisation Harmonised Tripartite Guideline for Good Clinical Practice. The protocol was approved by the local institutional review boards of each center, independent ethics committees, and the competent government authority in Spain. The trial was registered at ClinicalTrials.gov, NCT01302041.
Procedures
After a 4-week screening period, participants were given a study drug dosing diary for each of the 6 treatment cycles. Each treatment cycle lasted 28 days (4 weeks), during which the participant received the study drug enzalutamide orally. Starting on day 1, all patients ingested enzalutamide 160 mg/day at the same time each day, without breaks (except as outlined for toxicity), for 6 cycles of 28 ± 3 days. Dose reduction of enzalutamide to 120 mg/day was allowed with the approval of the principal investigator of the study. Patients were instructed to return all unused capsules at each study visit to assess compliance and received the study drug every 28 ± 3 days for 6 cycles.
In patients suffering grade 3 or greater toxic side effects that could not be managed by standard medical intervention, treatment was interrupted until these adverse effects improved. Patients could then restart on a reduced enzalutamide dose with the written approval of the principal investigator of the study.
Between 8 and 12 weeks after starting enzalutamide, patients were treated with Hypo-EBRT over 5.5 weeks, administered on an outpatient basis. Hypo-EBRT was administered under image-guided radiation therapy (IGRT) technology: participating centers were required to routinely use IGRT in these patients, either by cone-beam CT and/or fiducial markers placed within the prostate. The external beam radiation dose was normalized such that exactly 98% of the PTV (planning target volume) received the prescription dose and was scored as per protocol. The maximum allowable dose within the PTV was 107% of the prescribed dose to a volume of at least 0.03 cc; the minimum allowable dose within the PTV was >95% of the prescribed dose to a volume of at least 0.03 cc. The EBRT/IGRT protocol delivered a total dose to the PTV (CTV including the prostate and the proximal seminal vesicles, with a 4 mm posterior margin, 8 mm lateral margin, and 5 mm margin in all other directions) of 70 Gy in 28 fractions of 2.5 Gy each. The EQD2 (considering an alpha/beta ratio of 1.5 Gy) was 80 Gy (34).
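For reference, a brief sketch of the EQD2 arithmetic implied here, using the standard linear-quadratic conversion with alpha/beta = 1.5 Gy as stated in the protocol (the report gives only the resulting 80 Gy):

```python
# Sketch of the EQD2 arithmetic implied here, using the standard
# linear-quadratic conversion EQD2 = n*d * (d + a/b) / (2 + a/b)
# with alpha/beta = 1.5 Gy for prostate cancer, as stated in the protocol.
def eqd2(n_fractions: int, dose_per_fraction: float, alpha_beta: float = 1.5) -> float:
    total_dose = n_fractions * dose_per_fraction
    return total_dose * (dose_per_fraction + alpha_beta) / (2.0 + alpha_beta)

print(eqd2(28, 2.5))  # 80.0 Gy  -> this trial's Hypo-EBRT schedule
print(eqd2(44, 1.8))  # ~74.7 Gy -> the conventional 79.2 Gy / 1.8 Gy comparator
```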
Blood samples to establish PSA and circulating hormone levels were collected at screening, at the 4th and 25th weeks, and at 1, 3, and 6 months after the end of enzalutamide. All patients had monthly clinical visits during treatment and safety follow-up visits at 1, 3, and 6 months after their last dose of enzalutamide; adverse events were recorded and graded according to the National Cancer Institute Common Terminology Criteria for Adverse Events (CTCAE), version 4.0.
Blood samples assessing renal and liver function and blood counts were taken at screening and monthly until the end of enzalutamide administration. Fasting serum lipids and fasting glucose levels were assessed on samples collected on day 1 and at the 12th and 25th weeks.
Changes in bone mineral density were assessed by dual-energy X-ray absorptiometry scan on day 1 and at the 25th week. HRQoL was assessed with the self-administered EORTC QLQ-C30 and EORTC QLQ-PR25 instruments (35,36), completed by patients on day 1, at the 12th and 25th weeks, and at the safety follow-up visit 1 month after the end of enzalutamide.
Outcomes
The primary outcome was PSA response, defined as a decline from baseline in PSA level of 80% or greater at the 25th week, based on the PSA response observed in registration trials of enzalutamide and other hormonal treatments (24,27). The enzalutamide-induced PSA decline at 1, 3, and 6 months after cessation of enzalutamide treatment was also considered a relevant treatment response marker to assess the activity of enzalutamide combined with hypofractionated radiotherapy. Secondary outcomes were changes from baseline in hormone levels, bone mineral density, fasting serum lipids, and quality of life. Safety outcomes included the frequency and severity of adverse events as scored by the CTCAE 4.0.
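As an illustration, the response definition reduces to a simple threshold on the relative PSA decline from baseline; the helper below is a hypothetical sketch, not trial software:

```python
# Illustrative helper (hypothetical, not trial software): the primary
# outcome reduces to a threshold on the relative PSA decline from baseline.
def psa_response(baseline_psa: float, psa: float, threshold: float = 0.80) -> bool:
    """True if the decline from baseline is >= threshold (80% by default)."""
    return (baseline_psa - psa) / baseline_psa >= threshold

print(psa_response(7.61, 0.04))  # True  (mean values later reported at week 25)
print(psa_response(7.61, 2.00))  # False (only a ~74% decline)
```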
Statistical Analysis
The primary activity outcome was the proportion of patients with a PSA response at the 25th week since the start of enzalutamide and at 1, 3, and 6 months after the cessation of enzalutamide treatment. This was calculated as the number of patients with a PSA response (≥80% PSA decline from baseline) at the prespecified time-points, divided by the number of patients who started treatment, and presented as the percentage of patients responding. Patients who discontinued enzalutamide treatment were included in the intention-to-treat analysis. Secondary and exploratory outcomes are summarized descriptively.
The primary endpoint for this trial was to assess the number of patients with a reduction of 80% or more in baseline PSA at the 25th week. We assumed a null-hypothesis response rate of 70% of patients achieving PSA declines of over 80%, and a positive hypothesis of more than 85% of patients achieving such a decline at the 25th week. We aimed for a "maximum" recruiting scenario, calculating the target evaluable sample size for an alpha = 0.05 and beta = 0.1 error to be 66 patients, resulting in a target recruiting size of 70 cases if a 5% patient loss was considered. A second, "standard" calculation of the target evaluable sample size, for an alpha = 0.05 and beta = 0.2 error, resulted in 47 evaluable patients to be recruited, reaching 50 patients if a 5% loss was considered.
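The stated sample sizes are consistent with a one-sample proportion calculation under a normal approximation with a one-sided alpha; the sketch below reproduces the 66-patient figure (exact binomial designs such as A'Hern's can yield slightly different numbers, which may explain the 47):

```python
# Sketch of a one-sample proportion sample-size calculation for the two
# recruitment scenarios (p0 = 0.70 under H0, p1 = 0.85 under H1), using
# the normal approximation with a one-sided alpha.
from math import ceil, sqrt
from scipy.stats import norm

def n_one_proportion(p0: float, p1: float, alpha: float, power: float) -> int:
    za, zb = norm.ppf(1 - alpha), norm.ppf(power)
    num = za * sqrt(p0 * (1 - p0)) + zb * sqrt(p1 * (1 - p1))
    return ceil((num / (p1 - p0)) ** 2)

print(n_one_proportion(0.70, 0.85, alpha=0.05, power=0.90))  # 66 -> "maximum" scenario
print(n_one_proportion(0.70, 0.85, alpha=0.05, power=0.80))  # 50, close to the "standard" 47
```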
Safety analyses were performed on all patients who had taken at least one dose of the study drug. All reported toxicities were summarized as acute toxicity, regardless of attribution, by maximum grade, and were sorted by the number of patients experiencing the toxicity during the enzalutamide and Hypo-EBRT treatments and until 1 month post-treatment. Late toxicity was recorded 6 months after cessation of enzalutamide.
Activity analysis was performed on an intention-to-treat basis, including patients who had taken at least one dose of study drug and had both a pretreatment and at least one post-treatment activity evaluation.
The mean, standard deviation, range, and 95% confidence interval of the mean were calculated to describe quantitative variables. The Shapiro-Wilk (n ≤ 50) or Kolmogorov-Smirnov (n > 50) test was used to verify the normality of quantitative variables, depending on the sample size. Qualitative variables were described by absolute frequency, relative frequency, and the 95% CI calculated using the Clopper-Pearson method. When the sample size was greater than 30, Student's t-test for paired data was used to compare numerical variables at two different time points; otherwise, and when the variables did not follow a normal distribution, the Wilcoxon test for paired data was used. A p-value of less than 0.05 was considered significant. Statistical analyses were performed with R, version 4.1.1 (R Core Team, 2021) (37).
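Although the analysis was run in R, the exact (Clopper-Pearson) confidence interval mentioned here can be sketched from beta-distribution quantiles (illustrative Python, not the study code):

```python
# Sketch of the exact (Clopper-Pearson) 95% CI used for the qualitative
# variables, computed from beta-distribution quantiles.
from scipy.stats import beta

def clopper_pearson(k: int, n: int, conf: float = 0.95) -> tuple[float, float]:
    a = 1.0 - conf
    lo = beta.ppf(a / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - a / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

# e.g. 56 of 56 evaluable patients with a PSA decline over 80%:
print(clopper_pearson(56, 56))  # (~0.936, 1.0)
```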
Role of the Funding Source
This is an independent academic study supported by an unrestricted educational grant from Astellas. The authors performed the protocol design, data analysis, interpretation, and preparation of this report. Data analysis was performed by an independent statistician (JMGM). All authors had access to the study data. All decisions relating to the manuscript writing and content were made jointly by the authors, including the final decision to submit it for publication.
Patient's Characteristics
Sixty-two out of the maximum recruiting scenario of 70 patients were finally included in the present study, from 16 January 2018 to 4 February 2020. The study was closed earlier than the date expected to achieve the maximum recruiting schedule (31 March 2020) due to the COVID-19 pandemic, which strongly affected Spain. The number of recruited patients at that time already exceeded the standard calculated sample size for an alpha = 0.05 and beta = 0.2 error.
Four patients were screening failures, and one patient retracted consent after the screening period. Patient and tumor characteristics for the 57 patients who started enzalutamide treatment are described in Table 1.
Protocol Compliance and Safety
One of the 57 patients taking enzalutamide retracted consent to participate in the study at the 4th week due to general discomfort unrelated to any objective toxicity. Therefore, 56 patients were finally included in the study (Figure 1).
During enzalutamide treatment, three severe adverse effects were reported. One was severe hepatic toxicity (grade 4) related to enzalutamide, presenting as a rise in liver enzymes at the 7th week, which normalized after complete and definitive enzalutamide cessation. The responsible investigator considered this adverse effect related to enzalutamide; nevertheless, the patient continued with the study program evaluations and tests. Two patients suffered severe adverse effects unrelated to enzalutamide: one had sepsis after fiducial implantation in the prostate for IGRT in the 2nd week, and one suffered a stroke in the 9th week. The latter patient had a previous history of hypertension, and the event was not considered related to enzalutamide treatment by the responsible investigator. Both patients completed enzalutamide treatment, but with a dose reduction to 120 mg/day per protocol in the hypertensive patient.
One patient abandoned enzalutamide treatment at week 11 due to general discomfort unrelated to any objective toxicity; the patient agreed to continue study follow-up. Two patients from the same center misunderstood the trial instructions and stopped enzalutamide during the 5 weeks of radiotherapy treatment.
Radiotherapy was administered as scheduled (total dose of 70 Gy in 28 fractions, 2.5 Gy per fraction) to all 56 patients. All 56 cases but one (a patient who started radiotherapy in the 5th week) started radiotherapy between the 8th and the 13th week as scheduled. Radiotherapy was completed in all cases, with a total treatment time of 41.63 ± 3.30 days (95% CI 40.75-42.51). Dosimetry recommendations were well followed; PTV coverage and OAR constraints were achieved in most cases (Supplementary Material Table 2).
Acute toxicity was recorded as the maximum toxicity observed during treatment and until one month after cessation of enzalutamide (Table 2). Two patients, as described above, presented grade 4 toxicity (hypertensive in one case, liver enzyme elevation in the other). Severe grade 3 acute systemic toxicity was related to hypertension (systolic in all cases) in 19/56 patients (33.93%). Grade ≥2 urinary and gastrointestinal toxicity was present in 18/56 (32.14%) and 5/56 (8.9%) patients, respectively. Common mild toxicities (about one-third of cases) included asthenia, breast pain, gynecomastia, urinary pain, and pollakiuria (Table 3). Other acute general, hormonally related, and gastrointestinal toxicities were also mild and uncommon.
Late toxicity was recorded 6 months after enzalutamide cessation. Most of the severe urinary and hypertensive toxicity had disappeared; late toxicity was mainly related to hormonally derived symptoms such as breast pain and gynecomastia. Severe grade 3 toxicity was present in 2 patients, one with urinary pain and retention, and the other showing grade 3 proctitis. Grade 3 hypertension was observed in 5 patients (Table 2).
PSA
All 56 patients included in the study were analyzed for PSA response in an intention-to-treat analysis and evaluated according to the PSA response data available at each time-point. All 56 patients evaluable for PSA treatment-induced modifications at pre-specified time points showed a PSA reduction higher than 80%. At the 25th week, all evaluable patients (50 cases) achieved PSA values below 0.2 ng/ml, and PSA was under detectable levels (<0.1 ng/ml) in 92% of patients (Table 3). PSA values dropped from pretreatment levels of 7.61 ± 2.82 (3.53-16.77) ng/ml to 0.04 ± 0.04 (0.00-0.16) ng/ml at the 25th week and remained low 6 months after cessation of enzalutamide (Table 4).
Hormone Levels
Patients treated with enzalutamide showed a sharp increase in testosterone and estradiol after 4 weeks of enzalutamide treatment ( Table 4). LH and FSH levels were also increased at week 25. Testosterone and estradiol levels decreased to pretreatment levels, but LH and FSH levels remained elevated at 6 months ( Figure 2).
Anthropometric, Bone, and Metabolic Changes at Pre-Specified Time Points
At the last evaluation, there was no statistically significant change after enzalutamide treatment in weight, in bone density as measured by densitometric analysis, or in the bone resorption marker alkaline phosphatase. Metabolic changes in fasting glucose, cholesterol, or triglyceride levels were not present after enzalutamide treatment. There was a modest increase in HDL cholesterol at the last evaluation (Table 5).
Patients Reported Outcomes (PROs) at Pre-Specified Time Points
PROs were analyzed through the EORTC QLQ-C30 and EORTC QLQ-PR25 at pretreatment, the 12th week of treatment, the 25th week, and one month after cessation of enzalutamide. A reduction in QoL scores as estimated by the EORTC QLQ-C30 and an increase in symptoms were observed at the 12th and 25th weeks, recovering one month after cessation of treatment. Specific PRO analysis of symptoms related to prostate cancer treatment (EORTC QLQ-PR25) showed a significant impact on the urinary domain during the radiotherapy treatment period (12th-25th week) that recovered one month after cessation of treatment (Figure 3).
DISCUSSION
Patients with localized intermediate-risk prostate cancer are usually treated with a combination of radiation therapy and 6 months of ADT. Previous studies have shown an excellent toxicity profile of enzalutamide monotherapy compared with ADT (26). Furthermore, combined enzalutamide and conventionally fractionated radiotherapy has been shown to be well tolerated in this particular clinical situation (38). However, little is known about toxicity and PROs when enzalutamide monotherapy is discontinued. Our study was planned to assess the role of enzalutamide monotherapy combined with modern hypofractionated EBRT for treating patients with localized intermediate-risk prostate cancer.
As previously described in other studies (24), our patients showed a better toxicity profile than that traditionally described in ADT trials, owing to the compensatory elevation of sexual hormones. No changes in body mass index, bone mineral density, fasting glucose, cholesterol, or libido were found one month after the end of enzalutamide. Just after the end of enzalutamide treatment, modest changes in HDL cholesterol were still evident.
As expected, testosterone, estradiol, LH, and FSH levels sharply increased during enzalutamide treatment (24). Our data showed for the first time that testosterone and estradiol levels tend to return to basal levels 6 months after cessation of enzalutamide, although LH and FSH remain elevated.
This fact is relevant when assessing the acute and long-term hormonal side effects analyzed either by physicians, through the CTCAE 4.0 toxicity scale [physician-reported outcomes (PhyROs)], or by patients, through the EORTC QLQ-C30 and EORTC QLQ-PR25 (PROs). In fact, no sexual toxicity was observed, but gynecomastia (CTCAE 4.0) and hormonally related symptoms (QLQ-PR25) remained a problem for patients one month after the end of enzalutamide treatment. In contrast, global health status, the functioning scales, and symptoms other than hormonally related ones returned to pretreatment levels one month after cessation of enzalutamide.
The use of Hypo-EBRT is also a novelty of our study. We treated our patients according to the Hypo-EBRT protocol described by Kupelian et al. (34), also used as the treatment arm in the RTOG 0415 trial (6). This schedule and others (39) provide the highest EQD2 (80 Gy) to the PTV compared with other hypofractionated schemes (5-7). Our acute GU toxicity was slightly higher than that observed in the hypofractionated arm of RTOG 0415 (32.9% vs 27%), while GI toxicity was very similar (8.9% vs 10.7%). Our 80 Gy EQD2 PTV included the proximal seminal vesicles (the first 1 cm of the seminal vesicles). This extra volume was not treated in the RTOG trial, as only low-risk patients were included in that trial. This larger PTV volume may be related to the slightly increased urinary toxicity found in our study (6). In the RTOG 0415 study, the hypofractionated arm had a very similar toxicity profile to the conventional arm, whose conventionally fractionated scheme had a lower EQD2 (70 Gy) (6).
The study by Kaplan et al. (38) already analyzed this possibility by combining standard escalated cEBRT with enzalutamide monotherapy in intermediate-risk prostate cancer patients. Patients received conventionally fractionated EBRT to a total dose of 79.2 Gy at 1.8 Gy per fraction over 44 fractions (9 weeks), and enzalutamide was administered for 6 months. They reported only 6 out of 45 cases (13.33%) of grade ≥2 urinary frequency; we observed this particular toxicity in 9/56 patients (16.06%). No data are available regarding the other GU toxicity items described in our study. We must note that, owing to the radiotherapy schedule selected in the Kaplan study (38) (1.8 Gy per fraction, 44 fractions, to a total dose of 79.2 Gy), the EQD2 of this cEBRT is 74.67 Gy, well below the 80 Gy administered in our study.
The PROs showed a temporary increase in urinary symptom scores in the evaluations performed at the 12th week (just after the end of Hypo-EBRT) that rapidly recovered by the end of the study period. In contrast, gastrointestinal and sexual symptom scores did not change. The primary endpoint of the study was the efficacy of the combination of enzalutamide monotherapy and modern Hypo-EBRT, in terms of reduction of PSA levels, in patients with localized intermediate-risk prostate cancer, as used in similar trials (24-26, 38).
As stated earlier, the activity analysis was performed on an intention-to-treat basis. The PSA response was analyzed as the proportion of patients who showed a reduction of at least 80% from the initial values at the end of the 25 weeks of enzalutamide treatment. The seminal study by Tombal et al. (24) showed a PSA response of 92.5% (95% CI 86.2-98.8), similar to the 100% observed in our study. We also analyzed the kinetics of PSA reduction at pre-specified time-points (1, 3, and 6 months) after the cessation of enzalutamide. Our study showed that all patients remained in PSA response 6 months after the cessation of enzalutamide. Furthermore, 90% of the patients still showed a PSA decline of 90% from the pretreatment values 6 months after enzalutamide cessation. Obviously, the effect of radiotherapy on this maintained PSA decline must be taken into account.
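For readers who want to reproduce confidence intervals of this kind, a minimal sketch follows. The sample size of n = 67 with 62 responders for the Tombal et al. cohort is an assumption chosen because it reproduces the quoted 92.5% (86.2-98.8); the actual denominator is not given in this text.

```python
from math import sqrt

def wald_ci(successes: int, n: int, z: float = 1.96) -> tuple:
    """Normal-approximation (Wald) 95% CI for a response proportion, in percent."""
    p = successes / n
    half = z * sqrt(p * (1.0 - p) / n)
    return 100 * (p - half), 100 * (p + half)

print(wald_ci(62, 67))   # roughly (86.2, 98.8), matching the quoted interval
```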
The study by Kaplan et al. (38) combined conventionally fractionated radiotherapy with enzalutamide monotherapy in intermediate-risk prostate cancer. They defined PSA response as PSA levels lower than 0.2 ng/ml at the end of 25 weeks of enzalutamide (39). Forty-nine out of 62 (79%) of their patients showed a PSA response, compared with 51/51 (100%) in our series using the same response criterion. No data on PSA response after enzalutamide cessation were given in the Kaplan study, whereas 56.8% of our patients remained in PSA response (<0.2 ng/ml) 6 months after enzalutamide cessation. Again, the lower EQD2 radiation dose in that study (74.67 Gy) compared with the present one (80 Gy) may explain the lower response rate observed.
The effect of radiotherapy combined with enzalutamide versus enzalutamide alone can only be analyzed indirectly, by comparing the results from Tombal et al. (24) with those from Kaplan and this study. Enzalutamide alone provided a 45% rate of undetectable PSA (<0.1 ng/ml), compared with 61.3% (38/62) for cEBRT and 88% for Hypo-EBRT. Although patient and tumor characteristics carried a poorer prognosis in the enzalutamide-alone trial (24), these data shed light on the effect of radiotherapy combined with enzalutamide in this particular setting.
Although the available results regarding the role of enzalutamide and hypofractionated radiotherapy (38 and the present series) are limited by short follow-up, recent evidence seems to confirm the role of this approach in prostate cancer patients. Long-term evidence for the role of antiandrogen monotherapy as an alternative to ADT combined with hypofractionated radiotherapy comes from the CHHiP trial (40). In a post hoc analysis, they compared the results of 2,700 patients who received LHRHa with those of 403 patients who received bicalutamide (150 mg/day) as concomitant hormonal treatment. All patient and tumor characteristics were similar between the two groups, except that bicalutamide patients were significantly younger (median 67 vs 69 years for LHRHa). After a median follow-up of 9.3 years, there was no difference in biochemical or clinical failure. Late toxicity, as estimated by the LENT-SOMA scale, was reported more frequently in LHRHa patients than in bicalutamide patients. Quality of life was similar in both arms. These mature results with a first-generation antiandrogen (bicalutamide) in monotherapy combined with hypofractionated radiotherapy would probably be confirmed when using a more active second-generation antiandrogen such as enzalutamide in a similar setting.

[Table: EORTC QLQ-C30 scores at pretreatment (n = 53), 12th week (n = 50), 25th week (n = 47), and 1 month after enzalutamide (n = 45), with P-values.]
The improvement in PSA response obtained by adding radiotherapy to enzalutamide, and the better response observed when using modern hypofractionated EBRT, are related, in our opinion, not only to the higher EQD2 administered but also to the biological basis of the radiosensitizing effect of enzalutamide. If protracted conventional radiotherapy schemes are used (daily fractions for almost nine weeks), tumor proliferation becomes relevant during radiotherapy, allowing tumor repopulation over this very long treatment time and therefore reducing the tumor control induced by radiation (41). Furthermore, conventionally fractionated radiotherapy probably does not take full advantage of the increased radiosensitization observed when enzalutamide is given with fractions higher than 2 Gy (hypofractionated radiotherapy) (38).
We can conclude that the treatment schedule proposed here for the first time is safe and very active in reducing PSA levels.
Our study also showed that such a PSA reduction is maintained 6 months after the cessation of enzalutamide treatment. Longer follow-up is needed to confirm the potential use of this combination in future randomized trials.
DATA AVAILABILITY STATEMENT
The data presented in the study are available on request; further inquiries can be directed to the corresponding author/s.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the CEIm University Hospital Canarias. The patients/participants provided their written informed consent to participate in this study. | 2022-07-16T15:17:52.726Z | 2022-07-14T00:00:00.000 | {
"year": 2022,
"sha1": "b6ded163279db83eeefa11b7e926f609ec88516e",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "12b356f947c02128c49d80eeec911d0f5bfed33d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
266884285 | pes2o/s2orc | v3-fos-license | Diversities and interactions of phages and bacteria in deep-sea sediments as revealed by metagenomics
Phages are found virtually everywhere, even in extreme environments, and are extremely diverse both in their virion structures and in their genomic content. They are thought to shape the taxonomic and functional composition of microbial communities as well as their stability. A number of laboratory culture studies and viral metagenomic analyses provide deeper insights into the abundance, diversity, distribution, and host interactions of phages across a wide range of ecosystems. Although most of these studies focus on easily accessible samples, such as soils, lakes, and shallow oceans, little is known about bathypelagic phages. In this study, through analyzing the 16S rRNA sequencing and viral metagenomic sequencing data of 25 samples collected from five different bathypelagic ecosystems, we detected a high diversity of bacteria and phages, particularly in the cold seep and hydrothermal vent ecosystems, which have stable chemical energy. The relative abundance of phages in these ecosystems was higher than in the other three abyssal ecosystems. The low phage/host ratios obtained from host prediction differed from those in shallow ecosystems and indicated the prevalence of prophages, suggesting the complexity of phage-bacteria interactions in abyssal ecosystems. In the correlation analysis, we revealed several phage-bacteria interaction networks of potential ecological relevance. Our study contributes to a better understanding of the interactions between bathypelagic bacteria and their phages.
Introduction
Viruses that infect bacteria (phages) are the most abundant and genetically diverse biological entities on Earth, infecting specific hosts in every known environment (Harada et al., 2018). Phages are ubiquitous and exhibit a plethora of morphologies, genetics, and phylogenies (Dion et al., 2020). Much of our knowledge of phage diversity has been overhauled following advances in large-scale viral metagenomics and culturing efforts. Thousands of viral sequences have been identified from metagenomic sequencing projects; however, many of them share no detectable homology with reference phage genomes (Paez-Espino et al., 2016; Gregory et al., 2019). Examples of such sequences include the numerous discoveries of non-tailed ssDNA or dsDNA phages (Kauffman et al., 2018).
In addition to phage diversity, in recent years, a growing number of studies have focused on the interactions of phages and bacteria in complex communities, such as oceans, soils, and guts (Goordial et al., 2017; Kutáková et al., 2018; Cornuault et al., 2020; Zhang et al., 2023). In the ocean, the crucial role of marine phages can be attributed to their tremendous abundance and diversity (Suttle, 2007). Phages, whether virulent or lysogenic, are considered a major force in shaping the composition of microbial communities, steering bacterial evolution, and holding key implications for biogeochemical cycles through bacterial mortality (Obeng et al., 2016; Chevallereau et al., 2022). Such interaction is universal in nature (van Houte et al., 2016; Calero-Cáceres et al., 2019). Diverse environments and biological compositions shape multiple phage-host interactions; for example, the presence of multiple phages influenced the mode of coevolution, and the structure of some environments reduced the coevolutionary rate due to decreased contact between bacteria and phages (Betts et al., 2018; Lourenço et al., 2020). At present, most studies of phage-host interaction are based on laboratory culture or on sequencing of samples that are easily obtained. However, there is limited research on phage-host interactions in areas that are challenging to access, such as the deep sea.
The deep-sea ecosystem, one of the most important but extreme ecosystems on Earth, occupies approximately two-thirds of the Earth's surface (Orcutt et al., 2011). Generally, the deep sea is considered to be the area below the mesopelagic zone, which constitutes the largest aqueous habitat for life (Kobayashi et al., 2012; Reygondeau et al., 2018). According to their different environmental characteristics and formation, deep-sea habitats can be roughly divided into hydrothermal vents, cold seeps, seamounts, hadal trenches, mid-ocean ridges, and others (Bian et al., 2010; Orcutt et al., 2011). Microorganisms are at the base of the food web for deep-ocean organisms and drive abyssal biogeochemical cycles (Choy et al., 2017; Fenibo et al., 2023). Microbes, such as bacteria, which are susceptible to phage infection, have been increasingly recognized as major ecosystem players because they are abundant and infect organisms that form the basis of ocean biogeochemical cycling (Suttle, 2007; Steward et al., 2013; Moniruzzaman et al., 2017). However, a substantial body of work in marine phagology has evaluated phages from the epipelagic and mesopelagic zones, while the bathypelagic zone has received less attention (López-Pérez et al., 2017; Jian et al., 2021). Little is known about phage diversity and the mechanisms by which phages interact with their host communities in the deep ocean. A recent study revealed that phages from the bathypelagic zone have a unique genetic repertoire, demonstrating how limited our understanding is of phages in the largest marine ecosystem, the deep ocean (Coutinho et al., 2023). Furthermore, bathypelagic ecosystems have higher virus-to-prokaryote ratios (De Corte et al., 2012; Lara et al., 2017). In addition, the taxonomic and functional composition, cell densities, and activity levels of the microbial community in the bathypelagic zone differ from those of shallow and other ecosystems and even vary among different bathypelagic ecosystems (Acinas et al., 2021; He et al., 2023). These differences affect the community and functions of phages. It is necessary to explore phagology in the bathypelagic zone to complete our knowledge of phage diversity and phage-host interactions.
In this study, bacterial 16S rRNA sequencing and viral metagenomic sequencing data collected from different bathypelagic ecosystems, including ocean basins, hydrothermal vents, mid-ocean ridges, cold seeps, and hadal trenches, were analyzed to investigate the interaction between phages and their hosts. The results indicated that the structure of bacterial and bacteriophage communities varies among samples from different locations, even within the same bathypelagic ecological type; however, the dominant species composition within the same type of samples was relatively consistent. The coexistence patterns of bacteriophages and bacteria in cold seep and hydrothermal samples were distinct compared with other abyssal ecosystems, although the specific mechanisms need further study. To sum up, this study provides new insights into the characterization of co-occurrence patterns and the interaction between phages and bacteria.
Data analysis of bacterial 16S rRNA
The paired-end reads were overlapped to assemble the V4-V5 tag sequences of bacteria using the Flash program. After the removal of primers, spacers, low-quality fragments, and sequences shorter than 50 bp, the remaining sequences were denoised and screened for chimeric sequences with the pre.cluster and chimera.uchime commands in Mothur software (Schloss et al., 2009). The candidate sequences were classified into operational taxonomic units (OTUs) based on a 97% sequence similarity using the Usearch program (Quast et al., 2013). A representative sequence for each OTU was annotated with a threshold of 0.8 using UCLUST v1.2.22q by searching the SILVA database. For comparisons between samples, the OTU abundances were normalized to the number obtained from the sample with the lowest counts.
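The normalization step described above amounts to rarefying every sample down to the depth of the shallowest one. A minimal sketch of that idea in Python follows; the `rarefy` helper and the toy OTU table are illustrative assumptions, not part of the original pipeline (which used Mothur/Usearch).

```python
import numpy as np

rng = np.random.default_rng(0)

def rarefy(counts: np.ndarray, depth: int) -> np.ndarray:
    """Subsample one sample's OTU count vector to a fixed read depth."""
    pool = np.repeat(np.arange(counts.size), counts)  # one entry per read
    picked = rng.choice(pool, size=depth, replace=False)
    return np.bincount(picked, minlength=counts.size)

# toy OTU table: rows are samples, columns are OTUs
otu_table = np.array([[120, 30, 0, 50],
                      [400, 10, 5, 85]])
depth = otu_table.sum(axis=1).min()  # lowest total count across samples
normalized = np.vstack([rarefy(row, depth) for row in otu_table])
```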
Pre-processing of viral metagenomic data
The raw reads of viral metagenomic sequencing obtained from the database were trimmed after removing 5′ ends containing non-A, G, C, or T bases. During trimming, adapter sequences, read ends with low sequencing quality (value < 20), and reads in which the proportion of N reached 10% were removed successively. Then, sequences < 75 bp in length were discarded, yielding clean reads. MetaSPAdes 3.12.0 was used to assemble the clean reads, followed by gene prediction with MetaProdigal (http://code.google.com/p/prodigal/) (Hyatt et al., 2012; Nurk et al., 2017).
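The filtering criteria just listed can be expressed as a simple per-read check. The sketch below is a simplified illustration of those thresholds only; adapter removal is omitted, and the function name and Phred-score input format are assumptions.

```python
def passes_qc(seq: str, quals: list,
              min_q: int = 20, max_n_frac: float = 0.10, min_len: int = 75):
    """Trim low-quality 3' bases, then apply the length and N-content rules."""
    while quals and quals[-1] < min_q:       # trim bases with Phred < 20
        seq, quals = seq[:-1], quals[:-1]
    if len(seq) < min_len:
        return None                          # discarded: too short after trimming
    if seq.upper().count("N") / len(seq) >= max_n_frac:
        return None                          # discarded: too many ambiguous bases
    return seq                               # kept as a clean read
```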
Viral taxonomic annotation
In order to annotate the taxonomy of vOTUs, the open reading frames (ORFs) were predicted by Prodigal, as previously described (He et al., 2023). Then, the amino acid sequences of the ORFs were used to assign vOTU taxonomy using BLASTp (BLAST version 2.2.28+, http://blast.ncbi.nlm.nih.gov/Blast.cgi) (E-value < 0.0001, bit score ≥ 50) (Castelán-Sánchez et al., 2019; Gregory et al., 2019). Species annotation was obtained from the taxonomic information database corresponding to the NR database. Subsequently, the abundance of each species was calculated as the sum of the corresponding gene abundances of that species. The abundance of species in each sample was counted at the taxonomic levels of domain, kingdom, phylum, class, order, family, genus, and species.
Host prediction
The hosts were collected from the host annotations of the viral RefSeq database. Additionally, the oligonucleotide frequency (ONF) method was used: VirHostMatcher v1.0 was run with default parameters, with d2* values ≤ 0.2 considered a match (Ahlgren et al., 2017; Li et al., 2021). To identify a single predicted host for each viral population, the host predicted by the highest-ranking criterion was chosen.
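Operationally, the ONF step reduces to picking, for each viral contig, the candidate host genome with the smallest d2* dissimilarity and keeping it only if that value is at most 0.2. A minimal pandas sketch follows; the input file name and matrix layout are assumptions about how the VirHostMatcher output was stored.

```python
import pandas as pd

# Rows: viral contigs; columns: candidate host genomes; entries: d2* values
# (smaller d2* means higher oligonucleotide-frequency similarity).
d2star = pd.read_csv("d2star_matrix.csv", index_col=0)  # hypothetical file name

best_host = d2star.idxmin(axis=1)            # top-ranked host per viral contig
best_score = d2star.min(axis=1)
predictions = best_host[best_score <= 0.2]   # keep only matches with d2* <= 0.2
```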
Correlation analysis
To explore the interaction and co-occurrence patterns of bacteria and phages in the deep sea, a correlation analysis of the predicted hosts and phages was performed. Correlation coefficients were calculated and analyzed in R version 4.0.3 (2020-10-10) using Spearman's method. Figures were generated using ggplot2 (3.3.5) and igraph (1.2.6) (Sun and Zhang, 2022).
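Although the original analysis was done in R, the same computation is easy to sketch in Python. The abundance-matrix shapes, the |rho| > 0.6 edge threshold, and the P < 0.05 cutoff below are illustrative assumptions, not values taken from the study.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
phage_abund = rng.random((25, 40))   # samples x phages (toy data)
bact_abund = rng.random((25, 10))    # samples x predicted host genera (toy data)

# spearmanr stacks both blocks column-wise into one 50x50 correlation matrix;
# the off-diagonal corner holds the phage-vs-genus correlations.
rho, pval = spearmanr(phage_abund, bact_abund, axis=0)
cross_rho, cross_p = rho[:40, 40:], pval[:40, 40:]

edges = np.argwhere((np.abs(cross_rho) > 0.6) & (cross_p < 0.05))
# each row of `edges` is a (phage index, genus index) pair, i.e. a network edge
```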
Results
To investigate the co-occurrence patterns of microbes and phages, along with the ecological functions of phages in the deep ocean, public 16S rRNA gene sequencing and viral metagenomic sequencing data of 25 deep-sea sediment samples were analyzed (NCBI BioProject ID: PRJNA725024; National Omics Data Encyclopedia database accession number: OEP002479) (He et al., 2023; Sun et al., 2023). The sampling sites from which the sequencing data were collected covered five different deep-sea ecosystems: ocean basin (n = 5), hydrothermal vent (n = 5), mid-ocean ridge (n = 5), cold seep (n = 5), and hadal trench (n = 5), distributed across the Pacific, Atlantic, and Indian Oceans (Figure 1). An ocean basin is a low zone at the bottom of the ocean, surrounded by relatively high seamounts, and is the main part of the ocean floor (Breyer and Baltar, 2023). Correspondingly, the mid-ocean ridge refers to a series of seamounts with the same origin and similar characteristics that run through the four oceans of the world, and it is the most extensive magmatic system on Earth (Bennett et al., 2019). A hadal trench is a part of the ocean with two steep, narrow walls and a water depth of more than 5,000 m (León-Zayas et al., 2015; Zhao et al., 2022). In contrast to the above, hydrothermal vent and cold seep ecosystems are unique due to their richness in chemical energy. The former, the hydrothermal vent ecosystem, is an extreme high-temperature environment rich in many minerals (such as Mn, Fe, Zn, Cu, and Pb) and other chemicals (sulfur, hydrogen, methane, ammonia, etc.) (Dick, 2019). The latter, the cold seep ecosystem, is formed by the emission of subsurface fluid into the seabed and is often rich in hydrocarbons (such as methane and oil), hydrogen sulfide, or carbon dioxide (Sun et al., 2021). Anaerobic oxidation of methane is the essential microbial process in the cold seep ecosystem (Beckmann et al., 2021). Despite the extreme environmental conditions, both of these abyssal environments harbor a substantial number of infective viral particles.
Overview of bacterial communities in the deep ocean
To assess the overall bacterial community structure in these sediments, the 16S rRNA gene sequencing data of the 25 samples were first analyzed for taxonomic profiling. Classification at the phylum level revealed the dominant bacterial lineages to be Proteobacteria (on average 43, 56, 32, 34, and 50% in cold seep, hadal trench, hydrothermal vent, ocean basin, and mid-ocean ridge samples, respectively), Bacteroidetes (17, 19, 23, 15, and 24%), and Firmicutes (5, 13, 22, 14, and 8%) in each environment (Figure 2A). The comparison of bacterial composition in different samples indicated that the diversity of bacterial genera varies in each sample, even among samples from the same kind of deep-sea ecosystem, displaying a high degree of endemism; this may result from the sampling sites being far from one another and is similar to what was found previously in other cold seep microbial communities (Figure 2B) (Ruff et al., 2015). In general, Pseudomonas, Bacteroides, Erythrobacter, and Bacteroidales S24-7 group_norank occupied a high abundance and were uniformly distributed in each sample (Figure 2B). Notably, deep-sea cold seep and hydrothermal vent ecosystems exhibited more bacterial genera, especially those with an abundance lower than 1%, compared with other deep-sea ecosystems, suggesting that these two environments contain more complex microbial communities (Figure 2B). Both cold seep and hydrothermal vent ecosystems show elevated microbial activity driven by the availability of energy-rich substrates supplied from below, which may be the reason for the high microbial diversity in these two deep-sea ecosystems.
Viruses from different deep-sea sediments are diverse
From the 25 viral metagenomic datasets, 420 non-redundant deep-ocean vOTUs were obtained through manual filtering and clustering. Viral family annotation assigned a total of 25 families, while the remaining vOTUs were classified as norank, indicating viruses not previously detected either in the laboratory or by high-throughput sequencing. Among all viral families, bacteriophages such as Siphoviridae, Microviridae, and Myoviridae (47, 13, and 17% on average) constituted the highest proportions (Figure 3A). In addition to bacteriophages, Genomoviridae and Circoviridae also exhibited high abundance, with proportions of 8 and 3%, respectively (Figure 3A). Compared with other deep-sea ecosystems, the viral composition of the hydrothermal vent was clearly different. The relative abundance of Myoviridae in hydrothermal vents was as high as 54%, whereas it was only about 0.6, 1.4, 5, and 5% in cold seep, hadal trench, ocean basin, and mid-ocean ridge ecosystems, respectively, reflecting the uniqueness of deep-sea hydrothermal ecosystems (Figure 3A). As bacteriophages dominated the deep-sea viral community, the composition of phages was further analyzed. The results showed that, at the species level, phages with Bacillus, Burkholderia, and Lactococcus as hosts had higher abundance (Figure 3B). The composition of the phage community differed among ecosystems but exhibited relative convergence within groups (Figure 3B). The diversity of phages was highest in the deep-sea cold seep ecosystem, consistent with previous findings that cold seeps may be hotspots for viruses (Figure 3B) (Bryson et al., 2015). Furthermore, cold seeps usually have a longer geologic history with slower emission of fluids, which favors the formation of diverse viral communities and complex interactions with hosts (Joye, 2020).
Phage-host linkages and the predicted host abundance
To investigate the hosts of the detected deep-sea bacteriophages, putative hosts were predicted as previously described (Roux et al., 2015). A total of 90 phages were linked to known bacterial hosts, which mainly belonged to three phyla (Firmicutes, Proteobacteria, and Actinobacteria) and were mainly divided into 10 bacterial genera (Figure 4A). The genera of predicted hosts were dominated by Lactococcus, Burkholderia, and Bacillus (Figure 4A). Comparing the relative abundance of the bacteriophages linked to predicted hosts with the relative abundance of the corresponding hosts revealed some differences between them. Some phages had higher abundance than their hosts, such as phages of Lactococcus and Bacillus, suggesting that these taxa may have been undergoing active viral replication, and possibly lysis, at the time of sample collection (Figure 4A). The lineage-specific phage/host abundance ratios for most taxa ranged from 0.03 to 55, with Lactococcus being the highest (Figure 4B). Phages with a high phage/host ratio may be in a period of highly active viral genome replication, suggesting that phage lysis may be a main factor of microbial mortality in deep-sea sediments (Figure 4B). Phages with a lower phage/host ratio may have formed prophages in individual hosts rather than existing freely in the environment, and thus were not purified from sediments and detected through sequencing, suggesting a complex interaction with host cells in the deep ocean.
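The ratio calculation above is simple to make concrete. In the sketch below, the relative-abundance values are invented for illustration (they are chosen so that Lactococcus reproduces the maximum ratio of 55 quoted in the text, but they are not the study's actual numbers).

```python
# toy relative abundances aggregated per predicted host genus
phage_rel = {"Lactococcus": 0.220, "Bacillus": 0.080, "Burkholderia": 0.050}
host_rel = {"Lactococcus": 0.004, "Bacillus": 0.050, "Burkholderia": 0.060}

ratios = {genus: phage_rel[genus] / host_rel[genus] for genus in phage_rel}
print(ratios)  # {'Lactococcus': 55.0, 'Bacillus': 1.6, 'Burkholderia': 0.83...}

# Ratios >> 1 suggest active lytic replication; ratios << 1 are consistent
# with prophages integrated in host genomes rather than free virions.
```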
Co-occurrence patterns of bacteriophages and bacteria
To further understand how phages correlate with bacteria in bathypelagic sediment, a correlation analysis of the predicted host bacterial genera and the bacteriophages was performed. The results showed that the bacterium Lactococcus was correlated with more than half of the total phages (Figure 5A). Among them, only one, Bacillus virus phi29, had a negative correlation with Lactococcus, while the others (46 bacteriophages) were tightly positively correlated with it (Figure 5A). Apart from Lactococcus, another bacterial genus with strong positive correlations with bacteriophages was Geobacillus, which was positively correlated with 19 bacteriophages (Figure 5A). Compared with the correlations of Lactococcus, the correlations of Geobacillus with phages were weaker (Figure 5B). Enterobacter and Escherichia showed relatively strong negative correlations with bacteriophages; they were consistently negatively correlated with 16 phages (Figure 5A). Brucella was negatively correlated with four bacteriophages, in particular showing a tight negative correlation with Geobacillus virus E2 (Figures 5A, B). Positive correlations were found between Burkholderia virus BcepF1 and five bacteria, including Burkholderia, Caulobacter, Brucella, Bacillus, and Escherichia (Figures 5A, B). Only the correlations of Burkholderia virus BcepF1 with Brucella and Escherichia can be observed in the network, and Brucella was in turn negatively correlated with Geobacillus virus E2 (Figures 5A, B). In addition, such multiple correlations could also be detected between other bacteria and phages (Figure 5B). Taken together, these findings indicated that bacteria and phages in the deep sea interact intricately with each other and play a vital role in modulating the dynamic balance of deep-sea ecosystems.
Discussion
The bathypelagic zone is characterized by the absence of light, low oxygen, very low concentrations of labile carbon, and higher concentrations of inorganic nutrients (Edwards et al., 2005; Arístegui et al., 2009). These extreme conditions limit the survival of most organisms. The global deep ocean is dominated by microbial communities that are essential to sustaining life in this extreme dark environment (He et al., 2017). Microbial metabolism in the deep ocean is greatly controlled by phages. They are ubiquitous and can be found in diverse deep-sea ecosystems, such as cold seeps, hydrothermal vents, and hadal trenches (Breitbart, 2011; Sun et al., 2021; He et al., 2023). Currently, the diversity of phages and their interactions with their hosts are widely described in the shallow sea; for example, AMGs encoded by marine phages are involved in photosynthesis, carbon metabolism, and nitrate reduction, assist host metabolism, and play a vital role in biogeochemical cycles (Thompson et al., 2011; Roux et al., 2016; Breitbart et al., 2018). Little is known about the diversity of phages and the interactions between microbial hosts and phages in the deep sea. In this study, we report an extensive examination of bacterial and phage diversity in hadal sediment. Our study revealed that different environmental characteristics shape biodiversity in different deep-sea environments; in particular, deep-sea cold seep and hydrothermal vent ecosystems, with continuous stable chemical energy as the energy and material resource base of the biosphere, have a more stable and higher biodiversity, both in bacterial and phage composition. The abundance of phages in these two environments was also higher than in the other hadal environments. The interaction between bacteria and phages may be more diverse there as a result of a stable energy source and the long-term, complex process of biota formation.
As reported, the ratio of phage to bacteria is about 10:1; phages are considered to be the main cause of the death of heterotrophic and autotrophic hosts in the ocean owing to their ubiquity and abundance (Suttle, 1994; Breitbart, 2011; Wigington et al., 2016; Breitbart et al., 2018). Phages interact with microbial hosts and other phages in multiple ways, and the interactions with their bacterial hosts and other phages drive the evolution and diversification of phages and their hosts (Meyer et al., 2012; Betts et al., 2018). A recent study demonstrated that, over longer timescales, phages and bacteria have evolved more complex resistance and infectivity strategies, along with immunological functions conserved with eukaryotic immune systems (Bernheim et al., 2021; Ofir et al., 2021). However, most findings were obtained from laboratory cultures, and the interaction and coevolution of phages and bacteria in nature are more complex. Since most microorganisms are difficult to culture, it is necessary to seek such interactions in high-throughput sequencing data. Here, through analyzing the sequencing data of 25 samples, we revealed the co-occurrence patterns of phages and bacteria in hadal sediment. Furthermore, low phage/host ratios accounted for half of the findings in host prediction, suggesting the prevalence of prophages in deep-sea environments and indicating a more complex interaction of phages and bacteria in bathypelagic sediment. Beyond this study, a recent surge in viral metagenomic studies (viromics) provides deeper insights into the abundance, taxonomic diversity, and distribution of phages across a wide range of ecosystems (Sunagawa et al., 2020; Peng et al., 2023; Zhang et al., 2023). However, our understanding of phage-bacteria interactions in natural environments is far from complete, and more studies are needed to fully appreciate how biodiversity and abiotic factors influence phage-bacteria ecological and evolutionary dynamics.
FIGURE 2
Bacterial diversity of bathypelagic sediment samples. (A) Bubble plot of the relative abundance of bacterial phyla according to the 16S rRNA gene sequencing data in five different sediment types. Group names consist of the abbreviation of the bathypelagic environment. The color and size of the bubbles represent the relative abundance: the darker the color and the larger the size, the higher the relative abundance. (B) Top bacterial composition of each sediment sample at the taxonomic genus level. Different colors represent different bacterial genera, and "Others" represents the bacterial genera with abundance < 1%.
FIGURE 3
Taxonomic diversity of deep-sea viruses. (A) Bubble plot of the relative abundance of viral families according to the viral metagenomic sequencing data in five different sediment types. Group names consist of the abbreviation of the bathypelagic environment. The color and size of the bubbles represent the relative abundance: the darker the color and the larger the size, the higher the relative abundance. "Viruses_norank" represents viruses that were unclassified at the family level. (B) Compositions of bacteriophages in each bathypelagic sample. The depth of the color represents the proportion of different phages; the darker the color, the higher the proportion.
FIGURE 4
Relative abundance patterns of viruses and their predicted hosts in bathypelagic sediments. (A) Comparison of relative abundances of predicted hosts grouped by host taxonomy at the phylum and genus levels. The colored stacked bar chart on the left represents the relative abundance of predicted hosts, whereas the right represents the actual relative abundance of hosts detected in sequencing. Predicted hosts are indicated in the color bar on the right side. (B) Lineage-specific phage/host abundance ratios for all predicted bacterial hosts. The left axis represents the actual abundance of the host, and the right axis represents the phage/host ratios.
FIGURE 5
Correlations between bacteriophages and bacteria in bathypelagic sediments. (A) Correlation heatmap of bacteriophages and predicted hosts. The predicted bacterial hosts at the genus level and all detected bacteriophages in the deep sea were used. The red boxes indicate positive correlations, while the blue boxes show negative correlations. Statistically significant correlations between microbes are indicated with asterisks (*P < 0.05; **P < 0.01). (B) Correlation network of bacteriophages and predicted hosts in deep-sea sediment. Line color represents positive and negative correlation. Line thickness represents the strength of the correlation. Dot size/color depth represents the number of related objects.
"year": 2024,
"sha1": "aec32fc07596273f578b374a98ab80c1bd7c2f3f",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fmicb.2023.1337146/pdf?isPublishedV2=False",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "287b97973f1692a9e65cfa2c7f3ae598ff560dc4",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
249823547 | pes2o/s2orc | v3-fos-license | Long non-coding RNA PVT1 promotes tumor progression by regulating the Wnt pathway in human esophageal squamous cell carcinoma
To the Editor: Esophageal squamous cell carcinoma (ESCC) is the seventh most common malignant cancer worldwide. [1] There is an urgent need to find sensitive and specific biomarkers to improve ESCC diagnosis and prognosis. [2] Tumor-specific long non-coding RNAs (lncRNAs) appear to be potential biomarkers for the diagnosis and treatment of cancer. [3] Our previous study performed data mining analyses for ESCC and identified the dysregulated lncRNAs from the Gene Expression Omnibus database (GEO, https://www.ncbi.nlm.nih.gov/sites/GDSbrowser). [4] The results demonstrated that lncRNA plasmacytoma variant translocation 1 (PVT1) gene expression is significantly up-regulated in ESCC tumor tissues. In the present study, we analyze the feasibility of using PVT1 as a tumor molecular marker and discuss its potential clinical applications.
We designed an experiment to investigate PVT1 expression in ESCC tissues, microarray data, The Cancer Genome Atlas (TCGA) database, and ESCC-related human cell lines. In addition, the relationships between PVT1 expression and clinicopathological parameters and the prognosis of ESCC patients were assessed. The biological function of PVT1 in vitro was assessed using built vectors of silencing lentivirus. Furthermore, bioinformatics and protein expression analyses were performed to investigate the regulatory mechanism of PVT1. PVT1 expression was significantly up-regulated in ESCC tumor tissues compared with adjacent non-cancerous tissues (fold change = 13.46 ± 3.29; P < 0.05; Figure 1A). PVT1 expression levels in the microarray detection and the TCGA database were also remarkably higher in ESCC tumor tissues than in noncancerous esophagus tissues (fold change = 6.82 ± 1.79, 9.81 ± 1.57, and 6.94 ± 0.37; Figure 1B). As shown in Supplementary Table 3, http://links.lww.com/CM9/A992, increased ESCC risk in the 77 patients was linked with increased PVT1 expression (odds ratio [OR], 1.639; P < 0.05), as was that in the TCGA database (OR, 1.327; P < 0.05). In this study, we found that the OR values of PVT1 expression in patients with ESCC and in the TCGA database were both greater than 1.0, which suggested that PVT1 may play an oncogenic role during ESCC development.
Receiver operating characteristic (ROC) curve analysis showed that PVT1 can distinguish ESCC patients' tumor tissues from non-tumor tissues with high diagnostic power (area under the curve [AUC], 0.753; P < 0.05) [Figure 1D and 1E]. Kaplan-Meier survival analysis demonstrated that patients with high PVT1 expression had a poor prognosis (P < 0.05; Figure 1F). In addition, PVT1 was over-expressed in ESCC cell lines (EC9706, EC109, and TE-1) compared with normal human esophageal epithelial cells [Figure 1C].
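As a reminder of how such a diagnostic AUC is computed, here is a minimal sketch. The simulated expression values below are invented stand-ins for the paired tumor/non-tumor PVT1 measurements; only the procedure, not the numbers, mirrors the analysis.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(42)
# toy data: 77 tumor samples (label 1) vs 77 matched non-tumor samples (label 0)
expression = np.concatenate([rng.normal(13.5, 3.3, 77), rng.normal(1.0, 0.5, 77)])
label = np.concatenate([np.ones(77), np.zeros(77)])

auc = roc_auc_score(label, expression)           # diagnostic power of PVT1 level
fpr, tpr, thresholds = roc_curve(label, expression)
```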
The thiazolyl blue tetrazolium bromide assay demonstrated that the proliferative ability of EC109 cells significantly decreased following transfection with lentivirus-PVT1-siRNA compared with the blank and negative controls (P < 0.05; Figure 1G). Cell cycle assays showed that lentivirus-PVT1-siRNA transfection significantly decreased the number of EC109 cells in the G2/M phase, while increasing the number of cells in the S phase, compared with the blank and negative control groups (P < 0.05; Figure 1H and Supplementary Figure 1D, http://links.lww.com/CM9/A992). Flow cytometry showed that the late cell apoptosis rate and the total cell apoptosis rate were significantly increased in the lentivirus-PVT1-siRNA stable transfection group compared with the blank and negative control groups (P < 0.05; Figure 1I and Supplementary Figure 1E, http://links.lww.com/CM9/A992).
Based on the TCGA database and the competing endogenous RNA theory, we analyzed the functional enrichment and the potentially regulated signaling pathways of the ESCC differentially expressed mRNAs co-expressed with PVT1. We found 83 Gene Ontology terms and 43 pathways with enrichment scores more than 2-fold (P < 0.05). The results suggested that the most enriched functions of the up-regulated mRNAs were "Regulation of cellular response" and "Biological process" terms such as "Cell division" and "Biological regulation" [Supplementary Figure 1F, http://links.lww.com/CM9/A992]. In addition, we found that the predominantly enriched tumor-associated signaling pathways were "Cancer," the "Wnt signaling pathway," and the "MAPK signaling pathway" [Supplementary Figure 1G, http://links.lww.com/CM9/A992]. Notably, the results of our previous study also revealed that the Wnt signaling pathway is involved in regulating the cancer cell cycle and affecting the process of ESCC. Thus, to further verify whether PVT1 regulates the development of ESCC through the Wnt signaling pathway, western blotting was used to analyze the differences in expression levels of the key proteins. The results showed that the protein expression levels of β-catenin and TCF-7 were down-regulated, while p-GSK-3β/GSK-3β and Axin1 were up-regulated, in the lentivirus-PVT1-siRNA group compared with the blank and negative controls (P < 0.05; Figure 1J and 1K). These results demonstrated that the Wnt pathway was inhibited by transfection with lentivirus-PVT1-siRNA.
The present study demonstrated that up-regulated PVT1 expression in ESCC tissues was associated with a higher occurrence rate of ESCC, which was similar to other reports concerning PVT1. [5] ROC curve analysis demonstrated that PVT1 may be a valid diagnostic biomarker of ESCC. Cell functional experiments revealed that the lentivirus-PVT1-siRNA transfection inhibits the proliferation of EC109 cells by affecting the process of the cell cycle. In addition, our results also revealed that the PVT1 may affect the function of EC109 cells by regulating and activating the Wnt signaling pathway.
To conclude, up-regulated PVT1 can induce ESCC tumorigenesis by regulating the cell cycle and the Wnt signaling pathway. These results provide the novel insight that PVT1 may be a prospective biomarker and therapeutic target for patients with ESCC.
"year": 2022,
"sha1": "0046c508c3c2a253abe7d187da21d3366d8c7b80",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1097/cm9.0000000000002066",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "62537b3e6db1988839c3facbdb30b32b2b28db32",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
14609637 | pes2o/s2orc | v3-fos-license | Differentiability of M-functionals of location and scatter based on t likelihoods
The paper aims at finding widely and smoothly defined nonparametric location and scatter functionals. As a convenient vehicle, maximum likelihood estimation of the location vector m and scatter matrix S of an elliptically symmetric t distribution on d-dimensional space with degrees of freedom larger than 1 extends to an M-functional defined on all probability distributions P in a weakly open, weakly dense domain U. Here U consists of P not putting too much mass in hyperplanes of dimension < d, as shown for empirical measures by Kent and Tyler, Ann. Statist. 1991. It is shown here that (m,S) is analytic on U, for the bounded Lipschitz norm, or for d=1, for the sup norm on distribution functions. For k=1,2,..., and other norms, depending on k and more directly adapted to t functionals, one has continuous differentiability of order k, allowing the delta-method to be applied to (m,S) for any P in U, which can be arbitrarily heavy-tailed. These results imply asymptotic normality of the corresponding M-estimators (m_n,S_n). In dimension d=1 only, the t functionals extend to be defined and weakly continuous at all P.
Introduction
This paper is a longer version, with proofs, of the paper Dudley, Sidenko and Wang (2009). It aims at developing some nonparametric location and scatter functionals, defined and smooth on large (weakly dense and open) sets of distributions. The nonparametric view is much as in the work of Bickel and Lehmann (1975) (but not adopting, e.g., their monotonicity axiom) and to a somewhat lesser extent, that of Davies (1998). Although there are relations to robustness, that is not the main aim here: there is no focus on neighborhoods of model distributions with densities such as the normal. It happens that the parametric family of ellipsoidally symmetric t densities provides an avenue toward nonparametric location and scatter functionals, somewhat as maximum likelihood estimation of location for the double-exponential distribution in one dimension gives the median, generally viewed as a nonparametric functional.
Given observations X_1, ..., X_n in R^d, let $P_n := n^{-1}\sum_{j=1}^n \delta_{X_j}$. Given P_n, and the location-scatter family of elliptically symmetric t_ν distributions on R^d with ν > 1, maximum likelihood estimates of the location vector µ and scatter matrix Σ exist and are unique for "most" P_n. Namely, it suffices that P_n(J) < (ν + q)/(ν + d) for each affine hyperplane J of dimension q < d, as shown by Kent and Tyler (1991). The estimates extend to M-functionals defined at all probability measures P on R^d satisfying the same condition; that is shown for integer ν, and in the sense of unique critical points, by Dümbgen and Tyler (2005), and for any ν > 0 and M-functionals in the sense of unique absolute minima in Theorem 3, in light of Theorem 6(a), for pure scatter and then in Theorem 6(e) for location and scatter with ν > 1. A method of reducing location and scatter functionals in dimension d to pure scatter functionals in dimension d + 1 was shown to work for t distributions by Kent and Tyler (1991) and only for such distributions by Kent, Tyler and Vardi (1994), as will be recalled after Theorem 6.
So the t functionals are defined on a weakly open and weakly dense domain, whose complement is thus weakly nowhere dense. One of the main results of the present paper gives analyticity (defined in the Appendix) of the functionals on this domain, with respect to the bounded Lipschitz norm (Theorem 9(d)). An adaptation gives differentiability of any given finite order k with respect to norms, depending on k, chosen to give asymptotic normality of the t location and scatter functionals (Theorem 18) for arbitrarily heavy-tailed P (for such P, the central limit theorem fails in the bounded Lipschitz norm). In turn, this yields delta-method conclusions (Theorem 20(b)), uniformly over suitable families of distributions (Proposition 22); these statements don't include any norms, although their proofs do. It follows in Corollary 24 that continuous Fréchet differentiability of the t_ν location and scatter functionals of order k also holds with respect to affinely invariant norms defined via suprema over positivity sets of polynomials of degree at most 2k + 4.
For the delta-method, one needs at least differentiability of first order. To get first derivatives with respect to probability measures P via an implicit function theorem we use second order derivatives with respect to matrices. Moreover, second order derivatives with respect to P (or in the classical case, with respect to an unknown parameter) can improve the accuracy of the delta-method and the speed of convergence of approximations. It turns out that derivatives of arbitrarily high order are obtainable with little additional difficulty.
For norms in which the central limit theorem for empirical measures holds for all probability measures, such as those just mentioned, bootstrap central limit theorems also hold [Giné and Zinn (1990)], which then via the delta-method can give bootstrap confidence sets for the t location and scatter functionals.
In dimension d = 1, the domain on which differentiability is proved is the class of distributions having no atom of size ν/(ν + 1) or larger. On this domain, analyticity is proved, in Theorem 9(e), with respect to the usual supremum norm for distribution functions. Only for d = 1, it turns out to be possible to extend the t ν location and scatter (scale) functionals to be defined and weakly continuous at arbitrary distributions (Theorem 25).
For general d ≥ 1 and ν = 1 (multivariate Cauchy distributions), a case not covered by the present paper, Dümbgen (1998, §6) briefly treats location and scatter functionals and their asymptotic properties.
Weak continuity on a dense open set implies that for distributions in that set, estimators (functionals of empirical measures) eventually exist almost surely and converge to the functional of the distribution. Weak continuity, where it holds, is also a robustness property in itself and implies a strictly positive (not necessarily large) breakdown point. The t_ν functionals, as redescending M-functionals, downweight outliers. Among such M-functionals, only the t_ν functionals are known to be uniquely defined on a satisfactorily large domain. The t_ν estimators are √n-consistent estimators of t_ν functionals, where each t_ν location functional, at any distribution in its domain that is symmetric around a point, equals (by equivariance) the center of symmetry.
It seems that few other known location and scatter functionals exist, are unique, and are continuous, let alone differentiable, on a dense open domain. For example, the median is discontinuous on a dense set. Smoothly trimmed means and variances are defined and differentiable at all distributions in one dimension, e.g. Boos (1979) for means. In higher dimensions there are analogues of trimming, called peeling or depth weighting, e.g. the work of Zuo and Cui (2005). Location-scatter functionals differentiable on a dense domain apparently have not been found by depth weighting thus far (in dimension d > 1).
Definitions and preliminaries
In this paper the sample space will be a finite-dimensional Euclidean space R d with its usual topological and Borel structure. A law will mean a probability measure on R d . Let S d be the collection of all d × d symmetric real matrices, N d the subset of nonnegative definite symmetric matrices and P d ⊂ N d the further subset of strictly positive definite symmetric matrices. The parameter spaces Θ considered will be P d , N d (pure scatter matrices), R d × P d , or R d × N d . For (µ, Σ) ∈ R d × N d , µ will be viewed as a location parameter and Σ as a scatter parameter, extending the notions of mean vector and covariance matrix to arbitrarily heavy-tailed distributions. Matrices in N d but not in P d will only be considered in one dimension, in Section 9, where the scale parameter σ ≥ 0 corresponds to σ 2 ∈ N 1 .
Notions of "location" and "scale" or multidimensional "scatter" functional will be defined in terms of equivariance, as follows.
Definitions. Let Q → µ(Q) ∈ R d , resp. Σ(Q) ∈ N d , be a functional defined on a set D of laws Q on R d . Then µ (resp. Σ) is called an affinely equivariant location (resp. scatter) functional iff for any nonsingular d × d matrix A and v ∈ R d , with f (x) := Ax + v, and any law Q ∈ D, the image measure P := Q • f −1 ∈ D also, with µ(P ) = Aµ(Q) + v or, respectively, Σ(P ) = AΣ(Q)A ′ . For d = 1, σ(·) with 0 ≤ σ < ∞ will be called an affinely equivariant scale functional iff σ 2 satisfies the definition of affinely equivariant scatter functional. If we have affinely equivariant location and scatter functionals µ and Σ on the same domain D then (µ, Σ) will be called an affinely equivariant location-scatter functional on D.
To define M-functionals, suppose we have a function (x, θ) → ρ(x, θ) defined for x ∈ R^d and θ ∈ Θ, Borel measurable in x and lower semicontinuous in θ, i.e. ρ(x, θ) ≤ lim inf_{φ→θ} ρ(x, φ) for all θ. For a law Q, let Qρ(φ) := ∫ ρ(x, φ) dQ(x) if the integral is defined (not ∞−∞), as it always will be if Q = P_n. An M-estimate of θ for a given n and P_n will be a $\hat\theta_n$ such that P_nρ(θ) is minimized at θ = $\hat\theta_n$, if it exists and is unique. A measurable function, not necessarily defined a.s., whose values are M-estimates is called an M-estimator.
The following definition will be used for d = 1. Suppose we have a parameter space Θ, specifically P d or P d × R d , which has a closure Θ, specifically N d or N d × R d respectively. The boundary of Θ is then Θ \ Θ. The functions ρ and h are not necessarily defined for θ in the boundary, but M-functionals may have values anywhere in Θ according to the following.
Definition.
A θ 0 = θ 0 (P ) ∈ Θ will be called the (extended) M-functional of P for ρ or h if and only if for every neighborhood U of θ 0 , The above definition extends that of M-functional given by Huber (1967) in that if θ 0 is on the boundary of Θ then h(x, θ 0 ) is not defined, P h(θ 0 ) is defined only in a lim inf sense, and at θ 0 (but only there), the lim inf may be −∞.
From the definition, an M-functional, if it exists, must be unique. If P is an empirical measure P n , then the M-functionalθ n := θ 0 (P n ), if it exists, is the maximum likelihood estimate of θ, in a lim sup sense ifθ n is on the boundary. Clearly, an M-estimateθ n is the M-functional θ 1 (P n ) if either exists.
For a differentiable function f, recall that a critical point of f is a point where the gradient of f is 0. For example, on R^2 let f(x, y) = x^2(1 + y)^3 + y^2. Then f has a unique critical point (0, 0), which is a strict relative minimum where the Hessian (matrix of second partial derivatives) is $\begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix}$, but not an absolute minimum since f(1, y) → −∞ as y → −∞. This example appeared in Durfee, Kronenfeld, Munson, Roy, and Westby (1993).
Multivariate scatter
This section will treat the pure scatter problem in R d , with parameter space Θ = P d . The results here are extensions of those of Kent and Tyler (1991, Theorems 2.1 and 2.2), on unique maximum likelihood estimates for finite samples, to the case of M-functionals for general laws on R d .
For A ∈ P_d and a function ρ from [0, ∞) into itself, consider the function
$$h(y, A) := \rho(y'A^{-1}y) - \rho(y'I^{-1}y) + \log\det A, \qquad (3)$$
where I is the identity matrix. Then
$$Qh(A) := \int h(y, A)\,dQ(y) \qquad (4)$$
if the integral is defined. As a referee suggested, one can differentiate functions of matrices in a coordinate-free way, as follows. The $d^2$-dimensional vector space of all d × d real matrices becomes a Hilbert space (Euclidean space) under the inner product $\langle A, B \rangle := \mathrm{trace}(A'B)$. It is easy to verify that this is indeed an inner product and is invariant under orthogonal changes of coordinates in the underlying d-dimensional vector space. The corresponding norm $\|A\|_F := \langle A, A \rangle^{1/2}$ is called the Frobenius norm. Here $\|A\|_F^2$ is simply the sum of squares of all elements of A, and $\|\cdot\|_F$ is the specialization of the (Hilbert-)Schmidt norm for Hilbert-Schmidt operators on a general Hilbert space to the case of (all) linear operators on a finite-dimensional Hilbert space. Let $\|\cdot\|$ be the usual matrix or operator norm, $\|A\| := \sup_{|x|=1}|Ax|$. Then
$$\|A\| \le \|A\|_F \le d^{1/2}\|A\|,$$
with equality in the latter for A = I and in the former when A = diag(1, 0, . . . , 0).
For fixed A ∈ P_d and as ∆ → 0, we have
$$(A + \Delta)^{-1} = A^{-1} - A^{-1}\Delta A^{-1} + O(\|\Delta\|^2). \qquad (6)$$
Differentiating f(A) for A ∈ S_d is preferably done when possible in coordinate-free form, or, if in coordinates, when restricted to a subspace of matrices all diagonal in some fixed coordinates, or at least approaching such matrices. It turns out that all proofs in the paper can be and have been done in one of these ways.
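A quick numerical check of the norm comparison above is straightforward; the snippet below is only a sanity check of the stated inequalities, not part of the paper's argument.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
A = rng.normal(size=(d, d))
A = (A + A.T) / 2                      # a symmetric test matrix

op = np.linalg.norm(A, 2)              # operator norm (largest singular value)
fro = np.linalg.norm(A, "fro")         # Frobenius norm
assert op <= fro + 1e-12               # ||A|| <= ||A||_F
assert fro <= np.sqrt(d) * op + 1e-12  # ||A||_F <= sqrt(d) ||A||
```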
We have the following, stated for Q = Q n an empirical measure in Kent and Tyler (1991, (1.3)). Here (7) is a redescending condition.
Then for any law Q on R^d, Qh in (4) is a well-defined and C^1 function of A ∈ P_d, which has a critical point at A = B if and only if
$$B = \int u(y'B^{-1}y)\,yy'\,dQ(y). \qquad (8)$$

Proof. For a fixed A ∈ P_d, the inverses A_t^{-1} of matrices A_t in a compact neighborhood of A all lie in some compact subset of P_d, so that their eigenvalues are bounded and bounded away from 0. From this and the boundedness of xu(x) for x ≥ 0, it follows that y → ρ(y'A^{-1}y) − ρ(y'y) is a bounded continuous function of y. It follows that for an arbitrary law Q on R^d, Qh(A) in (4) is defined and finite. Also, Qh(A) is continuous in A by dominated convergence and so lower semicontinuous. Differentiability follows from the hypotheses, the chain rule, and (6).
For any B ∈ S_d let its ordered eigenvalues be $\lambda_1(B) \ge \cdots \ge \lambda_d(B)$; then (11) follows. By (9), and because the gradient there is bounded, derivatives can be interchanged with the integral. It follows that the gradient of the mapping A → Qh(A) from P_d into R is
$$\nabla(Qh)(A) = A^{-1} - A^{-1}\Bigl[\int u(y'A^{-1}y)\,yy'\,dQ(y)\Bigr]A^{-1},$$
which, multiplying by A on the left and right, is zero if and only if A = ∫ u(y'A^{-1}y) yy' dQ(y), i.e., (8) holds with B = A. This proves the Proposition.
The following extends to any law Q the uniqueness part of Kent and Tyler (1991, Theorem 2.2).
Proposition 2. Under the hypotheses of Proposition 1 on ρ and u(·), if in addition u(·) is nonincreasing and s → su(s) is strictly increasing on [0, ∞), then for any law Q on R d , Qh has at most one critical point A ∈ P d .
Proof. By Proposition 1, suppose that (8) holds for B = A and B = D for some D ≠ A in P_d. By the substitution y = A^{1/2}z we can assume that A = I ≠ D.
Let t_1 be the largest eigenvalue of D. Suppose that t_1 > 1. Then for any y ≠ 0, by the assumed properties of u(·), u(y'D^{-1}y) ≤ u(t_1^{-1} y'y) < t_1 u(y'y). It follows from (8) for D and I that for any z ∈ R^d with z ≠ 0, where the last equation implies that Q is not concentrated in any (d − 1)-dimensional vector subspace z'y = 0, and so the preceding inequality is strict. Taking z as an eigenvector for the eigenvalue t_1 gives a contradiction.
If t d < 1 for the smallest eigenvalue t d of D we get a symmetrical contradiction. It follows that D = I, proving the Proposition.
We saw in the preceding proof that if there is a critical point, Q is not concentrated in any proper linear subspace. More precisely, a sufficient condition for existence of a minimum (unique by Proposition 2) will include the following assumption from Kent and Tyler (1991, (2.4)). For a given function u(·) as in Proposition 2, let $a_0 := a_0(u(\cdot)) := \sup_{s>0} su(s)$. Since s → su(s) is increasing, we will have
$$su(s) \uparrow a_0 \ \text{ as } \ s \uparrow +\infty. \qquad (13)$$
Kent and Tyler (1991) gave the following conditions for empirical measures.

Definition. For a given number a_0 := a(0) > 0 let U_{d,a(0)} be the set of all probability measures Q on R^d such that Q(H) < 1 − (d − q)/a(0) for every linear subspace H of dimension q, 0 ≤ q ≤ d − 1. So we will need a_0 > d and assume it, e.g. in the following theorem. In the t_ν case later we will have a_0 = ν + d > d for any ν > 0. For a(0) > d, U_{d,a(0)} is weakly open and dense and contains all laws with densities. In part (b), Kent and Tyler (1991, Theorems 2.1 and 2.2) proved that there is a unique B(Q_n) minimizing Q_n h for an empirical Q_n ∈ U_{d,a(0)}.

Theorem 3. Let u(·) ≥ 0 be a bounded continuous function on [0, ∞) satisfying (7), with u(·) nonincreasing and s → su(s) strictly increasing. Then for a(0) = a_0 as in (13), if a_0 > d:
(a) If Q ∉ U_{d,a(0)}, then Qh has no critical points.
(b) If Q ∈ U_{d,a(0)}, then Qh attains its minimum at a unique B = B(Q) ∈ P_d and has no other critical points.
Proof. (a): Tyler (1988, (2.3)) showed that the condition Q(H) ≤ 1 − (d − q)/a 0 for all linear subspaces H of dimension q > 0 is necessary for the existence of a critical point as in (8) for Q = Q n . His proof shows necessity of the stronger condition Q n ∈ U d,a(0) when su(s) < a 0 for all s < ∞ (then the inequality Tyler [1988, (4.2)] is strict) and also applies when q = 0, so that H = {0}. The proof extends to general Q, using (7) for integrability.
Then {e j } d j=1 is an orthonormal basis of R d . Let S j be the linear span of e 1 , . . . , e j for j = 1, . . . , d, S 0 := {0}, D j := S j \ S j−1 for j = 1, . . . , d and noting that on D 0 , the integrand is 0. So we need to show that d j=1 ζ j,k → +∞. If we add and subtract ρ(δ 1 y ′ y) in the integrand and note that ρ(y ′ y) − ρ(δ 1 y ′ y) is a fixed bounded and thus integrable function, by (10), letting we need to show that d j=1 γ j,k → +∞. Since τ j,k ≥ δ 1 for all j and k and by (ii), γ j,k are bounded below for j = 1, . . . , r − 1. Because Q ∈ U d,a(0) , there is an a with d < a < a 0 close enough to a 0 so that for j = r, . . . , d, noting that S j−1 is a linear subspace of dimension j − 1 not depending on k. It will be shown that as k → ∞, for m = r, . . . , d, which for m = r will imply Claim 2. The relation (23) will be proved by downward induction from m = d to m = r.
For coordinates y_j := e_j′y, each ε > 0 and j = r, …, d, we have

(24) τ_{j,k}(e_{j,k}′y)² ≥ (1 − ε)τ_{j,k}y_j² for k ≥ k_{0,j}

for some k_{0,j}. Choose ε with 0 < ε < 1 − δ_1. Let k_0 := max_{r≤j≤d} k_{0,j}, so that for k ≥ k_0, as will be assumed from here on, (24) will hold for all j = r, …, d. A corresponding lower bound then follows, since τ_{i,k} ≥ δ_1 for all i. For j = r, …, d and τ ≥ δ_1 > 0, the quantity bounded above by a_0/2 converges to a_0/2 as τ → +∞ by (13), for all y ∈ D_j, since y_j ≠ 0 there. Because the derivative is bounded, the differentiation can be interchanged with the integral; the quantity in square brackets converges to a_0 Q(D_j) − 1 as k → ∞, and so ∂γ′_{j,k}/∂τ_{j,k} ∼ [a_0 Q(D_j) − 1]/(2τ_{j,k}). Choose a_1 with a < a_1 < a_0. It follows that for k large enough (26) holds, with equality if Q(D_j) = 0 and strict inequality otherwise. The inductive proof of (23) now begins with the case m = d. For the induction step in (23) from j + 1 to j, for j = d − 1, …, r if r < d, it will suffice to show that γ_{j,k} + (a/2)α_{j+1} log τ_{j+1,k} − (a/2)α_j log τ_{j,k} is bounded below. Since a > 0, α_{j+1} > 0 by (22), and τ_{j+1,k} ≥ τ_{j,k}, it will be enough to show that ∆_{j,k} := γ_{j,k} + (a/2)(α_{j+1} − α_j) log τ_{j,k} is bounded below. Inserting the definitions of α_j and α_{j+1} from (22), ∆_{j,k} is identically 0 if Q(D_j) = 0. If Q(D_j) > 0, then ∆_{j,k} → +∞ by (26) for j. The inductive proof of (23), and so of Claim 2, is complete. By (18), (19), and Claim 2, the infimum of Qh(A) equals the infimum over the set K of A with τ_1(A) ≥ δ_1, by (19), and τ_d(A) ≤ M for some M < ∞, by Claim 2. Then K is compact. Since Qh is continuous, in fact C¹, it attains an absolute minimum over K at some B in K, where its value is finite and it has a critical point. By Claims 1 and 2 again, any critical point lies in K. Thus Qh has a unique critical point B by Proposition 2, and Qh has its unique absolute minimum at B. So the theorem is proved.
Location and scatter t functionals
The main result of this section, Theorem 6, is an extension of results of Kent and Tyler (1991, Theorem 3.1), who found maximum likelihood estimates for finite samples, and of Dümbgen and Tyler (2005) for M-functionals, defined as unique critical points, for integer ν, to the case of M-functionals in the sense of absolute minima and any ν > 0. Kent and Tyler (1991, §3) and Kent, Tyler and Vardi (1994) showed that location-scatter problems in R^d can be treated by way of pure scatter problems in R^{d+1}, specifically for functionals based on t log likelihoods. The two papers prove the following Proposition 4 (clearly A is analytic as a function of Σ, µ and γ, and the inverse of an analytic function, if it exists and is C¹, is analytic, e.g. Deimling [1985, Theorem 15.3]).
(iii) If (28) holds, then a corresponding identity holds for any y ∈ R^d (a column vector). For M-estimation of location and scatter in R^d, we will have a function ρ: [0, ∞) → [0, ∞) as in the previous section. The parameter space is now the set of pairs (µ, Σ) for µ ∈ R^d and Σ ∈ P_d, and we have a multivariate ρ function (the two meanings of ρ should not cause confusion). For any µ ∈ R^d and Σ ∈ P_d, let A_0 := A_0(µ, Σ) := A(Σ, µ, 1) ∈ P_{d+1} by (28) with γ = 1, noting that det A_0 = det Σ. Now ρ can be adjusted, in light of (10) and (30).
We will need a hypothesis on P corresponding to Q ∈ U_{d+1,a(0)}. Kent and Tyler (1991) gave these conditions for empirical measures.
Definition. For any a(0) > 0, let V_{d,a(0)} be the set of all laws P on R^d such that P(J) < 1 − (d − q)/a(0) for every affine subspace J of R^d of dimension q, 0 ≤ q ≤ d − 1.
The next fact is rather straightforward to prove.
Proposition 5. For any law P on R^d, a > d + 1, and Q := P ∘ T_1^{−1}, where T_1(x) := (x′, 1)′, P ∈ V_{d,a} if and only if Q ∈ U_{d+1,a}.

For laws P ∈ V_{d,a(0)} with a(0) > d + 1, one can prove that there exist µ ∈ R^d and Σ ∈ P_d at which Ph(µ, Σ) is minimized, as Kent and Tyler (1991) did for empirical measures, by applying part of the proof of Theorem 3 restricted to the closed set where γ = A_{d+1,d+1} = 1 in (30). But the proof of uniqueness (Proposition 2) doesn't apply in general under the constraint A_{d+1,d+1} = 1. For minimization under a constraint the notion of critical point changes, e.g. for a Lagrange multiplier λ one would seek critical points of Qh(A) + λ(A_{d+1,d+1} − 1), so Propositions 1 and 2 no longer apply. Uniqueness will hold under an additional condition. A family of ρ functions that will satisfy the condition, as pointed out by Kent and Tyler [1991, (1.5), (1.6)], comes from elliptically symmetric multivariate t densities with ν degrees of freedom, as follows: for 0 < ν < ∞ and 0 ≤ s < ∞ let

(33) ρ(s) := ρ_{ν,d}(s) := (ν + d) log(1 + s/ν).

For this ρ, u is u_ν(s) := u_{ν,d}(s) := (ν + d)/(ν + s), which is decreasing, and s ↦ su_{ν,d}(s) is strictly increasing and bounded, so that (7) holds, with supremum and limit at +∞ equal to a_{0,ν} := a_0(u_ν(·)) = ν + d > d for any ν > 0.
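As a direct check of the last claims (our computation):
$$ s\,u_{\nu,d}(s) = \frac{(\nu+d)\,s}{\nu+s}, \qquad \frac{d}{ds}\Bigl[\frac{s}{\nu+s}\Bigr] = \frac{\nu}{(\nu+s)^2} > 0, \qquad \frac{(\nu+d)\,s}{\nu+s} \uparrow \nu+d \quad (s \uparrow +\infty), $$
so s ↦ su_{ν,d}(s) is indeed strictly increasing and bounded, with a_0 = ν + d in (13).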
The following fact was shown in part by Kent and Tyler (1991) and further by Kent, Tyler and Vardi (1994), for empirical measures, with a short proof, and with equation (34) only implicit. The relation that ν degrees of freedom in dimension d correspond to ν′ = ν − 1 in dimension d + 1, due to Kent, Tyler and Vardi (1994), is implemented more thoroughly in the following theorem and the proof in Dudley (2006). The extension from empirical to general laws follows from Theorem 3, specifically for part (a) of the next theorem, since a_0 = ν + d > d.

Theorem 6. Let A := A(Σ, µ, γ) be as in (28). Then for any y ∈ R^d and z := (y′, 1)′, we have
In particular, this holds for h as in part (e): if P ∉ V_{d,ν+d}, then Ph has no critical points. Kent, Tyler and Vardi (1994, Theorem 3.1) showed that if u(s) ≥ 0, u(0) < +∞, u(·) is continuous and nonincreasing for s ≥ 0, and su(s) is nondecreasing for s ≥ 0, with a_0 := lim_{s→+∞} su(s) > d, and if equation (35) holds with u in place of u_{ν,d} at each critical point (µ, Σ) of Q_n h for any Q_n, then u must be of the form u(s) = u_{ν,d}(s) = (ν + d)/(ν + s) for some ν > 0. Thus, the method of relating pure scatter functionals in R^{d+1} to location-scatter functionals in R^d, given by Theorem 6 for t functionals defined by functions u_{ν,d}, does not extend directly to other functions u. For 0 < ν < 1, we would get ν′ < 0, so the methods of Section 3 don't apply. In fact, (unique) t_ν location and scatter M-functionals may not exist, as Gabrielsen (1982) and Kent and Tyler (1991) noted. For example, if d = 1, 0 < ν < 1, and P is symmetric around 0 and nonatomic but concentrated near ±1, then for −∞ < µ < ∞, there is a unique σ_ν(µ) > 0 where the minimum of Ph_ν(µ, σ) with respect to σ is attained. Then σ_ν(0) ≈ 1 and (0, σ_ν(0)) is a saddle point of Ph_ν. Minima occur at some µ ≠ 0, σ > 0, and at (µ, σ) if and only if at (−µ, σ). The Cauchy case ν = 1 can be treated separately; see Kent, Tyler and Vardi (1994, §5) and references there.
For d > 1 there is no weakly continuous extension to all P , because such an extension of µ ν would give a weakly continuous affinely equivariant location functional defined for all laws, which is known to be impossible [Obenchain (1971)].
Differentiability of t functionals
One can metrize weak convergence by a norm. For a bounded function f from R^d into a normed space, the sup norm is ‖f‖_∞ := sup_x ‖f(x)‖. For a linear functional φ and an f with values in a finite-dimensional space, let φ*f be formed by applying φ to each coordinate of f in a basis; then, because φ is linear, φ*f doesn't depend on the choice of basis.
Let P(R^d) be the set of all probability measures on the Borel sets of R^d. For real-valued f on R^d, let ‖f‖_L := sup_{x≠y} |f(x) − f(y)|/|x − y| and ‖f‖_BL := ‖f‖_L + ‖f‖_∞, and for P, Q ∈ P(R^d) let β(P, Q) := sup{|∫ f d(P − Q)|: ‖f‖_BL ≤ 1}. Then β is a metric on P(R^d) which metrizes the weak topology, e.g. Dudley (2002, Theorem 11.3.3).
For an open set U ⊂ R^d and k = 1, 2, …, let C_b^k(U) be the set of real-valued functions on U having all partial derivatives of orders ≤ k bounded and continuous, with norm ‖f‖_{k,U}, the sum of the sup norms of f and of its partial derivatives of orders ≤ k. Then (C_b^k(U), ‖·‖_{k,U}) is a Banach space. For k = 1 and U convex in R^d, it's easily seen that C_b^1(U) is a subspace of BL(U, R), with equal norm for d = 1.
Substituting ρ_{ν,d} from (33) into (2) gives, for y ∈ R^d and A ∈ P_d, the function h_ν(y, A); then, reserving h_ν := h_{ν,d} for the location-scatter case as in Theorem 6(e), we get in (3), for the pure scatter case, the function H_ν. It follows from (11) and (37) that for A ∈ P_d and C = A^{−1}, gradients with respect to C are given by (39). For 0 < δ < 1 and d = 1, 2, …, define an open subset W_{δ,d} of P_d ⊂ S_d by (40). (c) The functional C ↦ J(C, Q, H) has the Taylor expansion (42) around any C ∈ P_d. The term ½ log det C doesn't depend on y and is clearly an analytic function of C, having derivatives of each order with respect to C bounded for A ∈ W_{δ,d}. For ‖∆‖ < 1/‖A‖, we can interchange the Taylor expansion of the logarithm with the integral and get part (c), (42). Then part (a) follows, and part (b) also, from (39). For part (d), as in the Appendix, Proposition 29 and (94), the jth derivative D^j f of a functional f defines a symmetric j-linear form d^j f, which in turn yields a j-homogeneous polynomial. Such polynomials appear in Taylor series as in the one-variable case, (95). Thus from (42), the jth Taylor polynomial of C ↦ J(C, Q, H), times j!, is given by an explicit integral, which clearly is bounded for ‖∆‖ ≤ 1 when the eigenvalues of C are bounded away from 0, in other words when ‖A‖ is bounded above. Then the jth derivatives are also bounded, by facts to be mentioned just after Proposition 29.
To treat t functionals of location and scatter in any dimension p we will need functionals of pure scatter in dimension p + 1, so in the following lemma we only need dimension d ≥ 2.
Usually, one might show that the Hessian is positive definite at a critical point in order to show it is a strict relative minimum. In our case we already know from Theorem 6(a) that we have a unique critical point which is a strict absolute minimum. The following lemma will be useful instead in showing differentiability of t functionals via implicit function theorems, in that it implies that the derivative of the gradient (the Hessian) is non-singular.
Proof. Each side of (42) can be evaluated directly. The second-order term in the Taylor expansion of C ↦ I(C, Q, H), e.g. (95) in the Appendix, using also (11) with C in place of A, is a quadratic form (44) in ∆ ∈ S_d. The Hessian, as in (94), is positive definite if and only if the quadratic form (44) is positive definite. The Hessian also defines a linear map H_A from S_d into itself via the Frobenius inner product. It suffices to consider QH as a function of C = A^{−1}, in other words to consider I(C, Q, H). Then we need to show that (44) is positive definite in ∆ ∈ S_d at the unique A = A_ν(Q) ∈ P_d such that ∇_A I(C, Q, H) = 0 in (41), or equivalently ∇_C I(C, Q, H) = 0. By the substitution z := A^{−1/2}y, and consequently replacing Q by q with dq(z) = dQ(y) and ∆ by A^{1/2}∆A^{1/2}, we get I = A_ν(q). It suffices to prove the lemma for (I, q) in place of (A, Q). We then use (8) and (41) with B = A = C = I. Now, z′z < ν + z′z for all z, and z′∆²z = 0 only for z with ∆z = 0, a linear subspace of dimension at most d − 1. Thus q(∆z = 0) < 1, (46) follows, and the Lemma is proved.
Example. For Q such that A_ν(Q) = I_d, the d × d identity matrix, a large part of the mass of Q can escape to infinity, Q can approach the boundary of U_{d,ν+d}, and some eigenvalues of the Hessian can approach 0, as follows. Let e_j be the standard basis vectors of R^d, and take c > 0 and p such that 1/[2(ν + d)] < p ≤ 1/(2d); Q is then chosen with atoms determined by c and p. To get A_ν(Q) = I_d, one applies (8) and (41). Also, an amount of probability for Q converging to d/(ν + d) escapes to infinity. The Hessian, cf. (46), has d arbitrarily small eigenvalues ν/(ν + c²).
For the relatively open set P_d ⊂ S_d and G^{(ν)} from (39), with G^{(ν)}(y, A) := (ν + d)yy′/(ν + y′A^{−1}y) − A, define the function F(φ, A) := φ(G^{(ν)}(·, A)), φ applied to each entry, so that F(φ_Q, A) = 0 exactly when A is a critical point of QH_ν. Then F is well-defined because G^{(ν)}(·, A) is a bounded and Lipschitz S_d-valued function of x for each A ∈ P_d; in fact, each entry is C¹ with bounded derivative, as is straightforward to check.
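To make this concrete for an empirical measure Q_n, here is a minimal numerical sketch (ours, not from the paper): fixed points of the classical reweighting iteration A ← (1/n) Σ_i u_ν(y_i′A^{−1}y_i) y_i y_i′ are exactly the zeroes of A ↦ F(φ_{Q_n}, A) above. The function name, starting matrix, tolerance and the sanity check below are arbitrary choices; Kent and Tyler (1991) treat existence, uniqueness and convergence.

import numpy as np

def t_scatter(y, nu, tol=1e-10, max_iter=1000):
    # Fixed-point iteration for the pure-scatter t_nu M-functional A_nu(Q_n):
    # A <- (1/n) sum_i u_nu(y_i' A^{-1} y_i) y_i y_i', with u_nu(s) = (nu + d)/(nu + s).
    n, d = y.shape
    A = np.eye(d)
    for _ in range(max_iter):
        s = np.einsum('ij,jk,ik->i', y, np.linalg.inv(A), y)  # s_i = y_i' A^{-1} y_i
        w = (nu + d) / (nu + s)                                # weights u_nu(s_i)
        A_new = (w[:, None] * y).T @ y / n                     # weighted second moment
        if np.linalg.norm(A_new - A) < tol:
            return A_new
        A = A_new
    return A

# Sanity check on a multivariate t_5 sample with scatter I_3:
rng = np.random.default_rng(0)
z = rng.standard_normal((5000, 3))
g = rng.chisquare(5.0, 5000) / 5.0
print(t_scatter(z / np.sqrt(g)[:, None], nu=5.0))  # should be close to I_3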
For d = 1 and a finite signed Borel measure τ on R, let (48) ‖τ‖_K := sup_{x∈R} |τ((−∞, x])|, the sup (Kolmogorov) norm. Let P and Q be two laws with distribution functions F_P and F_Q, so that ‖P − Q‖_K = sup_x |F_P(x) − F_Q(x)|.
The next statement and its proof call on some basic notions and facts from infinite-dimensional calculus, which are reviewed in the Appendix.
The M-functional given by Theorems 3 and 6 is also analytic for the norm on X.
Proof. (a): The function (φ, f) ↦ φ(f) is a bounded bilinear operator, hence analytic, from BL*(R^d) × BL(R^d) into R, and the composition of analytic functions is analytic, so it will suffice to show that A ↦ G^{(ν)}(·, A) from (39) is analytic from the relatively open set P_d ⊂ S_d into BL(R^d, S_d). By easy reductions, it will suffice to show that C ↦ (y ↦ yy′/(ν + y′Cy)) is analytic from P_d into BL(R^d, S_d). Fixing C ≡ A^{−1} and considering C + ∆ for sufficiently small ∆ ∈ S_d, we get a series (49) which we would like to show gives the desired Taylor expansion around C. For j = 1, 2, …, let g_j(y) := (−y′∆y)^j(ν + y′Cy)^{−j−1} ∈ R and let f_j be the jth term of (49), f_j(y) := g_j(y)yy′ ∈ S_d. It's easily seen that for each j, f_j is a bounded Lipschitz function into S_d. Since ν + y′Cy ≥ ν + |y|²/‖A‖, we have for all y the bound (50) on |g_j(y)|, and for the Frobenius norm ‖·‖_F on S_d it follows that for all y, (51): ‖f_j(y)‖_F ≤ ‖∆‖^j ‖A‖^{j+1}. Thus for ‖∆‖ < 1/‖A‖, the series converges absolutely in the supremum norm.
For any u ∈ R^d, having in mind u = u_t = y + t(z − y) with 0 ≤ t ≤ 1, a bound (53) holds, with (50) used for the first factor in the first term on the right. From (50) and (53) it follows that (54) holds for all u and v in R^d. Now, for all v, 2|v|/(ν + |v|²/‖A‖) ≤ ‖A‖^{1/2} and |v|²/(ν + |v|²/‖A‖)^{3/2} ≤ ‖A‖. It follows, integrating along the line from v = y to v = z for each fixed u, that a corresponding bound holds. By this and (53), since |u|²/(ν + |u|²/‖A‖)^{3/2} ≤ ‖A‖, the first term on the right in (54) is bounded above accordingly. For the second term on the right in (54), the second factor is g_j(z)(u′z)z − g_j(y)(u′y)y. The gradient of a vector-valued function is a matrix-valued function, in this case non-symmetric. It follows by (50) and (53) that a bound holds for any v. Multiplying by 2g_j(u), and integrating with respect to v along the line segment from v = y to v = z, we get a bound for the second term on the right in (54). Combining with (55), and then integrating the resulting bound with respect to u on the line from u = y to u = z, we get

|G(z, z) − 2G(y, z) + G(y, y)| ≤ ‖∆‖^{2j} ‖A‖^{2j+3} ‖C‖² (6j + 6)² |y − z|²

and so, by (52), ‖f_j‖_L ≤ ‖∆‖^j ‖A‖^{j+3/2} ‖C‖ (6j + 6). Since the right side of the latter inequality equals a factor linear in j, times ‖∆‖^j ‖A‖^j, times factors fixed for given A and not depending on j or ∆, we see that the series (49) converges not only in the supremum norm but also in ‖·‖_L for ‖∆‖ < 1/‖A‖, finishing the proof of analyticity of A ↦ (y ↦ yy′/(ν + y′Cy)) into BL(R^d, S_d), and so part (a).
For (b), A_ν exists by Theorem 3 with u = u_{ν,d}, so that a(0) = ν + d > d. The gradient of F with respect to A is the Hessian of QH_ν, which is positive definite at the critical point A_ν by Lemma 8 and so non-singular.
For (c), by parts (a) and (b), all the hypotheses of the Hildebrandt–Graves implicit function theorem in the analytic case, e.g. Theorem 30(c) in the Appendix, hold at each point (φ_Q, A_ν(Q)), giving the conclusions that: on some open neighborhood U of φ_Q in X, there is a function φ ↦ A_ν(φ) such that F(φ, A_ν(φ)) = 0 for all φ ∈ U; the function A_ν is C¹; and, since F is analytic by part (a), so is A_ν on U. Existence of the implicit function in a BL* neighborhood of φ_Q, and Theorem 3, imply that U_{d,ν+d} is a relatively ‖·‖*_BL-open set of probability measures, thus weakly open since β metrizes weak convergence. We know by Theorem 3, (33) and the form of u_{ν,d} that there is a unique solution A_ν(Q) for each Q ∈ U_{d,ν+d}. So the local functions on neighborhoods fit together to define one analytic function A_ν on U_{d,ν+d}, and part (c) is proved.
For part (d), we apply the previous parts with d + 1 and ν − 1 in place of d and ν respectively. Theorem 6 shows that in the t ν case with ν > 1, µ = µ ν and Σ = Σ ν give uniquely defined M-functionals of location and scatter. Proposition 4 shows that the relation (28) with γ ≡ 1 gives an analytic homeomorphism with analytic inverse between A with A d+1,d+1 = 1 and (µ, Σ), so (d) follows from (c) and the composition of analytic functions.
For part (e), consider the Taylor expansion (49) related to G^{(ν)}, specialized to the case d = 1, recalling that we treat location-scatter in this case by way of pure scatter for d = 2, where for a law P on R we take the law P ∘ T_1^{−1} on R² concentrated in vectors (x, 1)′. The bilinear form (f, τ) ↦ ∫ f dτ is jointly continuous with respect to the total variation norm on f and the sup (Kolmogorov) norm ‖·‖_K on finite signed measures (48). Thus it will suffice to show, as for part (a), that the S_2-valued Taylor series (49) has entries converging in total variation norm for ‖∆‖ < 1/‖A‖. An entry of the jth term f_j((x, 1)′) of (49) is a rational function R(x) = U(x)/V(x), where V has degree 2j + 2 and U has degree 2j + i for i = 0, 1, or 2. We already know from (51) that ‖R‖_sup ≤ ‖∆‖^j ‖A‖^{j+1}. A zero of the derivative rational function R′(x) is a zero of its numerator, which after reduction is a polynomial of degree at most 2j + 3. Thus there are at most 2j + 3 (real) zeroes. Between two adjacent zeroes of R′ the total variation of R is at most 2‖R‖_sup. Between ±∞ and the largest or smallest zero of R′, the same holds. Thus the total variation norm ‖R‖_{[1]} ≤ (4j + 9)‖R‖_sup. Since Σ_{j=1}^∞ (4j + 9)‖∆‖^j ‖A‖^{j+1} < ∞ for ‖∆‖ < 1/‖A‖, the conclusion follows.
If a functional T is differentiable at P for a suitable norm, with a non-zero derivative, then one can look for asymptotic normality of √n(T(P_n) − T(P)) by way of some central limit theorem and the delta-method. For this purpose the dual-bounded-Lipschitz norm ‖·‖*_BL, although it works for large classes of distributions, is still too strong for some heavy-tailed distributions. For d = 1, let P be a law concentrated in the positive integers with Σ_{k=1}^∞ P({k})^{1/2} = +∞. Then a short calculation shows that as n → ∞, √n Σ_{k=1}^∞ |(P_n − P)({k})| → +∞ in probability. For any numbers a_k there is an f ∈ BL(R), with the usual metric, such that f(k)a_k = |a_k| for all k and ‖f‖_BL ≤ 3. Thus √n ‖P_n − P‖*_BL → +∞ in probability. Giné and Zinn (1986) proved equivalence of the related condition Σ_{j=1}^∞ Pr(j − 1 < |X| ≤ j)^{1/2} < ∞, for X with general distribution P on R, to the Donsker property [defined in Dudley (1999, §3.1)] of {f: ‖f‖_BL ≤ 1}. But norms more directly adapted to the functions needed will be defined in the following section.
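The "short calculation" can be sketched as follows (our expansion of the remark, not spelled out in the text): nP_n({k}) is Binomial(n, p_k) with p_k := P({k}), and for np_k ≥ 1,
$$ \mathbb{E}\,\bigl|(P_n - P)(\{k\})\bigr| \;\ge\; c\,\sqrt{p_k(1-p_k)/n} $$
for an absolute constant c > 0, so √n Σ_k E|(P_n − P)({k})| ≥ c Σ_{k: p_k ≥ 1/n} √(p_k(1 − p_k)) → +∞ as n → ∞; a concentration argument then upgrades divergence of the means to divergence in probability.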
Some Banach spaces generated by rational functions
For the facts in this section, proofs are omitted if they are short and easy, or given briefly if they are longer. More details are given in Dudley, Sidenko, Wang and Yang (2007). Throughout this section let 0 < δ < 1, d = 1, 2, … and r = 1, 2, … be arbitrary unless further specified. Let MM_r be the set of monic monomials g from R^d into R of degree r, in other words g(x) ≡ x_1^{p_1} ··· x_d^{p_d} with p_1 + ··· + p_d = r. Let F_{δ,r} be the set of functions f(x) ≡ g(x) Π_{s=1}^r (1 + x′C_s x)^{−1}, where g ∈ MM_{2r} and, for s = 1, …, r, C_s ∈ W_δ. For j = 1, …, r, let F^{(j)}_{δ,r} := F^{(j)}_{δ,r,d} be the set of f ∈ F_{δ,r} such that C_s has at most j different values (depending on f). Then F_{δ,r} = F^{(r)}_{δ,r}. We will be interested in j = 1 and 2. Clearly F^{(1)}_{δ,r} ⊂ F^{(2)}_{δ,r} ⊂ ··· ⊂ F_{δ,r} for each δ and r.
Let h_C(x) := 1 + x′Cx for C ∈ P_d and x ∈ R^d. Then clearly f ∈ F^{(1)}_{δ,r} if and only if for some P ∈ MM_{2r} and C ∈ W_δ, f(x) ≡ f_{P,C,r}(x) := P(x)h_C(x)^{−r}. The next two lemmas are straightforward:

Lemma 11. Let f = f_{P,C,r} and g = f_{P,D,r} for some P ∈ MM_{2r} and C, D ∈ P_d. Then f − g expands, by telescoping h_C^{−r} − h_D^{−r}, into a finite sum of multiples of the following functions. For 1 ≤ k ≤ l ≤ d and j = 0, 1, …, r − 1, let h_{C,D,k,l,r,j}(x) := P(x)x_k x_l h_C(x)^{−j−1} h_D(x)^{j−r}. Then each h_{C,D,k,l,r,j} is in F_{δ,r+1}.

For j = 1, 2, … and f: R^d → R let ‖f‖^{*,j}_{δ,r} := ‖f‖^{*,j}_{δ,r,d} := inf{Σ_s |λ_s|: f = Σ_s λ_s g_s, g_s ∈ F^{(j)}_{δ,r}}, or +∞ if no such λ_s, g_s with Σ_s |λ_s| < ∞ exist. Lemma 10 implies that for Σ_s |λ_s| < ∞ and g_s ∈ F^{(j)}_{δ,r}, Σ_s λ_s g_s converges absolutely and uniformly on R^d. Let Y^j_{δ,r} := Y^j_{δ,r,d} := {f: R^d → R, ‖f‖^{*,j}_{δ,r} < ∞}. It's easy to see that each Y^j_{δ,r} is a real vector space of functions on R^d and ‖·‖^{*,j}_{δ,r} is a seminorm on it. The next two lemmas and a proposition are rather straightforward to prove.
Lemma 13. For any j = 1, 2, …, we have Y^j_{δ,r} ⊂ Y^j_{δ,r+1}. The inclusion linear map from Y^j_{δ,r} into Y^j_{δ,r+1} has norm at most 1.

Proposition 14. For any P ∈ MM_{2r}, let ψ(C, x) := f_{P,C,r}(x). Then C ↦ ψ(C, ·) from W_δ into Y^1_{δ,r}, viewed as a map into the larger space Y^2_{δ,r+2}, is Fréchet C¹.

Theorem 15. Let r = 1, 2, …, d = 1, 2, …, 0 < δ < 1, and f ∈ Y^1_{δ,r}, so that for some a_s with Σ_s |a_s| < ∞ we have f(x) ≡ Σ_s a_s P_s(x)/(1 + x′C_s x)^{k_s} for x ∈ R^d, where each P_s ∈ MM_{2k_s}, k_s = 1, …, r, and C_s ∈ W_δ. Then f can be written as a sum of the same form in which the triples (P_s, C_s, k_s) are all distinct. In that case, the C_s, P_s, k_s and the coefficients a_s are uniquely determined by f.
Proof. If d = 1, then P_s(x) ≡ x^{2k_s} and C_s ∈ (δ, 1/δ) for all s. We can assume the pairs (C_s, k_s) are all distinct. We need to show that if f(x) = 0 for all real x then all a_s = 0. Suppose not. Any f of the given form extends to a function of a complex variable z, holomorphic except for possible singularities on the two line segments where ℜz = 0, √δ ≤ |ℑz| ≤ 1/√δ, and if f ≡ 0 on R then f ≡ 0 also outside the two segments. For a given C_s take the largest k_s with a_s ≠ 0. Then by dominated convergence for sums, |a_s| = lim_{t↓0} t^{k_s} |f(t + i/√C_s)| = 0, a contradiction (cf. Ross and Shapiro, 2002, Proposition 3.2.2). Now for d > 1, consider lines x = yu ∈ R^d for y ∈ R and any u ∈ R^d with |u| = 1. We can assume the triples (P_s, C_s, k_s) are all distinct, by summing terms where they are the same (there are just finitely many possibilities for P_s). There exist u (in fact almost all u with |u| = 1, in a surface measure or category sense) such that P_s(u) ≠ P_t(u) whenever P_s ≠ P_t, and u′C_s u ≠ u′C_t u whenever C_s ≠ C_t, since this is a countable set of conditions, holding except on a sparse set of u's in the unit sphere. Fixing such a u, we then reduce to the case d = 1.
For any P ∈ MM_{2r} and any C ≠ D in W_δ, let f_{P,C,D,r} := f_{P,C,r} − f_{P,D,r}. By Lemma 11, for C fixed and D → C we have ‖f_{P,C,D,r}‖^{*,2}_{δ,r+1} → 0. The following shows this is not true if r + 1 in the norm is replaced by r, even if the number of different C_s's in the denominator is allowed to be as large as possible, namely r:

Proposition 16. For any r = 1, 2, …, d = 1, 2, …, and C ≠ D in W_δ, we have ‖f_{P,C,D,r}‖^{*,r}_{δ,r} = 2.

The proof is similar to that of the preceding theorem. Let h_{C,ν}(x) := ν + x′Cx, r = 1, 2, …, P ∈ MM_{2r}, and ψ^{(ν)}(C, x) := P(x)h_{C,ν}(x)^{−r}. Then ψ^{(ν)}(C, x) ≡ ν^{−r}ψ(C/ν, x) and we get an alternate form of Proposition 14:

Proposition 17. For any d = 1, 2, …, r = 1, 2, …, and 0 < δ < 1, the corresponding statements hold for ψ^{(ν)}, with δ/ν in place of δ.

Let R ⊕ Y^j_{δ,r} be the set of all functions c + g on R^d for any c ∈ R and g ∈ Y^j_{δ,r}. Then c and g are uniquely determined since g(0) = 0. Let ‖c + g‖^{**,j}_{δ,r,d} := |c| + ‖g‖^{*,j}_{δ,r,d}.
Proof. To adapt the proof of (a), A_ν(Q) given by Theorem 6(a) exists and is in W_δ for some δ ∈ (0, 1). Fix such a δ. For each A ∈ W_δ, each entry of G^{(ν)}(·, A) + A belongs to R ⊕ Y^1_{δ/ν,3,d} by Proposition 17(d), and the term −A in (39) not depending on y is analytic, thus C^∞, with respect to C = A^{−1}. Now for k ≥ 2 and r = k − 1 we consider d^r_C G^{(ν)}(·, A)∆^{⊗r} in (59) in place of G^{(ν)}(·, A), and spaces Y^m_{δ/ν,2m−1+r,d} in place of Y^m_{δ/ν,2m−1,d} for m = 1, 2. Each additional differentiation with respect to C adds 1 to the power of ν + y′Cy in the denominator. Then the proof of (a), now proving C^k under the corresponding hypothesis, can proceed as before.
For (b), the Hessian is the same as before. For (c), given Q ∈ U_{d,ν+d} and δ > 0 such that A_ν(Q) ∈ W_{δ,d}, parts (a) and (b) give the hypotheses of the Hildebrandt–Graves implicit function theorem, C^k case, Theorem 30(b) in the Appendix. Also as before, there is a ‖·‖_{δ,k+2,ν} neighborhood V of φ_Q on which the implicit function, say A_{ν,V}, exists. By taking V small enough, we can get A_{ν,V}(φ) ∈ W_{δ,d} for all φ ∈ V. For any Q′ ∈ U_{d,ν+d} such that φ_{Q′} ∈ V, we have uniqueness A_{ν,V}(φ_{Q′}) = A_ν(Q′) by Theorem 3. Thus the C^k property of A_{ν,V} on V with respect to ‖·‖_{δ,k+2,ν}, given by the implicit function theorem, applies to A_ν(·) on Q such that φ_Q ∈ V, proving (c).
Part (d), again using the earlier parts with (d + 1, ν − 1) in place of (d, ν), and now with C^k, then follows as before.
Here are some definitions and a proposition to prepare for the next theorem. For 1 ≤ q ≤ d − 1, let G(q, d) be the set of all q-dimensional linear subspaces of R^d, with its rotation-invariant probability measure γ_{d,q}, and for a law Q on R^d let J(r) be the set of H ∈ G(r, d) with Q(H ∖ {0}) > 0. We claim that if 1 ≤ q < r < d and K ∈ G(q, d), then γ_{d,r}{H ∈ G(r, d): H ⊃ K} = 0. It suffices to prove this for q = 1. Let v be one of the two unit vectors ±v in K. Then for g ∈ O(d), K ⊂ gH if and only if g^{−1}v ∈ H. Now g^{−1}v is uniformly distributed on the unit sphere and so is in H with probability 0, as claimed.
For r = 1, …, d − 1, let I(r) be the set of all subspaces H ∈ J(r) such that there is no K ∈ J(q) with 1 ≤ q < r and K ⊂ H. For any H_1 ≠ H_2 in I(r) we have H_1 ∩ H_2 ∈ G(m, d) for some m < r and Q((H_1 ∩ H_2) ∖ {0}) = 0 by assumption. Thus the sets H ∖ {0} for H ∈ I(r), each with probability > 0, are essentially disjoint for Q, so I(r) is countable for each r. It follows that for each r = 1, …, d − 1, γ_{d,r}(J(r)) ≤ Σ_{q=1}^r γ_{d,r}{H ∈ G(r, d): H ⊃ K for some K ∈ I(q)} = 0 by the claim and since each I(q) is countable. The Proposition is proved.
Here is a delta-method fact.
Theorem 20. (a) For any d = 2, 3, …, ν > 0, and Q ∈ U_{d,ν+d} with empirical measures Q_n, we have Q_n ∈ U_{d,ν+d} with probability → 1 as n → ∞, and √n(A_ν(Q_n) − A_ν(Q)) converges in distribution to a normal distribution N(0, S) on S_d. The covariance matrix S has full rank d(d + 1)/2 if Q is not concentrated in any set where a non-zero second-degree polynomial vanishes, e.g. if Q has a density. For general Q ∈ U_{d,ν+d}, if d = 1 the rank is exactly 1, and for d ≥ 2, the smallest possible rank of S is d − 1.
(b) For any d = 1, 2, …, 1 < ν < ∞ and P ∈ V_{d,ν+d} with empirical measures P_n, we have P_n ∈ V_{d,ν+d} with probability → 1 as n → ∞, and the functionals µ_ν and Σ_ν are such that, as n → ∞, √n(µ_ν(P_n) − µ_ν(P), Σ_ν(P_n) − Σ_ν(P)) converges in distribution to some normal distribution with mean 0 on R^d × R^{d²}, whose marginal on R^{d²} is concentrated on S_d. The covariance of the asymptotic normal distribution for µ_ν(P_n) has full rank d. The rank of the covariance for Σ_ν(P_n) has the same behavior as the rank of S in part (a).
By Lemma 10, for any k = 1, 2, …, Γ^{k+2,d}_{δ,ν} is a uniformly bounded class of functions. It is a class of rational functions of the y_j and C_{kl} in which the polynomials in the numerators and denominators have degrees ≤ m := 2k + 4. If A(y) and B(y) are any polynomials in y of degrees at most m, with B(y) > 0 for all y (as is the case here), then for any real c, the set {y: A(y)/B(y) > c} = {y: (A − cB)(y) > 0}, where A − cB is also a polynomial of degree at most m.
Let E(r, d) be the collection of all sets {x ∈ R^d: p(x) > 0} for all polynomials p (in d variables) of degree at most r. Then for each r and d, E(r, d) is a VC (Vapnik–Chervonenkis) class of sets, e.g. Dudley (1999, Theorem 4.2.1). So Γ^{k+2,d}_{δ,ν} is a VC major class of functions for E(2k + 4, d), and a VC hull class (defined in Dudley [1999, pp. 159–160]). It is uniformly bounded and has sufficient measurability properties by continuity in the parameter A ∈ P_d [Dudley (1999, Theorem 5.3.8)]. It follows that Γ^{k+2,d}_{δ,ν} is a universal Donsker class [Dudley (1999, Corollary 6.3.16, Theorem 10.1.6)], in other words, for any δ > 0 and r = 1, 2, … and any law Q, √n ∫ f d(Q_n − Q) is asymptotically normal (converges to a Gaussian process G_Q indexed by f) uniformly for f ∈ Γ^{k+2,d}_{δ,ν}. In particular we have the bounded Donsker property, i.e. √n ‖Q_n − Q‖_{δ,k+2,ν} is bounded in probability, where we now identify φ_Q with Q and likewise for Q_n. We also have that Γ^{k+2,d}_{δ,ν} is a uniform Glivenko–Cantelli class by Dudley, Giné and Zinn (1991, Theorem 6), so that ‖Q_n − Q‖_{δ,k+2,ν} → 0 almost surely as n → ∞. Thus almost surely for n large enough, Q_n ∈ V for the neighborhood V of Q defined in the proof of Theorem 18, so Q_n ∈ U_{d,ν+d} and A_ν(Q_n) is defined.
By Theorem 18(c) for k = 1 and (61), we have a first-order expansion of A_ν(Q_n) − A_ν(Q) as n → ∞, whose remainder term is o_p(1/√n) by the bounded Donsker property mentioned above.
To make DA_ν more explicit, one can use partial derivatives of F as follows. For any ζ ∈ X and A_ν := A_ν(Q), the partial derivative of F(φ, A) with respect to C, at A = A_ν, is given, as mentioned previously, by the Hessian (44), shown to be positive definite in Lemma 8.
Recall the Hessian linear map H := H_A from S_d to itself defined by (45). By a classical formula for derivatives of inverse functions, e.g. Deimling (1985, p. 150), DA_ν can be expressed through H^{−1}. Multiplying by √n, the resulting expression is asymptotically normal by a finite-dimensional central limit theorem. The rank of the covariance is preserved by the nonsingular H^{−1}. The rank is the largest size of a subset S of the set {(i, j): 1 ≤ i ≤ j ≤ d} for which the functions f_ij with f_ij(y) := y_i y_j/(ν + y′Cy) for (i, j) ∈ S are linearly independent with respect to Q modulo constant functions, i.e. there do not exist constants a_ij, (i, j) ∈ S, not all 0, and a constant c such that Σ_{(i,j)∈S} a_ij f_ij = c almost surely for Q. By a linear change of variables we can assume that A = I = C.
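Schematically (our rewriting of the preceding remarks; the sign depends on the convention chosen for H), the delta-method expression is
$$ \sqrt{n}\,\bigl(A_\nu(Q_n) - A_\nu(Q)\bigr) \;\approx\; -\,H_{A_\nu}^{-1}\Bigl[\sqrt{n}\int G^{(\nu)}(y, A_\nu)\,d(Q_n - Q)(y)\Bigr], $$
a fixed non-singular linear image of a normalized empirical average, hence asymptotically normal with covariance of the same rank.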
For d = 1, f_{11} cannot be a constant a.s., since Q ∈ U_{1,ν+1} is not concentrated in two points, so the rank (of the covariance) is exactly 1.
For any d, a linear dependence relation Σ_{(i,j)} a_ij f_ij = c with the a_ij not all 0 is equivalent to a quadratic polynomial equation Σ_{(i,j)} a_ij y_i y_j = c(ν + y′y) holding a.s. for Q. If no such equation holds, e.g. if Q has a density, then the rank has its maximum possible value d(d + 1)/2.
For any d ≥ 2, let e_j, j = 1, …, d, be the standard unit vectors in R^d, and let Q := (2d)^{−1} Σ_{j=1}^d (δ_{√d e_j} + δ_{−√d e_j}). Then for each i, j, (ν + d) ∫ y_i y_j dQ(y)/(ν + |y|²) = δ_ij, so A = I = C as desired. Clearly f_ij = 0 Q-a.s. for i ≠ j. One can check that Q ∈ U_{d,ν+d} for any d ≥ 2 and ν > 0.
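Indeed, |y|² = d on the support of this Q, so
$$ (\nu+d)\int \frac{y_i y_j}{\nu+|y|^2}\,dQ(y) \;=\; \int y_i y_j\,dQ(y) \;=\; \delta_{ij}, $$
the last equality being a one-line check from the choice of masses (assuming the displayed form of Q).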
We have Σ_{i=1}^d f_ii = |y|²/(ν + |y|²) = d/(ν + d) almost surely with respect to Q, so the rank is at most d − 1. Conversely, consider g(y) := Σ_{i=1}^{d−1} a_i f_ii(y) where some a_i ≠ 0. Then g(y) = 0 for y = ±√d e_d and g(y) = a_i d/(ν + d) ≠ 0 for y = ±√d e_i, each occurring with Q-probability > 0, so g is not constant a.s. Q, the d − 1 functions f_11, …, f_{d−1,d−1} are not linearly dependent mod constants, and the rank is exactly d − 1 in this case. Now for d ≥ 2 and any Q ∈ U_{d,ν+d}, still with A = C = I, by Proposition 19 and a rotation of coordinates we can assume that Q(y_1 = 0) = Q({0}). We claim that then the functions f_{1j} for j = 2, …, d are linearly independent mod constants with respect to Q. Suppose that for some real a_2, …, a_d not all 0 and constant c, y_1 z(y)/(ν + |y|²) = c a.s. Q, where z(y) := Σ_{j=2}^d a_j y_j. Since ∫ y_1 y_j dQ(y)/(ν + |y|²) = 0 for j ≥ 2, we must have c = 0 and so 1 = Q(y_1 z(y) = 0) = Q(z(y) = 0) + Q(y_1 = 0 ≠ z(y)), but the latter probability is 0 by choice of coordinates. Thus Q(z(y) = 0) = 1, but {z(y) = 0} is a (d − 1)-dimensional vector subspace, contradicting Q ∈ U_{d,ν+d}. Thus the rank is always at least d − 1 for d ≥ 2, which is sharp by the example. Now √n(A_ν(Q_n) − A_ν(Q)) has the same asymptotic normal distribution as √n times the expression in (63), since the other term in (62) is negligible. For part (b), let Q := P ∘ T_1^{−1} ∈ U_{d+1,ν+d} and apply part (a) to it with d, ν replaced by d + 1, ν′ = ν − 1. We can write Q_n = P_n ∘ T_1^{−1}. As in part (a), we will have almost surely P_n ∈ V_{d,ν+d} for n large enough. From the resulting A_{ν′}, we get µ_ν and Σ_ν for P and P_n via Proposition 4(a) with γ = 1. Then (µ_ν)_j = (A_{ν′})_{j,d+1} for j = 1, …, d, both for P, Q and for P_n, Q_n. We also have, for i, j = 1, …, d, (Σ_ν)_{ij} = (A_{ν′})_{ij} − (µ_ν)_i(µ_ν)_j, and likewise for P_n and Q_n. This transformation of matrices, although nonlinear, is smooth enough to preserve asymptotic normality (the finite-dimensional delta-method), where the following will show how uniformity in the asymptotics is preserved:

Lemma 21. If random vectors {U_in}_{i=1}^d for n = 1, 2, … and a constant vector u = {u_i}_{i=1}^d are such that √n{U_in − u_i}_{i=1}^d converges in distribution to a normal distribution with mean 0 on R^d, then so does the corresponding array of products √n{U_in U_jn − u_i u_j}.

Proof. For one product term, we have U_in U_jn − u_i u_j = u_j(U_in − u_i) + u_i(U_jn − u_j) + (U_in − u_i)(U_jn − u_j), where the last term is O_p(1/n) and so negligible, and the other terms are jointly asymptotically normal. The uniformity holds for the first two terms since the U_i are uniformly bounded. Each factor in the last term is uniformly O_p(1/√n), so their product is uniformly O_p(1/n).
Via an affine transformation of R^d, we can assume that µ_ν(P) = 0 and Σ_ν(P) = I_d. Then for Q = P ∘ T_1^{−1} we get A_{ν′}(Q) = I_{d+1}. If for some a_1, …, a_d not all 0 we have Σ_{j=1}^d a_j y_j y_{d+1}/(ν + |y|²) = c a.s. (Q) for a constant c, we must have c = 0 and thus Σ_{j=1}^d a_j y_j y_{d+1} = Σ_{j=1}^d a_j y_j = 0 a.s. for Q, where the latter equation also holds a.s. (P), contradicting P ∈ V_{d,ν+d}. Thus the asymptotic normal distribution for µ_ν(P_n) has full rank d. The rank of the covariance of the asymptotic normal distribution for Σ_ν(P_n) behaves as in part (a), by the same proof. Part (b) of the theorem is proved. Now, here is a statement on uniformity as P and Q vary. Recall W_δ as defined in (40).
Proposition 22. For any δ > 0 and M < ∞, the rate of convergence to normality in Theorem 20(a) is uniform over the set Q := Q(δ, M, ν) of all Q ∈ U_{d,ν+d} such that A_ν(Q) ∈ W_δ and (66) holds, or, in part (b), over all P ∈ V_{d,ν+d} such that Σ_ν(P) ∈ W_δ and (66) holds for P in place of Q.
Remark. The example after Lemma 8 shows that A = A_ν(Q) itself does not control Q well enough to keep it away from the boundary of U_{d,ν+d} or to give an upper bound on the norm of H_A^{−1}, which is needed for uniformity in the limit theorem. For a class Q of laws to have the uniform asymptotic normality of A_ν, uniform tightness is not necessary, but a special case (66) of uniform tightness is assumed.
Proof. A transformation as in the proof of Lemma 8 gives a law q with A_ν(q) = I_d such that (66) holds with Q replaced by q and M by K := M/√δ, noting that τ_1 ≤ 1/δ, where τ_1 is the largest eigenvalue of A_ν(Q)^{−1}.
In the proof of Theorem 20, it was shown that for any δ > 0 and k = 1, 2, …, Γ^{k+2,d}_{δ,ν} is a uniformly bounded VC major class of functions with sufficient measurability properties for empirical process limit theorems. To show that Γ^{k+2,d}_{δ,ν} is a uniform Donsker class in the sense defined and characterized by Giné and Zinn (1991), one can apply a convex hull property proved by Bousquet, Koltchinskii and Panchenko (2002).
It follows, replacing M by K to allow for the transformation, that the required bound holds, which implies (67) and so finishes the proof of part (a). As part of the proof of part (b), the next fact will show that the special-case tightness hypothesis (66) itself implies a bound on ‖A_ν(Q)‖ (although not, of course, on ‖A_ν(Q)^{−1}‖). A bound exists since A_ν has a breakdown point of 1/(ν + d) with regard to mass going to infinity [Tyler (1986, §3); Dümbgen and Tyler (2005, Theorem 5 and its proof)]. The next lemma provides specific constants which may not be sharp.
Norms based on classes of sets
Suppose ‖·‖_1 and ‖·‖_2 are two norms on a vector space V such that for some K < ∞, ‖x‖_2 ≤ K‖x‖_1 for all x ∈ V. Let U ⊂ V be open for ‖·‖_2 and so also for ‖·‖_1. Let v ∈ U and suppose a functional T from U into some other normed space is Fréchet differentiable at v for ‖·‖_2. Then the same holds for ‖·‖_1, since the identity from V to V is a bounded linear operator from (V, ‖·‖_1) to (V, ‖·‖_2), and so equals its own Fréchet derivative everywhere on V, and we can apply a chain rule, e.g. Dieudonné [1960, (8.12.10)].

If F is a class of bounded real-valued functions on a set χ, measurable for a σ-algebra A of subsets of χ, and φ is a finite signed measure on A, let ‖φ‖_F := sup{|∫ f dφ|: f ∈ F}, and for a class E of sets let ‖φ‖_E := sup{|φ(C)|: C ∈ E}. Let F be a VC major class of functions for E (defined in Dudley [1999, pp. 159–160]), where E ⊂ A, and suppose for some M < ∞, |f(x)| ≤ M for all f ∈ F and x ∈ χ. Then for any finite signed measure φ on A having total mass φ(χ) = 0 (e.g., φ = P − Q for any two laws P and Q), we have (68): ‖φ‖_F ≤ 2M‖φ‖_E, by the rescaling f ↦ (f + M)/(2M) to get functions with values in [0, 1] and then a convex hull representation [Dudley (1987, Theorem 2.1(a)) or (1999, Theorem 4.7.1(b))]; additive constants make no difference since φ(χ) = 0. As noted in the proof of Theorem 20, each Γ^{k+2,d}_{δ,ν} is a uniformly bounded VC major class for the VC class E(2k + 4, d) of sets (positivity sets of polynomials of degree ≤ 2k + 4). So by (61) and (68), for some M < ∞ depending on r, δ, ν, and d, we have (69): ‖φ‖_{δ,k+2,ν} ≤ M‖φ‖_{E(2k+4,d)} for all finite signed measures φ on R^d with φ(R^d) = 0. We have by the preceding discussion:

Corollary 24. For each d = 1, 2, …, and ν > 1, the Fréchet C^k differentiability property of the t_ν location and scatter functionals at each P in V_{d,ν+d}, as shown in Theorem 18 with respect to ‖·‖_{δ,k+2,ν}, also holds with respect to ‖·‖_{E(2k+4,d)}.

Each class E(r, d) for r = 1, 2, … is invariant under all non-singular affine transformations of R^d, and hence so is the norm ‖·‖_{E(r,d)}. Davies (1993, pp. 1851–1852) defines norms ‖·‖_L based on suitable VC classes L of subsets of R^d and points out Donsker and affine invariance properties. The norms ‖·‖_{δ,r,ν} are not affinely invariant.
On the other hand, note that M in (69) depends on δ, and there is no corresponding inequality in the opposite direction. Thus, Fréchet differentiability is strictly stronger for ‖·‖_{δ,k+2,ν} than it is for ‖·‖_{E(2k+4,d)}.
The one-dimensional case
In dimension d = 1, the scatter matrix Σ reduces to a number σ². The ρ and h functions in this case become, for θ := (µ, σ) with σ > 0, by (31) and (32),

ρ_ν(s) = (ν + 1) log(1 + s/ν), h_ν(x, µ, σ) = ρ_ν((x − µ)²/σ²) − ρ_ν(x²) + 2 log σ.

The function h_ν is bounded uniformly in x, for |µ| bounded and σ bounded away from 0 and ∞. Thus it is integrable for any probability distribution P on R.
Proof. Part (a) holds by the case of general dimension, Theorem 9(d), since σ 2 → σ is analytic for σ > 0. The other parts are special to d = 1.
Taking second partial derivatives, it is easily seen that these second partials are also bounded uniformly for σ ≥ δ, for any δ > 0.
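Before turning to σ(·), here is an illustrative numerical sketch (ours, not part of the text) of how the critical-point equations for P_n h_ν in d = 1 can be solved by the standard reweighting iteration for t likelihoods, with weights u_ν(((x_i − µ)/σ)²) = (ν + 1)/(ν + ((x_i − µ)/σ)²); the starting values, tolerance and function name are arbitrary choices.

import numpy as np

def t_location_scale(x, nu, tol=1e-12, max_iter=1000):
    # Reweighting iteration for the d = 1 t_nu location-scale M-functional (mu, sigma).
    mu, sigma = np.median(x), x.std() + 1e-12   # crude robust starting values
    for _ in range(max_iter):
        w = (nu + 1.0) / (nu + ((x - mu) / sigma) ** 2)   # weights u_nu(s_i)
        mu_new = np.average(x, weights=w)                 # weighted mean
        sigma_new = np.sqrt(np.mean(w * (x - mu_new) ** 2))
        if abs(mu_new - mu) + abs(sigma_new - sigma) < tol:
            break
        mu, sigma = mu_new, sigma_new
    return mu_new, sigma_new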
The following shows that σ(·) is C¹ and strictly positive except possibly at one large atom. (Here C¹ suffices for present purposes; it could be improved to analyticity, as in the proof of Theorem 9(c).) Lemma 26. On the set U := U_{ν,Q} of µ for which Q({µ}) < ν/(ν + 1), namely the whole line if (72) holds or the complement of a point if it fails, the function µ ↦ σ(µ) > 0 is C¹, as is the function µ ↦ Qh(µ, σ(µ)).
Next, let's consider a general Q such that (72) fails. The next fact, with part (a), implies parts (b) and (c) of Theorem 25.
By (86) again, (ν + 1)βµ² < νσ_µ² + µ² unless Q is concentrated at the two points 0, µ. That case is treated by Lemma 27(b), so we can neglect it here. Then the denominator of the last expression displayed is positive. Since (ν + 1)β ≥ 1 and Q(x = 0) ≤ 1/(ν + 1), it will suffice to show that (88) holds for all real x. The fraction on the left goes to 1 as x → ±∞, and there the inequality holds. At x = 0, a minimum of that fraction, the inequality also holds. Setting the derivative of the fraction equal to 0 gives one other root, where x = µ + (νσ_µ²/µ) and where the inequality holds (with equality just for this one value of x). Thus (88) and (85) are proved.
The proof that µ_ν(Q) = σ_ν(Q) = 0 is now completed as at the end of the proof of Lemma 27(b), where now, if µ_j → 0 and σ(µ_j) ≥ δ > 0, (86) is contradicted for j large enough. So Lemma 28 is proved.
If σ_k = 0 for all k then we have P_k({t_k}) ≥ ν/(ν + 1) for some t_k. By weak convergence, we must have t_k → 0, and µ_k = t_k by Lemma 28, so the conclusion holds. Thus we can assume from here on that σ_k > 0 for all k ≥ 1, taking another subsequence. For k = 0, 1, 2, …, let I_k(x) := (x − µ_k)²/(νσ_k² + (x − µ_k)²), so that 0 ≤ I_k(x) ≤ 1 for all x and k, a domination condition which is used below without further mention. For k ≥ 1, since σ_k > 0, we have by (77) and Lemma 28 that (90): ∫ I_k dP_k = 1/(ν + 1).
If µ_0 is finite and non-zero and σ_0 = 0, then we have I_k(x) → 1 except possibly for x = µ_0, and the convergence is uniform on compact subsets of {µ_0}^c, thus again contradicting (90). So the proof is complete except if µ_0 = ±∞ and σ_0 = +∞. Then by symmetry we can assume that µ_0 = +∞.
If σ k = o(µ k ) as k → ∞ then I k → 1, or if µ k = o(σ k ) as k → ∞ then I k → 0, in either case uniformly on compact sets and so contradicting (90). So, taking another subsequence, we can assume that as k → ∞, µ k /σ k → c for some c with 0 < c < ∞. Then uniformly on bounded intervals, I k → c 2 /(ν + c 2 ) as k → ∞, an increasing function of c, so (90) implies that c = 1.
Appendix
Derivatives in Banach spaces. Fréchet differentiability is often defined by statisticians, e.g. Huber (1981, §2.5), for functionals defined on the convex set of probability measures. As long as the definition is for a norm, this usually seems to cause no problems. But, in this paper, we need to apply implicit function theorems which require that a function(al) be defined on an open set in a Banach space. Thus we need the set U in the following usual mathematicians' definition of Fréchet differentiability to be open. No set of probability measures is open in any Banach space of signed measures.
Let X and Y be Banach spaces over the real numbers. Let B(X, Y) be the space of bounded, i.e. continuous, linear operators A from X into Y, with the norm ‖A‖ := sup{‖Ax‖: ‖x‖ = 1}. Let U be an open subset of X, x ∈ U, and f a function from U into Y. Then f is said to be Fréchet differentiable at x iff there is an A ∈ B(X, Y) such that ‖f(u) − f(x) − A(u − x)‖ = o(‖u − x‖) as u → x. If so, let (Df)(x) := A. Then f is said to be C¹ on U if it is Fréchet differentiable at each x ∈ U and x ↦ Df(x) is continuous from U into B(X, Y). Iterating the definition, the second derivative D²f(x) = D(Df)(x), if it exists for a given x, is in B(X, B(X, Y)), and the kth derivative D^k f(x) will be in B(X, B(X, …, B(X, Y)) …) with k B's. Then f is called C^k on U if its kth derivative exists and is continuous on U. If f is C^k on U for all k = 1, 2, …, it is called C^∞ on U. In some cases, higher order derivatives will be seen to simplify or to reduce to more familiar notions.
Suppose X is a finite-dimensional space R^d. Let e_1, …, e_d be the standard basis vectors of R^d. If x ∈ U, an open set in R^d, and f: U → Y, partial derivatives are defined by ∂f(x)/∂x_j := lim_{t→0} [f(x + te_j) − f(x)]/t, the usual definition except that the functions are Y-valued. Just as for real-valued functions, f is C¹ from U into Y if and only if each ∂f/∂x_j for j = 1, …, d exists and is continuous from U into Y, e.g. by Dieudonné [1960, (8.9.1)] and induction on d. Any linear map A from R^d into Y is automatically continuous and is given by A(x) ≡ Σ_{j=1}^d x_j A_j for some A_j ∈ Y, so we can identify A with {A_j}_{j=1}^d ∈ Y^d. Then if Df(x) exists, each ∂f(x)/∂x_j exists and Df(x) = {∂f(x)/∂x_j}_{j=1}^d. Again as for real-valued functions, we can define higher-order partial derivatives if they exist. Then f is C^k from U ⊂ R^d into Y if and only if each partial derivative D^p f(x) := ∂^{[p]}f/∂x_1^{p_1} ··· ∂x_d^{p_d}, with p := (p_1, …, p_d) and [p] := p_1 + ··· + p_d ≤ k, exists and is continuous from U into Y, e.g. by Dieudonné [1960, (8.9.1), (8.12.8)] and induction.
A function g from X into Y is called a k-homogeneous polynomial iff for some k-linear T: X^k → Y, we have g(x) ≡ g_T(x) := T(x, x, …, x) for all x ∈ X. Since g_{T^s} ≡ g_T, where T^s is the symmetrization of T, one can assume that T is symmetric. For the following, one can obtain T from g by the "polarization identity," e.g. Chae (1985), Theorem 4.6.
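For k = 2, for instance, the symmetric T is recovered from g explicitly by
$$ T(x, y) = \tfrac{1}{4}\bigl(g(x + y) - g(x - y)\bigr), $$
since g(x ± y) = g(x) ± 2T(x, y) + g(y).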
Proposition 29. For any two real vector spaces X and Y and k = 1, 2, …, there is a 1–1 correspondence between symmetric k-linear mappings T from X^k into Y and k-homogeneous polynomials g = g_T from X into Y.

Now suppose (X, ‖·‖) and (Y, |·|) are normed vector spaces. It is known and not hard to show that a k-linear mapping T from X^k into Y is jointly continuous if and only if ‖T‖ := sup{|T(x_1, …, x_k)|: ‖x_1‖ = ··· = ‖x_k‖ = 1} < ∞, and that a k-homogeneous polynomial g from X into Y is continuous if and only if ‖g‖ := sup{|g(x)|: ‖x‖ = 1} < ∞. In general, for a symmetric k-linear T with ‖T‖ < ∞ we have ‖g_T‖ ≤ ‖T‖ ≤ k^k‖g_T‖/k!, e.g. Chae (1985), Theorem 4.13. The bounds are sharp in general Banach spaces [Kopeć and Musielak (1956)], but if X is a Hilbert space we have ‖g_T‖ ≡ ‖T‖ [Bochnak and Siciak (1971)].
If f is a C^k function from an open set U ⊂ X into Y, then at each x ∈ U, D^k f(x) defines a k-linear mapping d^k f(x) from X^k into Y, d^k f(x)(x_1, …, x_k) := (···((D^k f)(x)(x_1))(x_2) ··· (x_k)).
Then d^k f(x) is symmetric, e.g. Chae (1985), Theorem 7.9. The corresponding k-homogeneous polynomial u ↦ g_{d^k f(x)}(u) will be written as u ↦ d^k f(x)u^{⊗k}. Also, f will be called analytic from U into Y iff it is C^∞ and for each x ∈ U there exist an r > 0 and k-homogeneous polynomials V_k from X into Y for each k ≥ 1 such that for any v ∈ X with ‖v − x‖ < r, we have v ∈ U and

(95) f(v) = f(x) + Σ_{k≥1} V_k(v − x),

with the series converging in Y. It is known that then necessarily, for each k ≥ 1 and u ∈ X,

(96) V_k(u) = d^k f(x)u^{⊗k}/k!.
Consider the bilinear functional γ(φ, x) := φ(x) on X′ × X. As (ψ, y) → (φ, x), clearly (ψ − φ)(x) and φ(y − x) give first derivative terms and (ψ − φ)(y − x) a second derivative term. We have that Dγ is continuous (linear) and D²γ has a fixed value (η, u) ↦ ((ζ, v) ↦ η(v) + ζ(u)) in B(X′ × X, B(X′ × X, R)), so D³γ ≡ 0. If U is an open subset of a Banach space Y and f is a C^k function from U into X, then (φ, y) ↦ φ(f(y)) is C^k on X′ × U by a chain rule, e.g. Dieudonné [1960, (8.12.10)]. For a point x in a normed space (X, ‖·‖), denote the open ball of radius r around x by B_r(x) := {y ∈ X: ‖y − x‖ < r}. The Hildebrandt–Graves implicit function theorem and related facts, essentially as stated by Deimling (1985, Theorem 15.1 p. 148, Corollary 15.1 p. 150, and Theorem 15.3 p. 151), are as follows:

Theorem 30. Let X, Y, Z be real Banach spaces, U ⊂ X and V ⊂ Y neighborhoods of x_0 and y_0 respectively. Let F: U × V → Z be jointly continuous, and continuously differentiable with respect to y ∈ V. Let F_2 be the (partial Fréchet) derivative of F with respect to y ∈ V, so that for each x ∈ U and y ∈ V, F_2(x, y)(·) is a bounded linear operator from Y into Z. Suppose that F(x_0, y_0) = 0 and that F_2(x_0, y_0)(·) is onto Z and has a bounded inverse, i.e. it is a topological isomorphism of Y onto Z. Then there exist r > 0, δ > 0 with B_r(x_0) ⊂ U and B_δ(y_0) ⊂ V such that there is exactly one map T from B_r(x_0) into B_δ(y_0) with F(x, T(x)) = 0 for all x ∈ B_r(x_0), and: (a) T is continuous. (b) If for some m ≥ 1, F ∈ C^m(U × V), then for some ρ with 0 < ρ < r, T is C^m on B_ρ(x_0). (c) If F is analytic on U × V, then for some τ with 0 < τ < r, T is analytic on B_τ(x_0).
The two Banach spaces Y and Z are topologically isomorphic if they are finite-dimensional and of the same dimension, e.g. both are R^d or both are S_d as in the present paper. Then we need that the linear transformation F_2(x_0, y_0)(·), or the associated matrix of partial derivatives in coordinates, is non-singular.
"year": 2008,
"sha1": "872322a225f3534b887fa4ed22b7fdb1f553cc1f",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "618b358d0342c050f733e59b0a5e817fd919bb2b",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
520123 | pes2o/s2orc | v3-fos-license | Oestrogen-receptor status and endocrine therapy of breast cancer: response rates and status stability.
The concentration of cellular oestrogen receptor (RE) was measured in both the soluble and nuclear-pellet fractions of biopsies from 1,000 breast cancers. Data suggest that functional steroid RE is always in equilibrium between the soluble and nuclear fractions. However, biopsies from only one-third of patients contained detectable amounts of high-affinity RE in both fractions. Thirty patients out of 42 (71%) whose biopsies contained RE in both fractions, showed objective remission after receiving some form of hormonal manipulation as sole treatment. Response rates in the other categories ranged from 9% for those whose biopsies contained no detectable RE to 24% for those who displayed soluble RE alone. The presence of RE in both fractions of primary disease, whereas RE-negativity was maintained during progression from primary to secondary disease. Other aspects of RE status in relation to stage of disease are analysed.
in both fractions of primary disease was found to be an unreliable index of RE status in subsequent secondary disease, whereas RE-negativity was maintained during progression from primary to secondary disease. Other aspects of RE status in relation to stage of disease are analysed. ENDOCRINE THERAPY is a long-established treatment of secondary breast cancer (Beatson, 1896). It is, however, successful in only a small (, 25%) proportion of cases (King & Roberts, 1979). The choice of endocrine therapy for a particular patient has been made on the basis of several clinical features, such as menopausal status, disease-free interval, site of dominant lesion, etc., and the response to any earlier endocrine therapy (Pearson & Ray, 1960;McGuire et al., 1977). The absence of a single reliable index of hormone dependence of breast tumours has led to a marked decrease in recent years of the use of ablative therapies.
Preliminary data from our laboratory (Laing et al., 1977) have already indicated that, in patients with advanced breast cancer, response to endocrine therapy was more likely when the tumour contained oestrogen receptor (RE) in both the soluble and pellet fractions. The same data revealed the existence of oestrogen receptor in the pellet (REn) in the absence of any soluble receptor (REc), a previously unconsidered possibility. This study also raised the question of the existence of REn that was either unfilled by steroid or, alternatively, bound to chromatin in a manner which allowed the steroid to dissociate at low temperatures. The present paper reports both soluble and pellet RE status for 1,000 patients. It then analyses the breakdown of RE status in relation to menopausal and nodal status and to stage of disease. Responses to endocrine therapy of 129 patients with advanced disease in relation to RE status are also reported.
Given the value of RE status as a tool in determining therapy for secondary disease, it would be very useful if such status could be shown to reflect that in primary disease. This would be particularly valuable in cases in which secondary disease is surgically inaccessible. Similarly, the maintenance of RE status between early (often local) and later recurrences is also of considerable interest. RE status in biopsies of both primary and secondary disease is reported for 32 patients, and corresponding data between early and later recurrence for 20 patients.

Materials

3H-oestradiol-17β was obtained from The Radiochemical Centre, Amersham, England. All reagents were AnalaR grade. Solutions were prepared in glass-distilled water, since the presence of metal ions was found to interfere with the assay of receptors. Human breast tumour tissue was obtained from 8 hospitals in the Glasgow area.
Methods
Tissue fractionation.—Tissue was collected fresh and transported from the operating theatre to the laboratory on ice. Wherever possible, RE assay was performed the same day, but, when this could not be achieved, storage was at −20°C in sucrose buffer (0.25M sucrose, 1.5mM MgCl2, 10mM Hepes, pH 7.4)/50% glycerol (v:v) (Leake et al., 1979). Soluble and nuclear fractions were then prepared as follows.

About 150 mg of tissue was dissected from the area adjacent to that removed for pathological examination. Homogenization was carried out at 50 mg/ml in 10mM Hepes, 1.5mM EDTA, 0.25mM DTT, pH 7.4 (HED buffer), using 2 × 10 s bursts at a setting of 150 on an Ultra-Turrax, model TP 18/2, followed by further homogenization with a glass tissue grinder (Kontes Duall). The homogenate was centrifuged at 5000 g for 5 min at 4°C to yield a "cytosol" supernatant and a crude nuclear pellet. The pellet was washed ×3 in 0.15M NaCl, 10mM Hepes (pH 7.4), and finally resuspended to the original volume in buffered saline. A wash with 0.1% Triton X-100 was on occasion incorporated at this stage to further purify the nuclear material, but this did not appreciably alter the level of nuclear binding. Further purification of the pellet fraction by differential centrifugation through sucrose (finally 2.4M sucrose, 1.5mM MgCl2) did not significantly alter RE content expressed per unit DNA.
Assay of receptors.—The initial procedures in the assay system were identical for both tissue fractions. 150 µl aliquots of cytosol or nuclear suspension were added to 50 µl aliquots of 3H-oestradiol-17β to give final concentrations of steroid of 1, 1.5, 2, 4, 6 and 8 × 10^-10 M. Two additional tubes were also set up containing 10^-9 M 3H-oestradiol with or without 10^-7 M unlabelled diethylstilboestrol (DES) to determine the specificity of binding. All tubes were then incubated at 4°C for 18 h. The inclusion of the protease inhibitors Trasylol and/or phenylmethylsulphonyl fluoride (PMSF) in the incubation medium did not appear to enhance RE measurement. After incubation, the amount of steroid bound was determined for each fraction as follows.
Nuclear receptors (REn).—Following incubation, 100 µl aliquots were removed from each tube and added to 5 ml saline. This mixture was poured down the chimney of a Millipore filter apparatus on to a pre-wetted Whatman GF/C glass-fibre filter. The tube was washed out with 5 ml saline, the washing poured on to the filter, and the filter further washed with 3 × 4 ml saline under suction. After removal of the chimney, the edge of the filter was washed and the filter removed into a scintillation vial prior to drying overnight at 60°C. 10 ml toluene/PPO (5 g/l) scintillant was added, and the samples counted at 35% efficiency in a Philips or Searle Mark III liquid scintillation analyser.
Protein and DNA assay.—Cytosol protein concentration was determined by the method of Lowry. DNA content was determined by a modification of the method of Burton (1956), as described by Katzenellenbogen & Leake (1974).
Definition of positivity
To be classed as RE+, the binding displayed by either tissue fraction was required to fulfil 3 criteria: (a) yield an unambiguous Scatchard plot, which produces (b) a straight line giving a Kd in the range 0.5–5 × 10^-10 M; (c) specificity must be established by competition with excess diethylstilboestrol. RE concentrations as low as 3 fmol/mg protein and 25 fmol/mg DNA were detected for the soluble and pellet fractions respectively.
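For illustration only (this numerical sketch and its example values are ours, not part of the original protocol), criteria (a) and (b) amount to a linear Scatchard plot: bound/free = (Bmax − bound)/Kd, so a least-squares line through the points (bound, bound/free) has slope −1/Kd and x-intercept Bmax.

import numpy as np

# Hypothetical saturation-analysis data (invented for illustration),
# specifically bound and free steroid at equilibrium, in common units:
bound = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
free = np.array([0.4, 1.0, 2.0, 4.0, 10.0])

# Scatchard linearization: bound/free = (Bmax - bound)/Kd.
slope, intercept = np.polyfit(bound, bound / free, 1)
Kd = -1.0 / slope          # dissociation constant (here 2.0)
Bmax = -intercept / slope  # receptor concentration at saturation (here 60.0)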
Response to hormone therapy was assessed in patients with secondary disease for whom (a) RE status had been determined before the initiation of any therapy, (b) endocrine therapy alone was applied as first treatment during the period of assessment. The criteria for response were those suggested by the British Breast Group (1974). In brief, these involve at least 50% regression of existing lesions, and no appearance of new lesions within a 6-month period. Only patients satisfying these criteria for at least 6 months are recorded as having responded (Table VII).
Primary disease
The distribution of patients by RE status is shown in Table I. This is a compilation of data from pre- or post-menopausal patients with primary disease. Patients with RE in both soluble and pellet fractions are classified as (+/+), those with only REc as (+/0), those with only REn as (0/+), and those with RE in neither fraction as (0/0).
Tumours with functional oestrogen RE would be expected to display both REc and REn, even at very high plasma oestrogen levels, since an equilibrium is always maintained between filled receptor in the 2 pools (Williams & Gorski, 1971). The pattern at first presentation was similar when biopsies were classified as containing functional RE (+/+) or as completely lacking in REc (0/0). This is, perhaps, surprising in view of the concept that absence of RE indicates a more rapidly progressing tumour (Meyer et al., 1977). The distribution of RE was also reanalysed in relation to nodal status. It was thought that patients with RE− tumours would be more likely to exhibit nodal involvement than those with RE+ disease. However, the data in Table IV do not support this idea. The potential for nodal infiltration is clearly not dependent on receptor status. This observation agreed with that of Hahnel et al. (1979), although a loose relationship between nodal involvement and RE-negativity was reported by Allegra et al. (1979).
Receptor status stability

Much of the early interest in RE status was derived from the idea that measurements on biopsies of primary disease would act as reliable therapeutic indices once secondary growth was detected (King, 1975; Jensen, 1975). However, practical demonstration of the stability of RE status between biopsy and the appearance of secondary disease has been limited, due to the difficulties in (a) maintaining a stable patient population and (b) obtaining sufficient material from the metastatic site. Table V shows RE status determined in primary and subsequent secondary disease from 32 patients. None of the patients received known relevant therapy in the intervening period. In 20 cases (63%) RE status is the same for both biopsies. Only 5/10 receptor-positive cases remained +/+, indicating that hormone dependence in primary disease is not necessarily retained in secondary disease. Only one out of 17 RE− patients (0/0) developed RE+ (+/+) secondary growth. Both tamoxifen (Leake et al., 1979) and chemotherapy have been found to either block RE synthesis or interfere with the RE assay, but this patient had received no such relevant therapy prior to biopsy.
Thus a RE-primary is almost certain to give rise to hormone insensitive secondary disease.
When RE status is compared between first occurrence and later recurrences (Table VI), it is again clear that 0/0 disease generally retains this status. Of 12 biopsies examined, only one changed status. Once more, it was striking that change of status was common in biopsies with RE in only one fraction. Of the RE+ biopsies obtained in early recurrence, 3/4 retained functional RE (+/+).
Further examination of the group of patients whose biopsies had REc alone was carried out. It was apparent (Figure) that the RE concentration in tissues with REc alone (+/0) was lower than that in RE+ (+/+) biopsies. However, although a higher RE concentration may indicate a good chance of response to endocrine therapy, no absolute rule applies.
Endocrine therapy of advanced disease

All patients with advanced disease for whom the RE status of an appropriate biopsy was known were monitored throughout subsequent treatment. The response of those patients who received any type of hormonal therapy as first-line treatment for any period was noted in relation to the criteria listed earlier. The results are summarized in Table VII. Patients whose biopsies showed an intact RE system had a very good chance of responding to some type of endocrine therapy (most commonly tamoxifen treatment). Only 5 patients (9%) of those in the truly RE− class showed good response. In each case these patients had received tamoxifen, and may have responded to one of the actions of this drug which is not RE-mediated (Tisdale, 1977). It is striking that the patients whose biopsies contained RE in only one fraction (0/+ or +/0) behaved in a manner similar to the RE− group, suggesting that these receptors are non-functional, though there is no indication whether the fault lies in the RE itself or in some cellular recognition site. Of the patients in Table VII, those who did not experience a complete response for 6 months were divided as follows: in the (+/+) category 8 had progressive disease, 1 was static and 3 showed a partial response; in the (0/0) category 48 had progressive disease, 3 were static and 2 showed a partial response; in the (+/0) category 10 had progressive disease, 1 was static and 2 showed a partial response; in the (0/+) category all 8 patients had progressive disease. Of the 129 patients considered, only 27 were pre-menopausal and 10 menopausal. The response rates quoted, therefore, apply principally to post-menopausal disease. It is, however, significant that 18/27 pre-menopausal patients had both biopsies with no detectable RE (0/0) and suffered progressive disease. Of the 42 patients experiencing complete response, 18 had local recurrence, 10 had recurrence in gland and/or skin, 7 in bone and the remainder at one or more distant sites. Of 74 patients with progressive disease, 13 had local recurrence, 21 skin and/or gland, 21 bone, 7 pleura, 7 liver and the remainder at one or more distant sites.
DISCUSSION
Both established dogma (Leake, 1976) and recent interpretations (Sheridan et al., 1979) of steroid hormone action essentially require that the functional hormone-RE complex forms an equilibrium between the soluble and nuclear-pellet fractions of the cell. Such an equilibrium is rapidly established at 37°C, but can also be established at 0°C over 22 h (Traish et al., 1979). Similarly, the distribution of RE between the soluble and nuclear-pellet fractions of target tissue has also been successfully measured at both 37°C and 4°C by use of different incubation times, though the decreased stability of receptor at 37°C in the cell-free environment meant that assay at 4°C (or 20°C) gave more reproducible results (Leake et al., 1979). Thus, hormonal dependence of a particular human breast tumour biopsy should be reflected in the presence of measurable quantities of RE in both the soluble and pellet fractions of that biopsy.
After adopting strict criteria for the measurement of cellular RE (Leake et al., 1979), it was found that only one-third of patients with primary disease yielded biopsies containing functional RE (Table I), i.e. RE in both soluble and pellet fractions. Biopsies from about half the patients had undetectable levels of high-affinity RE. This is a surprisingly large proportion, but has been maintained throughout the study. Further, the low rate of response of advanced disease to hormone therapy in patients lacking RE (Table VII), taken together with the observation that RE− (0/0) primaries give rise to RE− secondaries (Table V), suggests that such a high incidence of hormonal insensitivity is real. The 2 initially unexpected groups of patients, (+/0) and (0/+) (Laing et al., 1977), continue to present. Such patients, with RE in one fraction of the biopsy only, have now also been observed in other studies (Panko & MacLeod, 1978; Thorsen, 1979; Barnes et al., 1979).
Much of the value of determining RE status in primary disease depends upon the assumption that RE status in subsequent secondary disease will faithfully reflect that in the primary biopsy. However, in a study of 32 patients for whom RE status was determined in both primary and advanced disease (Table V), only half the primaries with fully functional RE gave rise to (+/+) secondaries. This is a disappointingly low level of consistency of RE status between primary and secondary disease, but may reflect the fact that the secondary samples are necessarily selected from surgically accessible sites. The consistency of RE status might be higher if all sites of secondary disease were considered. These patients had received no adjuvant therapy, so the loss of RE must have resulted from the natural progression of the disease. Further studies in progress may clarify this situation.
It was more encouraging to find that RE status in only 1 patient out of 17 reverted from RE− primary to fully RE+ secondary disease. Patients whose receptors fell in the abnormal categories (+/0 or 0/+) were found to show a high level of variation between primary and secondary disease. However, there were no cases of change to RE+ status. Hence patients whose primaries are either RE− or abnormal have very little chance of subsequently responding to hormonal manipulation.
The follow-up data in Table VII show that patients whose biopsies of secondary disease contain fully functional RE have a much better chance of objective response to hormonal manipulation than do those with either no RE or RE in one fraction only. The criteria of clinical response used in this paper are quite severe (British Breast Group, 1974) and similar to those proposed by the UICC (Hayward et al., 1977). Stoll (1977) proposed shorter periods of sustained response. Adoption of less stringent criteria will increase the response rate in any series. However, no biological index is ever likely to identify potential responders with complete accuracy, since so many variables are involved. Alternative indices of hormonal dependence have been tried, and perhaps the most successful is measurement of soluble progesterone receptor (RP), a product of oestrogen action in normal target tissue. Recent studies by Barnes et al. (1979), Thorsen & Stoa (1979) and in our own laboratory suggest that although the presence of RP is not always associated with an improved clinical response, it is usually associated with the presence of fully functional RE and so yields a similar success rate in the identification of responders to hormone therapy.

We are extremely grateful to the Cancer Research Campaign, whose financial assistance has been essential to this study. We should also like to thank Professor R. M. S. Smellie for his provision of facilities and also for his helpful comments and criticisms. We are very pleased to acknowledge the co-operation of many of our surgical colleagues, especially Mr Frank Crossling, Mr Colin McArdle and Mr John Maxwell Anderson.
"year": 1981,
"sha1": "7d0cca9d72b269d3f59ec58bd5c09f8647114f73",
"oa_license": "implied-oa",
"oa_url": "https://europepmc.org/articles/pmc2010491?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "7d0cca9d72b269d3f59ec58bd5c09f8647114f73",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Autophagy‐mediated activation of the AIM2 inflammasome enhances M1 polarization of microglia and exacerbates retinal neovascularization
Abstract Retinopathy of prematurity (ROP) is a retinal neovascularization (RNV) disease characterized by abnormal blood vessel development in the retina. Importantly, the etiology of ROP remains understudied. We re‐analyzed previously published single‐cell data and discovered a strong correlation between microglia and RNV diseases, particularly ROP. Subsequently, we found that reactive oxygen species reduced autophagy‐dependent protein degradation of absent in melanoma 2 (AIM2) in hypoxic BV2 cells, leading to increased AIM2 protein accumulation. Furthermore, we engineered AIM2 knockout mice and observed that RNV was significantly reduced compared with wild‐type mice. In vitro vascular function assays also demonstrated diminished angiogenic capabilities following AIM2 knockdown in hypoxic BV2 cells. Mechanistically, AIM2 enhanced the M1‐type polarization of microglia via the ASC/CASP1/IL‐1β pathway, resulting in RNV. Notably, the administration of recombinant IL‐1β protein exacerbated angiogenesis, while its inhibition ameliorated the condition. Taken together, our study provides a novel therapeutic target for ROP and offers insight into the interaction between pyroptosis and autophagy.
INTRODUCTION
Retinopathy of prematurity (ROP) is a disorder characterized by retinal neovascularization (RNV) that has emerged as a prominent contributor to childhood blindness globally. 1,2 ROP typically presents as retinal ischemia, pathological angiogenesis, and fibrous tissue hyperplasia, leading subsequently to retinal detachment and eventually irreversible blindness. 3 Anti-vascular endothelial growth factor (VEGF) drugs have been widely used in clinical practice to inhibit the growth of neovascularization in ROP. However, their efficacy is limited to a partial blockade of angiogenesis. 4,5,6-8 Presently, there remains an urgent need for a safe and effective therapy for ROP.
Researchers have recently investigated the role of immune-mediated inflammation in the initiation, progression, and treatment of RNV diseases. 9 Microglia, as resident immune cells in the retina, play a crucial role in both the immune response and angiogenesis. 10,11 It has been reported that microglia interact with endothelial cells to form neurovascular units (NVUs) during the early stages of central nervous system development, which supports blood vessel formation and integrity. 12,13 Gao et al. found that retinal vascularization is reduced following selective clearance of microglia by PLX3397 or clodronate liposomes. 14 These studies highlight the pivotal role of microglia in angiogenesis.
Microglia can detect and respond to danger signals and pathogen-associated molecular patterns (PAMPs) through pattern recognition receptors (PRRs), such as Toll-like receptors, NOD-like receptors (NLRs), and absent in melanoma 2 (AIM2). 15,16 Upon activation, microglia can become polarized and produce a variety of proinflammatory cytokines and pro-angiogenesis factors that contribute to neuroinflammation and the promotion of vessel formation. 17,18 Furthermore, the activation of the inflammasome in microglia can amplify the inflammatory response, leading to the release of mature IL-1β and IL-18. 19,20 In ROP, oxidative stress occurs at an early stage, leading to the activation of molecular patterns associated with cellular damage as well as extensive DNA strand breaks and damage. AIM2, an intracellular DNA sensor, is one of the PRRs and exerts a substantial influence on the modulation of the innate immune response. 21,22 The C-terminal region of AIM2 can recognize double-stranded DNA (dsDNA) and other ligands. Additionally, AIM2 recruits the downstream molecule apoptosis-associated speck-like protein containing a CARD (ASC). In this process, ASC acts as an important bridging molecule that recruits the effector protein CASP1 into the inflammasome complex. 23 The activation of AIM2 brings CASP1 molecules into close proximity to each other, thus producing autocatalytically active CASP1 molecules. Then, the precursor cytokines IL-1β and IL-18 are cleaved and subsequently secreted extracellularly. 24 Previous studies reported that intercellular adhesion molecule 1 is regulated by IL-1β, resulting in neovascularization. 25 However, the involvement of AIM2 in angiogenesis has yet to be explored.
In this study, we developed an oxygen-induced retinopathy (OIR) mouse model that is commonly used to study ROP. To investigate the role of microglia-mediated pyroptosis in OIR, we examined the expression of inflammasome-related genes, including Nlrp1, Nlrp3, Nlrc4, Nlrp6, Nlrp12, and Aim2. Notably, we discovered that while Aim2 mRNA expression was significantly decreased in OIR, its protein level was significantly increased. Interestingly, it was found that under hypoxic conditions, autophagy was inhibited, resulting in an increase in the AIM2 protein level. Furthermore, we engineered AIM2 knockout (AIM2 KO) mice and found that neovascularization in AIM2 KO mice with OIR was attenuated. A series of genetic and functional deletion experiments in BV2 cells revealed that silencing AIM2 inhibited microglial M1-type activation and reduced the expression of ASC, CASP1, IL-18, and IL-1β, which ultimately attenuated angiogenesis. Collectively, our data suggest that the AIM2/ASC/CASP1/IL-1β microglial pyroptosis axis represents a potential anti-angiogenic therapy for ROP.
Single-cell analysis reveals a strong association between microglia and RNV
To clarify the pathology of RNV diseases, we analyzed the single-cell data from our previous study (GSE228370). 26 It was found that microglia were strongly associated with RNV diseases, including ROP, diabetic retinopathy (DR), and age-related macular degeneration (AMD); this effect was especially pronounced for ROP (Figure 1A-C). In addition, analysis using uniform manifold approximation and projection (UMAP) revealed a significantly elevated score for microglia, indicating their prominent role in RNV diseases (Figure 1D-G). Gene set variation analysis (GSVA) and the FeaturePlot showed that microglia were enriched in positive regulation of angiogenesis, inflammasome, and autophagy pathways (Figure 1H-N). Furthermore, cell-cell communication analysis showed that microglia interact closely with endothelial cells (Figure 1O and Figure S1A,B). Taken together, these results identify a pivotal role for microglia in angiogenesis.
AIM2 protein levels increased in the retinas of OIR mice and in hypoxic BV2 cells
We designed a flowchart schematizing the OIR experiment and the phenotypic characteristics of angiogenesis (Figure 2A). Retinal flat-mount images were taken using a confocal microscope on postnatal days 13 (P13), 15 (P15), and 17 (P17). Prolonged hypoxia resulted in aggravated neovascularization, which peaked on P17 (Figure 2B), consistent with previous results. 27,28 Supporting these findings, hematoxylin and eosin (H&E) staining revealed that prolonged hypoxia resulted in an increased number of cells breaking through the internal limiting membrane, reaching the maximum on P17 (Figure 2C). To further investigate the involvement of inflammasomes during peak angiogenesis, we used RT-qPCR to assess the mRNA expression levels of inflammasome-related genes, including Nlrp1, Nlrp3, Nlrc4, Nlrp6, Nlrp12, and Aim2, in OIR P17 retinas. We found that the expression of Nlrp3 was significantly increased, while that of Aim2 was significantly decreased (Figure 2D). Likewise, we examined the mRNA expression of the same genes in both hypoxia-treated and normoxia-treated microglia (BV2 cells) and found an increase in Nlrp3 and a decrease in Nlrp1 and Aim2 in the hypoxic cells (Figure 2E).
Because the role of NLRP3 in retinal angiogenesis has been well described, we focused on exploring the effect of AIM2 in OIR, which remains understudied and unclear. Western blot analysis revealed that AIM2 protein levels were significantly upregulated in OIR retinas and in hypoxic BV2 cells (Figure 2F,G). Furthermore, immunofluorescence of retinal sections showed higher expression levels of AIM2 in microglia from OIR mice than from normoxia (NOR) mice (Figure 2H). Additionally, subcellular localization experiments showed that AIM2 was expressed in both the nucleus and cytosol, with no significant changes in subcellular localization between normoxic and hypoxic BV2 cells (Figure S2). Our results suggest that the inflammasome AIM2 may be instrumental in the development of OIR.
Hypoxia-induced autophagy dysfunction leads to increased AIM2 protein levels in BV2 cells
Intriguingly, we noted a substantial reduction in the mRNA expression of AIM2, as illustrated in Figure 2D,E, whereas its protein level was significantly increased, as shown in Figure 2F-H. We speculate that this was due to reduced AIM2 protein degradation under hypoxic conditions, ultimately resulting in an accumulation of AIM2 protein. 29-31 In hypoxic BV2 cells, a markedly elevated level of reactive oxygen species (ROS) was observed (Figure 3A,B). Setanaxib (GKT831) is a potent and highly selective NOX1 and NOX4 inhibitor that significantly reduces the production of ROS. 32,33 Intravitreal injection of Setanaxib resulted in a marked reduction in neovascularization compared to the dimethyl sulfoxide (DMSO) group in the OIR model (Figure 3C,D).
Additionally, through transmission electron microscopy (TEM), we found that autophagosomes were destroyed in hypoxic BV2 cells (Figure 3E). Western blot analysis was used to detect the protein level of LC3B, a widely used autophagy marker, in hypoxic and normoxic BV2 cells; in agreement with our previous results, LC3B was significantly decreased in the hypoxic group (Figure 3F). To confirm the influence of autophagy on AIM2 expression, we stimulated hypoxic BV2 cells with the autophagy inhibitor chloroquine at 5, 10, 20, 40, and 80 µM. Cell viability assay demonstrated that approximately 90% of the cells were viable (Figure S3A). Western blot analysis revealed that AIM2 protein levels increased in a concentration-independent manner following the inhibition of autophagy (Figure 3G). Rapamycin, recognized as an autophagy activator, effectively and specifically suppresses the mTOR signaling pathway. 34,35 We applied rapamycin in an attempt to restore autophagy activation in hypoxic BV2 cells and found that the protein level of AIM2 was downregulated (Figure 3H). To investigate any reciprocal influence of AIM2, we examined the expression of ATG5, Beclin1, and LC3B in hypoxic BV2 cells with AIM2 knockdown (KD); however, the results indicated that AIM2 does not regulate the autophagy process (Figure S3B).
To evaluate whether the autophagy marker LC3B or P62 directly interacts with AIM2, we conducted co-immunoprecipitation (Co-IP) experiments. The results indicated that there is no direct binding relationship between these proteins (Figure 3I). This finding implies that autophagy may exert its regulatory effects on AIM2 expression through indirect pathways rather than through direct protein-protein interactions.
Inhibition of AIM2 attenuates RNV
To further investigate the role of AIM2 in RNV, we constructed AIM2 KO mice. PCR and gel electrophoresis confirmed the presence of the homozygous AIM2 KO allele at 400 bp (Figure 4A). RT-qPCR showed that the knockout efficiency reached ∼90% (Figure 4B). Additionally, Western blot assay showed almost no band in the lane of the AIM2 KO group (Figure 4C). Retinal flat-mount images showed that the extent and leakage of RNV were alleviated in AIM2 KO mice with OIR compared with wild-type (WT) mice with OIR (Figure 4D,E). Subsequently, a series of functional assays were performed in vitro. We transfected BV2 cells with AIM2 KD lentivirus, and the resulting images showed a transfection efficiency of more than 80% (Figure S4A). RT-qPCR analysis revealed that the mRNA knockdown efficiency of the lentiviral siAIM2-2 and siAIM2-3 constructs was about 70% (Figure 4F), while Western blot analysis showed that the protein knockdown efficiency of lentiviral siAIM2-3 was about 70% (Figure S4B). BV2 cells were transfected with the lentiviral siAIM2-3 for subsequent experiments and were screened with puromycin dihydrochloride (2 µg/mL) to obtain a stable transgenic strain.
A co-culture system was designed to investigate the effects of AIM2 in BV2 cells on murine endothelial cells (MECs). In this system, monolayer MECs were seeded in an apical chamber and BV2 cells were seeded in a basolateral chamber (Figure S4C). MECs co-cultured with hypoxic BV2 cells had enhanced tube-forming capacity, whereas MECs co-cultured with AIM2 KD BV2 cells had significantly weakened tube-forming capacity (Figure 4G,J). Additionally, MECs co-cultured with hypoxic BV2 cells possessed significantly improved migration and proliferation capabilities, which were diminished following knockdown of the AIM2 gene (Figure 4H,I,K,L). We conducted Western blot to examine the protein levels of the angiogenesis-related factors VEGFA, PEDF, and TGF-β in AIM2-silenced hypoxic BV2 cells. Our findings indicated that silencing AIM2 had no effect on the expression of VEGFA and TGF-β, but it significantly increased the PEDF protein level (Figure 4M). Collectively, our findings suggest that AIM2 promotes RNV in OIR.
Inhibition of microglial AIM2 expression attenuates cell proliferation and polarization
Microglia are highly plastic and heterogeneous cells that play a crucial role in immune surveillance and microenvironmental stabilization through changes in polarization, proliferation, and migration. 10,36,37 However, the influence of the inflammasome AIM2 on microglia still remains unclear. In images of NOR retinas, the cell bodies of microglia were small and branched; in contrast, in OIR retinas, microglial cell bodies were enlarged and amoeboid, with shorter protrusions. However, there were no significant microglial morphological changes between WT and AIM2 KO mice with OIR (Figure 5A).
We subsequently examined the expression of TNF-α (an M1-type marker) and CD163 (an M2-type marker) in BV2 cells using flow cytometry. This revealed that under hypoxic conditions, the expression levels of TNF-α and CD163 were both significantly increased, suggesting that BV2 cells were activated and underwent polarization. Following AIM2 inhibition and subsequent hypoxia, we observed a significant reduction in the expression of TNF-α; however, no significant changes were detected in the expression levels of CD163 (Figure 5B-E). Transwell assay revealed that the migration ability of BV2 cells was enhanced under hypoxia, whereas there was no change in migration ability when AIM2 was knocked down (Figure 5F,G). Cell Counting Kit-8 (CCK-8) assay showed a significant increase in the number of BV2 cells under hypoxic conditions compared to normoxic conditions, and a decrease in the number of BV2 cells after AIM2 silencing (Figure 5H). Collectively, our findings suggest that inhibition of AIM2 attenuates microglial M1-type activation and proliferation.
AIM2 regulates the protein levels of ASC, CASP1, and IL-1β
Previous research has demonstrated that both AIM2 and ASC have the pyrin structural domain, allowing them to combine and then initiate CASP1 activation. 38 To corroborate these findings, we performed a Co-IP assay and demonstrated that AIM2 and ASC directly interact with each other (Figure 6A). Additionally, we performed an ASC oligomerization assay to show AIM2 inflammasome formation (Figure 6B,C) and then detected several classical downstream mediators of the inflammasome; that is, we found increased levels of ASC, cleaved CASP1, mature IL-1β, and mature IL-18 in hypoxia-treated BV2 cells (Figure 6D-F). After inhibiting AIM2 in BV2 cells under hypoxia, the expression of both ASC and CASP1 was significantly reduced, and the protein levels of mature IL-1β and mature IL-18 also showed a similar trend between sh-NC and sh-AIM2 BV2 cells under hypoxia, which was especially pronounced in the case of mature IL-1β (Figure 6G-I).
Subsequently, we performed Western blotting on samples from retinas of WT and Aim2 KO mice with OIR. The results agreed with our previous findings, showing that the protein levels of ASC, cleaved CASP1, mature IL-1β, and mature IL-18 were decreased in retinas of Aim2 KO mice with OIR (Figure 6J-L).
Recombinant protein IL-1β promotes retinal angiogenesis and inhibition of IL-1β alleviates neovascularization
Previous studies have shown that IL-1β modulates the expression of VEGFA by activating promoter binding of STAT3 and NF-κB, leading to neovascularization. 39 To confirm the role of the effector IL-1β in promoting retinal angiogenesis, recombinant IL-1β protein was applied to AIM2-silenced hypoxic BV2 cells, which were then co-cultured with MECs. We observed greater MEC tube formation when MECs were co-cultured with IL-1β-supplemented BV2 cells than with PBS-treated cells (Figure 7A,D). Similarly, the proliferation of MECs also increased following co-culture with IL-1β-supplemented BV2 cells (Figure 7B,E). However, there was no significant change in MEC migration ability (Figure 7C,F).
Diacerein has been reported to be a significant inhibitor of IL-1β production. 40 We then treated BV2 cells with Diacerein; Western blot showed that mature IL-1β was inhibited by around 50% (Figure 7G). In addition, we investigated whether Diacerein had an impact on AIM2 expression and discovered that it did not affect the protein level of AIM2 (Figure S5). Subsequently, a series of functional tests suggested that the inhibition of IL-1β in hypoxic BV2 cells weakened the tube-forming and proliferation abilities of MECs, but had no effect on their migration ability (Figure 7H-M). Collectively, our results suggest that the AIM2 inflammasome drives microglial polarization and exacerbates RNV via the ASC/CASP1/IL-1β signaling pathway.
DISCUSSION
RNV is a common ocular condition that threatens human vision and is a component of many diseases, including ROP, AMD, and DR. 41 Among these, ROP is one of the leading causes of childhood blindness worldwide. 42,43 Currently, methods to treat ROP produce only modest benefits; anti-VEGFA and laser ablation therapy can temporarily control neovascularization, but the long-term efficacy is limited, and repeated anti-VEGF injection may cause VEGFA resistance and other side effects in children. 44,45 Therefore, more efficient and safer therapeutic strategies are urgently needed. Hypoxia, which can aggravate retinal oxidative stress and promote retinal angiogenesis, is considered the main cause of ROP. 46-49 Accordingly, when PLX5622 or clodronic acid is used to selectively deplete microglia, neovascularization is effectively curbed. 50 In our study, analysis of our previously published single-cell data confirmed that microglial cells are strongly associated with ROP disease. In the OIR model, we found that microglial cells proliferated strongly and became activated in an amoeboid form, converging around vascular endothelial cells. These results suggest that microglia possess highly plastic abilities and play a role in RNV through interactions with the vasculature.
Previous research has found that hypoxia initiates the RIP1/RIP3 signaling pathway, which mediates microglial necrosis and increases the release of FGF2, leading to a high degree of RNV; after inhibiting RIP1/RIP3, retinal angiogenesis was effectively reduced. 48 Pyroptosis is one of the most important cell death types affecting microglia, regulating both their plasticity and polarity. 51 However, the relationship between microglial death and inflammasomes in ROP remained unclear. To address this knowledge gap, here, we examined the expression of inflammasome-forming genes, including Nlrp1, Nlrp3, Nlrp6, Nlrp12, and Aim2, in hypoxia-treated microglia. Interestingly, we observed that hypoxia was associated with a reduction in the mRNA expression of Aim2, while its protein level was increased. To delve deeper into the role of AIM2 in OIR, we generated Aim2 KO transgenic mice and induced an OIR model. Retinal flat-mount image analysis revealed a significant attenuation of angiogenesis in Aim2 KO mice. In our in vitro experiments, MECs co-cultured with AIM2-inhibited BV2 cells also showed a reduction in tube formation, migration, and proliferation abilities.
Inflammasomes are multiprotein complexes that can sense and target PAMPs and molecular signals related to cell damage. 52,53 AIM2, as a DNA sensor, can specifically recognize and detect impaired DNA molecules, such as dsDNA and damaged DNA in the cytoplasm resulting from the destruction of nuclear membrane integrity. 54 The N-terminal pyrin domain (PYD) and the C-terminal domain comprise the main structure of AIM2. 55-57 ROS accumulation damages intracellular components and produces abundant dsDNA, which is quickly recognized by AIM2 and activates it. Activated AIM2 can mediate the production of downstream inflammatory factors. In our experiments, the expression of ASC, cleaved CASP1, mature IL-1β, and mature IL-18 was increased under hypoxia; after silencing AIM2, they were significantly reduced. IL-1β as a downstream effector can promote blood vessel formation. 58,59 Previous studies have reported that IL-1β activates vascular adhesion molecules and promotes the secretion of VEGFA, leading to neovascularization. 39 Notably, we observed that while hypoxia reduced AIM2 mRNA expression, it increased the AIM2 protein level both in vivo and in vitro. We hypothesize that this may be due to impaired AIM2 protein degradation under hypoxic conditions. Autophagy is one of the most important intracellular protein degradation mechanisms, 60 and it can be regulated by oxidative stress. Moderate oxidative stress rapidly activates autophagy to clear damaged proteins and repair tissues, yet excessive oxidative stress leads to impaired autophagy. 61,62 To validate the regulatory association between autophagy and AIM2, we treated hypoxic BV2 cells with the autophagy inhibitor chloroquine. The results demonstrated that the protein expression of AIM2 increased in a concentration-independent manner. This approach establishes a link between autophagy and pyroptosis, elucidating the regulatory influence of autophagy on AIM2. As an additional consideration, it is possible that feedback mechanisms are at play that modulate the cellular response to hypoxic stress. These mechanisms could lead to an increase in AIM2 protein levels, serving as a compensatory response to the observed decrease in mRNA levels. Further studies will be conducted in the future to delve deeper into this phenomenon.
Moreover, rapamycin, an autophagy activator, was used to restore autophagy activation in our experimental setup. Our results demonstrated that this intervention significantly reduced the protein level of AIM2 in hypoxic BV2 cells. Previous studies discovered that rapamycin can bind to FKBP12 and specifically inhibit the mTOR signaling pathway, a crucial regulator of various cellular processes including autophagy, cell growth, and metabolism. 63,64 Given the established role of mTOR in autophagy regulation, we hypothesized that the mTOR pathway might be involved in modulating the activity of the inflammasome. To advance our understanding, we plan to conduct further studies that will provide molecular details of how the mTOR pathway affects the inflammasome. Additionally, we plan to intravitreally inject an autophagy activator in the OIR model to assess the influence of activated autophagy on neovascularization. These future investigations will be instrumental in elucidating the complex signaling networks of autophagy and inflammation in OIR.
However, this study has several limitations. Our findings showed that the protein levels of both NLRP3 and AIM2 are elevated in OIR. This suggests that both proteins may play a role in the progression of OIR disease, but the specific contributions of these proteins to the observed phenotypes remain unclear. To address this knowledge gap, we plan to generate NLRP3 KO mice, thereby allowing us to investigate the specific contributions of these proteins to the development of OIR and to determine which, if either, plays a more pivotal role in the disease process.

Additionally, specific contexts within and between species, including gender, developmental stage, complex tissue layers, and microglial states, ultimately determine the phenotype and function of cells. 65 In our study, we used the microglial cell line BV2 to investigate the underlying mechanisms; however, the BV2 cell line cannot reflect all the properties of primary microglia and may not fully represent the complexity and heterogeneity of microglia in the OIR disease state. Finally, the in vitro nature of BV2 cells may not fully recapitulate the in vivo microenvironment, which includes interactions with other cell types and the extracellular matrix. To address these limitations, future research could employ primary microglia isolated from retinas to validate our findings. In our study, we constructed AIM2 KO mice, which cannot completely demonstrate the role of AIM2 specifically in microglia during OIR progression; in a future study, we will consider constructing conditional KO mice. Aside from this, we found that autophagy can regulate AIM2, but we did not identify which domain, terminus, or structural feature of AIM2 autophagy specifically recognizes; this is a goal of our next in-depth research project.
In conclusion, our study discovered a novel link between autophagy and the AIM2 inflammasome, revealing that autophagy regulates AIM2 protein levels. We also established evidence that AIM2 plays a crucial role in promoting neovascularization through the ASC/CASP1/IL-1β pathway. These findings provide a potential therapeutic target and strategy for the management of ROP.

MATERIALS AND METHODS
Animals
The C57BL/6J mice employed in this investigation were obtained from the Experimental Animal Center at Chongqing Medical University. AIM2 KO mice were generously provided by Prof. Xiaopeng Qi. Approval for all experiments involving animals was obtained from the Ethics Committee of the First Affiliated Hospital of Chongqing Medical University (approval number: 2021-612).
OIR model
To establish the OIR model in mice, P7 WT and Aim2 KO mouse pups with their nursing mother were exposed to hyperoxia (75% O2) until P12, then returned to normoxia (∼21% O2) for 5 days, which induced a relative state of hypoxia. On P17, the peak of disease, 28,66 they were sacrificed. Subsequently, their retinas were carefully isolated for subsequent experimental procedures. Since there were no appreciable disparities between the sexes, data were gathered from both males and females.
Identification of Aim2 KO mice
Agarose gel electrophoresis was utilized to determine the genotype of Aim2 KO mice. Following the manufacturer's protocol, we prepared a lysis buffer consisting of 1 M NaOH and 0.5 M EDTA, as well as a neutralization buffer containing 40 mM Tris and concentrated hydrochloric acid to adjust the pH to 4. Mouse tails were collected and stored in labeled 1.5 mL centrifuge tubes. After incubation with 100 µL of lysis buffer at 95°C for 1 h, 100 µL of neutralization buffer was added to the centrifuge tubes, which were then centrifuged at 12,000 rpm for 3-5 min to obtain the supernatant for PCR. Each PCR reaction contained 10 µL of GoTaq Green Master Mix (M7122, Promega), 3 µL of primer mix consisting of 1 µL of each primer, and 7 µL of the supernatant from the mixed buffer. The PCR cycling conditions were as follows: initial denaturation at 94°C for 3 min, followed by 35 cycles of denaturation at 94°C for 30 s, annealing at 63°C
Quality control, dimension reduction, and clustering
Seurat v3.1.2 was used for quality control, dimensionality reduction, and clustering. For each sample dataset, we filtered the expression matrix by the following criteria: (1) cells with a gene count less than 200 or within the top 2% of gene counts were excluded; (2) cells within the top 2% of UMI counts were excluded; (3) cells with mitochondrial content >20% were excluded; and (4) genes expressed in fewer than five cells were excluded. The gene expression matrix was normalized and scaled using the functions NormalizeData and ScaleData. The top 2000 variable genes were selected by FindVariableFeatures for PCA analysis. Cells were separated into 13 clusters by FindClusters. Cell clusters were visualized using t-distributed stochastic neighbor embedding or UMAP with the Seurat functions RunTSNE and RunUMAP.
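The original pipeline is in R/Seurat; as a rough illustration only, the per-sample filtering criteria above could be expressed in Python as follows. The function name, the dense-matrix input, and the "mt-" mitochondrial gene prefix are assumptions of this sketch, not part of the published workflow.

```python
# A minimal sketch of the four QC criteria on a cells x genes count matrix.
import numpy as np

def qc_filter(counts, gene_names):
    # (4) exclude genes expressed in fewer than 5 cells
    keep_genes = (counts > 0).sum(axis=0) >= 5
    counts, gene_names = counts[:, keep_genes], gene_names[keep_genes]

    n_genes = (counts > 0).sum(axis=1)              # detected genes per cell
    n_umi = counts.sum(axis=1)                      # UMI total per cell
    is_mito = np.char.startswith(gene_names.astype(str), "mt-")
    pct_mito = counts[:, is_mito].sum(axis=1) / np.maximum(n_umi, 1)

    keep_cells = (
        (n_genes >= 200)                            # (1) gene count >= 200 ...
        & (n_genes <= np.quantile(n_genes, 0.98))   # ... and not in the top 2%
        & (n_umi <= np.quantile(n_umi, 0.98))       # (2) not in the top 2% of UMI counts
        & (pct_mito <= 0.20)                        # (3) mitochondrial content <= 20%
    )
    return counts[keep_cells], gene_names
```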
UCell gene set scoring
Gene set scoring was carried out using the R package UCell v1.1.0. 67 UCell scores are based on the Mann-Whitney U statistic, which ranks query genes in order of their expression levels in individual cells. Since UCell is a rank-based scoring method, it is suitable for use in large datasets containing multiple samples and batches.
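To make the rank-based idea concrete, here is a hedged Python sketch of a UCell-style per-cell score. The rank cap of 1500 mirrors UCell's documented default, but the function itself is an illustrative approximation rather than the package's implementation.

```python
import numpy as np
from scipy.stats import rankdata

def ucell_like_score(expr, gene_names, signature, r_max=1500):
    """Per-cell signature score in [0, 1]; expr is a cells x genes matrix."""
    sig_idx = [i for i, g in enumerate(gene_names) if g in set(signature)]
    n = len(sig_idx)
    scores = np.empty(expr.shape[0])
    for c in range(expr.shape[0]):
        ranks = rankdata(-expr[c])                 # rank genes, highest expression first
        r = np.minimum(ranks[sig_idx], r_max + 1)  # cap very low ranks
        u = r.sum() - n * (n + 1) / 2              # Mann-Whitney U statistic
        scores[c] = 1.0 - u / (n * r_max)          # high score = signature ranked high
    return scores
```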
Differentially expressed genes analysis
To identify differentially expressed genes (DEGs), we employed the Seurat FindMarkers function based on the Wilcoxon rank-sum test with default parameters. Genes expressed in more than 10% of the cells in both compared groups and with an average log(fold change) value greater than 0.25 were selected as DEGs. The adjusted p-value was calculated using Bonferroni correction, and a value of 0.05 was used as the criterion to assess statistical significance.
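The same thresholds can be written out explicitly. The sketch below is a simplification (it does not reproduce Seurat's exact fold-change formula) that applies the expression-fraction, log-fold-change, and Bonferroni-adjusted p-value cutoffs described above.

```python
import numpy as np
from scipy.stats import ranksums

def find_degs(expr_a, expr_b, gene_names, min_pct=0.10, min_lfc=0.25, alpha=0.05):
    """expr_a, expr_b: cells x genes matrices for the two compared groups."""
    degs, n_tests = [], len(gene_names)
    for j, gene in enumerate(gene_names):
        a, b = expr_a[:, j], expr_b[:, j]
        # expressed in more than 10% of cells in both groups
        if (a > 0).mean() <= min_pct or (b > 0).mean() <= min_pct:
            continue
        lfc = np.log(a.mean() + 1e-9) - np.log(b.mean() + 1e-9)
        if abs(lfc) <= min_lfc:
            continue
        p_adj = min(ranksums(a, b).pvalue * n_tests, 1.0)   # Bonferroni correction
        if p_adj < alpha:
            degs.append((gene, lfc, p_adj))
    return degs
```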
Pathway enrichment analysis
To investigate the potential functions, Gene Ontology (GO) analysis was performed with the "clusterProfiler" R package v3.16.1. 68 Pathways with an adjusted p-value less than 0.05 were considered significantly enriched. For GSVA pathway enrichment analysis, the average gene expression of each cell type was used as input data. 69
Cell-type annotation
The identification of the cell type for each cluster was determined based on the expression of canonical markers in the reference database SynEcoSys™ (Singleron Biotechnology). SynEcoSys™ contains a collection of canonical cell-type markers for single-cell sequencing data from CellMakerDB, PanglaoDB, and recently published literature.
Cell-cell interaction analysis: CellPhoneDB
Cell-cell interactions between microglia and endothelial cells were predicted based on known ligand-receptor pairs using CellPhoneDB v2.1.0. 70 The permutation number for calculating the null distribution of average ligand-receptor pair expression in randomized cell identities was set to 1000. The individual ligand or receptor expression was thresholded by a cutoff based on the average log gene expression distribution for all genes across each cell type. Predicted interaction pairs with a p-value <0.05 and an average log expression >0.1 were considered significant and visualized by dot plot in CellPhoneDB.
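The permutation step can be illustrated with a small sketch; the code below is a simplified stand-in for CellPhoneDB's internal procedure, with made-up variable names and a single ligand-receptor pair.

```python
import numpy as np

def lr_permutation_p(lig_expr, rec_expr, labels, sender, receiver, n_perm=1000):
    """Empirical p-value for one ligand-receptor pair between two cell types:
    compare the observed mean pair expression against a null distribution
    obtained by shuffling cell-type labels (the 1000-permutation setup above)."""
    rng = np.random.default_rng(0)

    def pair_mean(lab):
        return (lig_expr[lab == sender].mean() + rec_expr[lab == receiver].mean()) / 2

    observed = pair_mean(labels)
    null = np.array([pair_mean(rng.permutation(labels)) for _ in range(n_perm)])
    return float((null >= observed).mean())
```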
Cell Counting Kit-8 assay
Briefly, BV2 cells were seeded in 96-well microplates (Corning, Inc.) and cultivated in 100 µL of medium at 5 × 10³ cells per well. 71 Each well received 10 µL of CCK-8 reagent (MA0218, MeilunBio) in the dark and was cultured for 3 h. The optical density of individual wells was measured using a microplate reader (ThermoFisher Scientific). The absorbance values served as indicators of cellular proliferation.
Transwell assay
Following a 24-h hypoxia treatment, BV2 cells were seeded in the lower chamber of a 24-well transwell plate (8 µm; Corning, Inc.) at a density of 5 × 10⁴ cells per well, using DMEM-F12 supplemented with 10% FBS. Subsequently, MECs were seeded into the upper chambers at the same cell density with DMEM-F12 containing 0.5% FBS. After co-culturing for 24 h, the upper chambers were disassembled. MECs were fixed with 4% paraformaldehyde fixative solution for 15 min, followed by three washes. The cells were then stained with 1% crystal violet. Using a cotton swab, cells on the upper surface of the chamber membrane were wiped away, and images of the migrated cells on the lower surface of the chamber membrane were captured using a fluorescence microscope (Leica).
5-Ethynyl-2ʹ-deoxyuridine staining
BV2 cells were subjected to 24 h of hypoxic conditions and then placed in the upper chambers (0.4 µm; Corning, Inc.) at 5 × 10⁴ cells per well, and MECs were seeded into the basolateral chambers at 5 × 10⁴ cells per well. After 24 h of co-culture, the apical chambers were removed and MECs were incubated with 20 µL of 5-ethynyl-2ʹ-deoxyuridine (EdU) (Beyotime) for 1 h. Following this, cellular fixation was carried out using a 4% paraformaldehyde fixative solution for 10 min, succeeded by permeabilization with 3‰ Triton X-100 for 30 min. The cells were then exposed to 200 µL of reaction mixture from the EdU kit for 30 min and subsequently stained with 1× Hoechst 33342 for 5 min. Imaging was conducted using a fluorescence microscope (Leica).
Tube formation assays
The tube formation experiment used the same co-culture setup as the EdU assay. Matrigel basement membrane matrix (Corning) was applied to each well of a 96-well plate at 50 µL per well. A total of 2 × 10⁴ suspended MECs were placed in each well. Images were taken under a microscope after a 6-h incubation at 37°C and then analyzed using ImageJ software.
Transmission electron microscopy
The morphology and number of autophagosomes were observed using transmission electron microscopy. Briefly, normoxic and hypoxic BV2 cells were digested using 0.25% trypsin (Gibco), washed three times with PBS, and then fixed with 2.5% glutaraldehyde. After exposure to 1% osmic acid for 1 h, cells were dehydrated using gradient ethanol (65%-90%); next, they were embedded in epoxy resin and cut into 50-nm sections. The ultrathin sections were stained with lead citrate and 1% uranyl acetate, then examined using a transmission electron microscope (HT7700). TEM images were acquired at scales ranging between 1 µm and 200 nm.
Detection of intracellular ROS level
BV2 cells (2 × 10⁵ per well) were seeded in six-well plates. The control group was cultured under normoxia for 24 h, while the experimental group was incubated under hypoxic conditions. Then, 1 mL of diluted DCFH-DA (10 mmol/L) was added to each well and incubated for 15 min. Next, medium was added to the cells to neutralize the DCFH-DA. Pictures were captured using a fluorescence microscope (Leica).
Hematoxylin and eosin staining
Eyeballs were fixed with FAS eyeball fixation solution (G1109, Servicebio) and embedded in paraffin according to standard methods. Briefly, retinas were dehydrated through graded alcohols (70%-100%), cleared with xylene, and then embedded in paraffin. Microtomes were used to slice the paraffin blocks into 5-µm sections, which were deparaffinized with xylene and rehydrated through graded alcohols (100%-70%). Subsequently, routine H&E staining was applied. The resulting sections were then observed and captured using a microscope.
Real-time quantitative PCR
After the culture medium was removed, 1 mL of Trizol (Roche) was added directly to lyse the cells in a 3.5-cm-diameter petri dish, following the manufacturer's instructions. Extracted RNA was reverse-transcribed into cDNA using RT Master Mix (AG11705, Accurate Biotechnology [Hunan] Co., Ltd.), and the resulting cDNA was mixed with SYBR Green qPCR Master Mix, protected from light (AG11708, Accurate Biotechnology [Hunan] Co., Ltd.). mRNA expression levels were assessed using the ABI 7500 Real-Time PCR System (Applied Biosystems). All primers were synthesized by Shanghai Sangon Co., Ltd., and their details are presented in Table S1. mRNA expression was normalized to β-actin and calculated using the 2^(−ΔΔCT) method.
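As a worked illustration of the 2^(−ΔΔCT) calculation (the Ct values below are hypothetical, not measured data):

```python
def ddct_fold_change(ct_target_treat, ct_ref_treat, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^(-ddCt) method: normalize the target gene
    to the reference gene (beta-actin here), then to the control condition."""
    d_ct_treat = ct_target_treat - ct_ref_treat   # dCt in the treated sample
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl      # dCt in the control sample
    dd_ct = d_ct_treat - d_ct_ctrl
    return 2 ** (-dd_ct)

# Hypothetical Ct values for Aim2 vs beta-actin, hypoxic vs normoxic BV2 cells:
fold = ddct_fold_change(26.2, 17.1, 24.4, 17.3)   # ddCt = 2.0 -> fold = 0.25
```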
Western blotting
BV2 cells were seeded in six-well plates (Jet Biofil). After the indicated treatments, and following two washes with PBS, pre-cooled lysis buffer was added to lyse the retinas or BV2 cells. The protein concentration of the retina or BV2 cell lysates was determined using the Bicinchoninic Acid Kit (Beyotime). Subsequently, the proteins were separated by gel electrophoresis and transferred onto a polyvinylidene fluoride (PVDF) membrane. The membrane was blocked with Fast Blocking Western reagent (Yeasen). The PVDF membrane was gently washed with TBST, and the membrane was placed into a box at 4°C.
Co-immunoprecipitation
BV2 cells were cultured according to the method described above. Co-IP was performed using the Thermo Scientific Pierce Co-IP kit (ThermoFisher Scientific) according to the manufacturer's instructions. Lysis buffer was added to the cell culture plate for full cell lysis at 4°C. The lysate was centrifuged at 12,000 rpm for 10 min, and the supernatant was collected and incubated with anti-AIM2 antibody (63660, Cell Signaling Technology) or anti-IgG antibody (30000-0-AP, Proteintech) at 4°C overnight. After binding with protein A/G Sepharose beads, the protein was eluted with SDS loading buffer and detected by Western blotting.
ASC oligomerization assay
BV2 cells were plated onto six-well plates at a density of 2 × 10⁶ cells per well. They were then subjected to normoxic or hypoxic conditions for 24 h. After treatment, the supernatant was discarded, and 200 µL of pre-cooled NP-40 lysis buffer (ThermoFisher) was added to each well. The cells were scraped off and transferred to a 1.5 mL centrifuge tube. The tubes were then placed on ice for 20 min and homogenized by passing through a 7-gauge needle 10 times. The lysates were centrifuged at 4°C and 6000 rpm for 10 min. The supernatant was collected and the pellet was washed three times with 500 µL of pre-cooled PBS. After washing, the pellet was resuspended in 500 µL of PBS. Freshly prepared 2 mM DSS crosslinker (ThermoFisher) was added and the mixture was incubated at 37°C for 30 min. Following incubation, the mixture was centrifuged again at 4°C and 6000 rpm for 10 min. The pellet was resuspended in 20 µL of SDS loading buffer, heated at 100°C for 10 min, and then subjected to Western blot analysis.
Lentivirus transfection
A total of 2 × 10⁵ BV2 cells were seeded per well of six-well plates and cultured overnight. After the cells were adherent, AIM2 KD lentivirus was added at the appropriate multiplicity of infection in DMEM-F12 with 10% FBS. After 8 h, depending on the cell status, the virus-containing medium was replaced with fresh complete medium. After transfection, cell growth was observed, and medium containing puromycin (2 µg/mL) was added for selection to obtain stable cell lines.
Immunofluorescence staining and retinal flat-mount
A total of 2 × 10⁴ cells were resuspended, counted, seeded on slides, and cultured in an incubator overnight. Immunostaining was performed according to standard methods. Following fixation with 4% paraformaldehyde, BV2 cells were permeabilized using 0.5% Triton X-100 (G1204, Servicebio) for 30 min. Subsequently, cells were blocked with goat serum at 37°C for 30 min, and the primary antibody was incubated overnight at 4°C. After that, cells were treated with the corresponding secondary antibody for 1 h at room temperature. Lastly, the cell nuclei were stained with 4′,6-diamidino-2-phenylindole (DAPI). Images were captured by fluorescence microscope (Leica).
The eyes of mice were fixed in 4% paraformaldehyde at room temperature for 2 h. After excision of surrounding tissues, the retina was dissected into a quadrifoliate configuration and mounted onto a glass slide. Following permeabilization of the cell membranes, the tissue was blocked with goat serum for 30 min at 37°C. The blocking solution was then gently shaken off, anti-CD31 antibody diluted in PBS (1:350) was added to the flat-mounts, and the slides were placed flat in a wet box at 4°C for overnight incubation. Pictures were taken under a fluorescence microscope (Leica).
Statistical analysis
All experiments performed in this study included at least three independent replicates. The data are presented as mean ± standard deviation, and statistical analyses were performed using SPSS Statistics 27 (IBM Corp.). Figures were generated using Prism 9.0 (GraphPad).
A C K N O W L E D G M E N T S
We are grateful to Prof. Xiaopeng Qi for providing the Aim2 knockout mice. This work was supported by the National
F I G U R E 1
Single-cell analysis shows the correlation of microglia and retinopathy of prematurity (ROP) disease. (A-C) UCell images showing the relative expression of concerned eye disease-related genes in different cell types. (D-G) Uniform manifold approximation and projection (UMAP) and FeaturePlot of ROP, diabetic retinopathy (DR), and age-related macular degeneration (AMD). (H-N) Gene set variation analysis (GSVA) and FeaturePlot of microglia. (O) Microglia and endothelial cell communication.

F I G U R E 2 Absent in melanoma 2 (AIM2) expression is upregulated in the retinas of oxygen-induced retinopathy (OIR) mice and in hypoxic BV2 cells. (A) The flowchart and pathological character of the OIR mouse model with time course. (B) Immunofluorescence images of retinal flat-mounts in NOR and OIR mice stained for CD31 (endothelial cell marker) at postnatal days 13, 15, and 17; neovascularization is circled in red. Scale bar: 1 mm. (C) Images of hematoxylin and eosin (H&E) staining in NOR and OIR mice at postnatal days 13, 15, and 17. Scale bar: 25 µm. (D) mRNA levels of inflammasome genes, including Nlrp1, Nlrp3, Nlrp6, Nlrp12, and Aim2, between NOR and OIR retinas (mean ± standard deviation [SD]; n = 3/group; *p < 0.05, **p < 0.01, unpaired Student's t-test). (E) mRNA expression of Nlrp1, Nlrp3, Nlrp6, Nlrp12, and Aim2 in BV2 cells treated with hypoxia compared to that with normoxia (mean ± SD; n = 3/group; *p < 0.05, **p < 0.01, unpaired Student's t-test). (F) Protein expression and quantification of AIM2 between NOR and OIR retinas (mean ± SD; n = 3/group; *p < 0.05, unpaired Student's t-test). (G) Protein level and quantitative graph of AIM2 in BV2 cells treated with hypoxia compared to those treated with normoxia (mean ± SD; n = 3/group; *p < 0.05, unpaired Student's t-test). (H) Representative immunofluorescence images of retinal slices in NOR and OIR mice stained with IBA1 and AIM2. Scale bar: 50 µm. NOR, normoxia.

F I G U R E 4 (H and K) Transwell migration assay and quantitative graph of murine endothelial cells (MECs) after co-culture with AIM2 knockdown BV2 cells (mean ± SD; n = 3/group; *p < 0.05, one-way ANOVA). (I and L) 5-Ethynyl-2ʹ-deoxyuridine (EdU) assay was used to analyze the proliferation of MECs after co-culture with AIM2 knockdown BV2 cells (mean ± SD; n = 3/group; **p < 0.01, one-way ANOVA). (M) The protein level of pigment epithelium-derived factor (PEDF), vascular endothelial growth factor A (VEGFA), and transforming growth factor-β (TGF-β) in AIM2-silenced hypoxic BV2 cells (mean ± SD; n = 3/group; **p < 0.01, unpaired Student's t-test).
Xianyang Liu, Qian Zhou, and Jiayu Meng designed the research and performed the study. Hangjia Zuo and Ke Hu revised the manuscript. Rui Zhang helped to polish the manuscript. Huiping Lu, Ruonan Li, and Hongshun Li helped to analyze the GEO data. Zhi Zhang, Meng Tian, Hong Wang, and Shuhao Zeng contributed reagents, materials, and analysis tools. Na Li, Liming Mao, and Shengping Hou helped to conceive the research and revise the manuscript. All authors have read and approved the final manuscript.
"year": 2024,
"sha1": "ab684706b88f745143561466e5c4093cb6a33db1",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "ab684706b88f745143561466e5c4093cb6a33db1",
"s2fieldsofstudy": [
"Medicine",
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Energy Efficient Data Acquisition in Wireless Sensor Network
Wireless sensor networks (or sensor networks, for brevity in the following) have come into practice thanks to recent technological advancements in embedded systems, sensing devices, and wireless communication. A typical sensor network is composed of a number of wirelessly connected sensor nodes distributed in a sensed area. In the network, sensor nodes sense their surroundings and record sensed readings. The sensed readings of individual sensor nodes are then collected to present the measurement of an entire sensed area. In many fields, including but not limited to military, science, remote sensing Vasilescu et al. (2005), industry, commerce, transportation Li et al. (2011), public security Faulkner et al. (2011), and healthcare, sensor networks are recognized as important sensing, monitoring, and actuation instruments. In addition, many off-the-shelf sensor node products Zurich (n.d.) and supporting software such as TinyOS Group (n.d.) are available in the market. Sensor network application development is now much facilitated, and many sensor networks are anticipated to be deployed soon.
A straightforward approach can be collecting all raw sensed readings and maintaining them in a data repository for centralized processing. Alternatively, a large volume of raw sensed readings can be streamed to a processing site where analysis and data processing are directly applied on the streamed sensor readings. However, costly wireless communication can quickly use up sensor nodes' battery energy. In other words, such a centralized approach is not energy efficient and is thus undesirable in practice. In the literature, many original ideas and important research results have been developed for energy efficient data acquisition. Among those, many new techniques have been developed based on the idea of in-network query processing. Through in-network query processing, queries are delivered into sensor networks and sensor nodes evaluate the queries locally. By doing so, (partial) query results are transmitted instead of raw sensed readings. Since (partial) query results are smaller in size than raw sensed readings, energy costs can be effectively saved. Subject to the types of queries and potential optimization opportunities, various in-network query processing techniques have been developed and reported in the literature.
In this chapter, we review the main concepts and ideas of many representative research results on in-network query processing, which include some of our recent works such as itinerary-based data aggregation Xu et al. (2006), materialized in-network views Lee et al. (2007), the contour mapping engine Xu et al. (2008), and in-network probabilistic minimum value search Ye, Lee, Lee, Liu & Chen (to appear). As briefly described, itinerary-based data aggregation is a new access method that navigates query messages among sensor nodes to collect/aggregate their sensed readings. A materialized in-network view is a novel data caching scheme that maintains (partial) query results in queried sensor nodes. Then, subsequent queries issued by different base stations can access cached results instead of traversing query regions from scratch to determine query results. The contour mapping engine derives fairly accurate contour line segments using data mining techniques. Besides, only the coefficients of the equations representing contour line segments, which are very compact, are transmitted. Finally, probabilistic minimum value search is one of the recent efforts in probabilistic sensed data aggregation. It finds the possible smallest sensed reading values in a sensor network.
The details of those works will be discussed in the following sections. First of all, we present a system model that our reviewed research results are based upon. Then, we discuss research results in in-network data aggregation and in-network data caching as well as in-network contour map computation. We further discuss recent results on in-network probabilistic data aggregation. Last but not least, we summarize this chapter and discuss some future research directions.
System model
Without loss of generality, a sensor network is composed of a number of battery powered stationary sensor nodes deployed over a sensed area. The spatial deployment of sensor nodes in a target sensed area is one of the research problems in sensor networks; and many research works (e.g. Bojkovic & Bakmaz (2008)) were proposed to maximize the area coverage by a given quantity of sensor nodes while providing required network connectivity among sensor nodes. The issue of sensor node deployment is usually considered to be independent from others. As will be discussed in the following, research works on data acquisition mostly assume that sensor networks are already set up and all sensor nodes are with identical hardware configurations.
In a typical sensor network, some sensor nodes are directly connected to computer terminals; they are called base stations. Through base stations, computer terminals can issue commands to administer sensor nodes and collect their sensed readings. Besides, all sensor nodes are wirelessly connected, e.g., MICAz uses a 2.4 GHz IEEE 802.15.4 radio. That means messages are all sent through wireless broadcast. When a node delivers a message, other sensor nodes within its radio coverage range can receive the message. Messages can be conveyed transitively from a sender sensor node to a distant target receiver node Xu et al. (2007). On the other hand, because of shared radio frequencies, simultaneous messages from closely located sensor nodes may lead to signal interference. Moreover, due to ad hoc connectivity and sensor node failure, which is common in practice, connections among sensor nodes are mostly transient and unreliable. Thus, besides regular data messages, every sensor node periodically broadcasts a special message called a beacon to indicate its liveness to its neighboring sensor nodes. Also, data messages are sent through multiple paths from a sender sensor node towards a destination to deal with possible message loss Xu et al. (2007). As a result, those extra messages incur additional energy costs.
To save battery energy, sensor nodes stay in sleep mode for most of the time, and each of them periodically wakes up to sense its surroundings and record its measurements as sensed readings. For data acquisition, an entire sensor network (i.e., a set of sensor nodes N) presents a set of sensed reading values V, notationally, V = {v_n | n ∈ N}, where v_n is the sensed reading value provided by a sensor node n. Based on V, data analysis is conducted to understand the entire sensed area. As already discussed, it is very costly to collect V from all sensor nodes. Accordingly, some research results reported in the literature explore techniques to collect a subset of sensed readings V′ (⊂ V) from a subset of sensor nodes N′ (⊂ N), although the collected readings may only provide approximate analytical results. There are two sorts of such techniques. Sampling is the first technique, in which sensed readings are only collected from some (randomly) selected sensor nodes Biswas et al. (2004); Doherty & Pister (2004); Huang et al. (2011). Those unselected sensor nodes do not need to provide their sensed readings, and the sampling rate is adjustable according to the energy budget; a minimal sketch of this idea follows below. The second technique is based on a certain prediction model Silberstein et al. (2006): some sensed readings can be omitted from being sent as long as they can be (approximately) predicted from other sensed readings, which can come from neighboring sensor nodes or from the previous sensed reading values of the same sensor nodes. Meanwhile, another important research direction for energy efficient data acquisition based on in-network query processing Hellerstein et al. (2003) has been extensively studied; we shall review some of the representative works in the coming four sections.
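The following is a minimal sketch of the sampling idea, under the assumption of uniform random node selection; the function name and the mean-estimation example are illustrative rather than taken from any of the cited works.

```python
import random

def sample_readings(readings, rate, seed=None):
    """Collect sensed readings from a random subset of nodes.

    readings: dict mapping node id -> sensed value (the full set V).
    rate: fraction of nodes to query, tunable to the energy budget.
    Returns the collected subset V' and an estimate of the mean over V.
    """
    rng = random.Random(seed)
    nodes = list(readings)
    selected = rng.sample(nodes, max(1, int(rate * len(nodes))))
    subset = {n: readings[n] for n in selected}
    estimate = sum(subset.values()) / len(subset)  # approximate analysis on V'
    return subset, estimate

# Example: estimate the field mean from 30% of the nodes.
V = {f"n{i}": 20 + (i % 7) for i in range(100)}
V_prime, mean_est = sample_readings(V, rate=0.3, seed=42)
```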
In-network data aggregation
Data aggregation, which derives a single value summarizing many sensed readings, is very suitable to sensor networks. In the following, we discuss two major strategies, namely, infrastructure-based approaches and itinerary-based approaches, for in-network data aggregation.
Infrastructure-based data aggregation
As their name suggests, infrastructure-based approaches build certain routing structures among sensor nodes to perform in-network data aggregation. TAG Madden et al. (2002) and COUGAR Yao & Gehrke (2003) are two representative infrastructure-based approaches. They both form a routing tree to disseminate a query and to derive aggregated sensed readings in a divide-and-conquer fashion. The rationale behind these approaches rests on two ideas. First, some aggregate functions f are decomposable, so that f(V) can be transformed into f(f(V_1), f(V_2), ..., f(V_x)), where V_1, V_2, ..., V_x are sensed reading values from x disjointed subsets of sensor nodes whose union equals V, and f can be applied both to readings from individual subsets of sensor nodes and to their aggregated readings. For example, SUM(V), where SUM adds all sensed reading values, can be performed as SUM(SUM(V_1), SUM(V_2), ..., SUM(V_x)). Second, the connections among sensor nodes can be organized as a tree topology, in which the root of any subtree that covers a disjointed subset of sensor nodes can carry out local aggregation on data from its descendant nodes. In other words, in-network data aggregation incrementally computes aggregated values at different levels of a routing tree. Figure 1(a) exemplifies a routing tree formed for data aggregation. In brief, upon receiving a SUM query for the total of sensed reading values from its connected computer terminal, a base station disseminates the query to sensor nodes within a specified queried region, which can be a small area or an entire sensed area. Within the queried region, sensor nodes join the routing tree when they receive the query. A node becomes the parent node of its neighboring nodes in the routing tree if those nodes receive the query from it. In a routing tree, the first queried node within the region serves as the root. Meanwhile, every non-root tree node has another sensor node as its parent node, and non-leaf nodes are connected to some other nodes as their child nodes.
After the tree is built, data aggregation starts from leaf nodes. The leaf nodes send their sensed reading values to their parent nodes. Thereafter, every non-leaf node derives an aggregated value based on the (aggregated) sensed reading values received from its child nodes and its own sensed reading value. As shown in Figure 1(a), the leaf nodes n1, n2, n3 first send their reading values of 2, 4 and 5, respectively, to their parent node n4. Then, n4 calculates the sum of their values and its own sensed reading value of 3, i.e., 14, and propagates it to its parent node n5. Eventually, the root derives the final sum over all sensor nodes in the region and reports it to the base station.
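A compact sketch of this divide-and-conquer SUM aggregation over a routing tree is shown below; the tree layout mirrors the Figure 1(a) example, and n5's own reading is assumed to be 0 since it is not given in the text.

```python
def aggregate_sum(node, children, readings):
    """Recursively compute a SUM aggregate up a routing tree.

    node: current tree node id; children: dict node -> list of child ids;
    readings: dict node -> local sensed value. Each non-leaf combines the
    partial sums of its subtrees with its own reading, mirroring
    SUM(SUM(V1), SUM(V2), ..., SUM(Vx)).
    """
    return readings[node] + sum(
        aggregate_sum(c, children, readings) for c in children.get(node, [])
    )

# The example from Figure 1(a): n1, n2, n3 report to n4, which reports to n5.
readings = {"n1": 2, "n2": 4, "n3": 5, "n4": 3, "n5": 0}
children = {"n5": ["n4"], "n4": ["n1", "n2", "n3"]}
assert aggregate_sum("n4", children, readings) == 14
```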
Itinerary-based data aggregation
The infrastructure-based approaches rely on an infrastructure to perform in-network data aggregation, incurring two rounds of messages: one for query dissemination and one for data collection. However, in the presence of sensor node failure, queries and aggregated sensed readings can be lost, making these approaches less robust and reliable. Additional research works Manjhi et al. (2005) were proposed to improve the robustness and reliability of routing trees by replicating aggregated values and sending them through different paths towards the root; however, this incurs extra data communication cost. To reduce the number of messages, we have recently developed itinerary-based data aggregation Xu et al. (2006).
The basic idea of itinerary-based data aggregation is to navigate a query among sensor nodes in a queried region as illustrated in Figure 1(b). In every step, a query message that carries both a query specification and an intermediate query result is strategically sent from one sensor node to another along a designed space-filling path called an itinerary. The width of an itinerary is bounded by the maximum radio transmission range. Sensor nodes participating in forwarding a query message are called Q-nodes. After it receives a query message, a Q-node asks its neighboring nodes for their sensed readings. Then, the Q-node incorporates all received sensed readings and its own reading into the intermediate query result. Thereafter, it forwards the query message with the new intermediate query result to a succeeding Q-node, which is chosen by the current Q-node. If a Q-node fails, its preceding Q-node can detect the failure and re-propagate the query message to another sensor node as a replacement Q-node. As such, the itinerary can be resumed from that new Q-node. The evaluation of a query completes when the specified region is completely traversed. Finally, the query result is returned to the base station. On the other hand, the length of an itinerary directly affects the query processing time: a single itinerary takes a very long processing time, especially in a large query region. Thus, as opposed to the single itinerary shown in Figure 1(b), a parallel itinerary has been developed to improve query processing time. As depicted in Figure 2(a), an itinerary is split into four threads scanning four rows in a region, and their intermediate query results are then aggregated at the end of the rows. However, wireless signals from two adjacent threads may lead to signal interference, message loss and, finally, data retransmission. As a result, longer time and more energy are consumed. To address this issue, a hybrid itinerary has been derived accordingly.
Here, a query region is divided into several sections that contain multiple rows. Inside each section, a single itinerary scans all the rows. For instance, as in Figure 2(b), a query region is partitioned into two sections, each covering two rows. Within each section, a sequential itinerary is formed. Now, because of the wider separation, the impact of signal interference is minimized while a higher degree of parallelism is achieved, compared with a single itinerary.
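The sketch below walks a single itinerary and aggregates readings along the way; the failure handling is simplified to skipping a dead Q-node, which stands in for the re-propagation step described above, and all names are illustrative assumptions.

```python
def itinerary_sum(itinerary, neighbors, readings, alive):
    """Walk a query along an itinerary of Q-nodes, aggregating as it goes.

    itinerary: ordered list of planned Q-node ids; neighbors: dict
    Q-node -> nearby nodes polled for their readings; alive: set of
    currently working nodes.
    """
    total, visited = 0, set()
    for q in itinerary:
        if q not in alive:      # preceding Q-node detects the failure
            continue            # and resumes with the next planned node
        for n in [q] + neighbors.get(q, []):
            if n in alive and n not in visited:
                total += readings[n]
                visited.add(n)
    return total

# Toy run: q2 has failed, so its neighbors are simply picked up later.
readings = {"q1": 1, "q2": 2, "q3": 3, "a": 4, "b": 5}
neighbors = {"q1": ["a"], "q2": ["b"], "q3": ["b"]}
alive = {"q1", "q3", "a", "b"}
assert itinerary_sum(["q1", "q2", "q3"], neighbors, readings, alive) == 13
```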
Through simulation, our itinerary-based approach has been demonstrated to outperform infrastructure-based approaches Xu et al. (2006). Besides, the idea of itinerary-based in-network query processing has also been adopted for other types of queries and applications, such as tracking nearest neighbor objects Wu et al. (2007).
In-network data caching
Data caching is widely used in distributed computer systems to shorten remote data access latency. In sensor networks, data caching has one more important benefit: saving communication energy cost. Many existing research works have focused on strategies for replicating frequently accessed sensed readings in sensor nodes closer to base stations Ganesan et al. (2003); Liu et al. (2004); Ratnasamy et al. (2002); Sadagopan et al. (2003); Shakkottai (2004); Zhang et al. (2007). In the presence of multiple base stations, the research problem of finding sensor nodes for caching sensed readings is formulated as determining a Steiner tree in a sensor network Prabh & Abdelzaher (2005). In a graph, a Steiner tree is a subgraph connecting all specified vertices with the smallest sum of edge distances Ivanov & Tuzhilin (1994). By caching data in the sensor nodes serving as internal vertices (those connecting more than one edge) in a Steiner tree, the communication costs between the sensor nodes providing sensed readings and the base stations are guaranteed to be minimized.
On the other hand, existing data caching schemes do not support data aggregation. Accordingly, we have devised a new data caching scheme called materialized in-network view (MINV) to support the SUM, AVERAGE, COUNT, and VARIANCE aggregate functions Lee et al. (2007). Specifically, MINV maintains partially computed aggregated readings in some queried sensor nodes. Then, subsequent queries, which are issued by different base stations and which cover queried sensor nodes, can be fully or partially answered by cached results.
Figure 3(a) shows a motivating example of MINV. In the figure, a SUM query Q1 adds up the sensed readings of all sensor nodes in a query region at time t1. At later times t2 and t3, two other SUM queries, Q2 and Q3, respectively, are issued to summarize readings from sensor nodes in two other queried regions overlapping Q1's. Without a cache, all queries are processed independently. Ideally, if Q1's answer can be maintained and made accessible, Q2 and Q3 can be answered from cached data, saving energy costs across the entire sensor network. On the other hand, two major issues arise in the development of MINV. The first and most critical issue is the presentation and placement of queried results, which directly affects the usability of cached data for any subsequent query. The other issue is how a query can be processed if its answer is partially or fully available from the cache.
In MINV, we consider a sensed area structured into a grid as shown in Figure 3(b), as opposed to building an ad hoc routing structure that favors queries issued by particular base stations at query time. Within every grid cell, denoted by cell(x, y), sensor nodes form a cluster and one of them is elected as a cluster head. Upon receiving a query, the cluster head collects sensed readings from all cluster members. Based on this setting, we can treat a sensor network as a grid of cluster heads. To answer aggregation queries, we assume parallel itinerary-based data aggregation as discussed in the previous section. As shown in Figure 3(b), intermediate results derived and maintained for a SUM query (called partial sums) are accumulated and cached in cluster heads within queried regions. In the figure, the cluster head at cell(3, 4) maintains an initial partial sum (i.e., init(3, 4)) and a final partial sum (i.e., final(3, 4)) of 7 and 10, respectively, while its local reading is 3. Based on cached partial sums, the sum of sensed readings in all cells between cell(x, y) and cell(x′, y) in the same row y can be determined as final(x′, y) − init(x, y). As in the figure, the sum of sensed readings of sensor nodes from cell(3, 4) through cell(7, 4) can be calculated as 31 − 7 = 24.
To answer another SUM query Q2 whose region is fully covered by Q1's, Q2 can simply traverse the border of its query region to collect cached partial sums. In Figure 3(c), Q2 sums up init(3, 3), init(4, 3), init(5, 3) and init(6, 3), i.e., 3 + 7 + 2 + 3 = 15, from the left side of its query region. Thereafter, it calculates the sum of final(6, 7), final(5, 3), final(4, 3) and final(3, 7), i.e., 19 + 18 + 31 + 17 = 85, from the right side of the region, and subtracts 15 from it, yielding a final sum of 70. Notice that only cluster heads on the border of a query region are accessed for cached partial sums and participate in query passing. By using the cache, messages between cluster heads and their members are saved. Besides, internal grid cells inside the query region are not accessed at all, further reducing energy costs.
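A minimal sketch of the row-wise partial-sum cache and the final(x′, y) − init(x, y) lookup follows; the per-cell cluster sums are hypothetical values chosen so that the example range reproduces the 31 − 7 = 24 computation above.

```python
def build_row_cache(row_readings):
    """Cache init/final partial sums per cell for one grid row.

    row_readings: list of per-cell cluster sums, left to right. init[x] is
    the partial sum before cell x, final[x] the partial sum through it.
    """
    init, final, running = [], [], 0
    for v in row_readings:
        init.append(running)
        running += v
        final.append(running)
    return init, final

def range_sum(init, final, x, x_prime):
    """Sum of readings from cell x through cell x' in one row, computed as
    final(x', y) - init(x, y) using only cached values."""
    return final[x_prime] - init[x]

init, final = build_row_cache([7, 3, 14, 7, 5])  # hypothetical per-cell sums
assert range_sum(init, final, 1, 3) == 24        # mirrors 31 - 7 = 24 above
```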
Some queries may have their query regions partially covered by previous queries. In these cases, those queries need to be decomposed into subqueries, where each subquery covers one disjointed subregion. The final query result is then computed by aggregating the subquery results. For instance, Q3's region is partially covered by Q1's. Thus, it is partitioned into three subqueries Q3a, Q3b and Q3c as illustrated in Figure 3(d). While Q3a is entirely answered by the cached partial sums, Q3b and Q3c are performed as separate SUM queries. The answer to Q3 is then obtained by adding the sums from these subqueries.
Thus far, cache information has been implicitly assumed to be available to every base station in the above discussion. In fact, it is not energy efficient to make cache information available everywhere. In MINV, we consider that the cache information, with initial and final intermediate results, is only maintained in queried grid cells. In this setting, cache discovery is an issue to consider. To determine whether a cache is available for a query, we introduced a probing stage in every query evaluation as illustrated in Figure 3(d). The main idea of this probing stage is as follows. When a query reaches the (nearest) corner of a query region, it traverses to the diagonally opposite corner and checks whether a cache is present in the traversed cells on the diagonal line. If no cache is discovered, there are two possible implications: (i) no cache is available inside the query region, or (ii) a cache, if it exists, has only a small overlap with the query region and is therefore considered not useful to the query. If no cache is used, the query is executed directly from the farthest corner. Otherwise, the query is transformed into subqueries accessing the cache and deriving aggregated reading values in the remaining divided areas. Notice that this additional probing stage introduces a little extra communication cost compared to evaluating queries directly, which usually derives query results at the farthest corners of query regions and sends the results from there back to base stations. Besides, in some cases, such as an entire query region being fully covered by a cache (e.g., Q2 as discussed above), the probing stage can be omitted.
In-network contour map computation
As discussed in the previous two sections, data aggregation is used to compute a single aggregated value representing the measurements for an entire sensed area or a query region. For a large sensed area, certain measurements recorded by sensor nodes, e.g., temperature, wind speed, etc., change continuously over the area. Data aggregation cannot effectively represent such spatially varied measurements. Thus, other presentations, e.g., histograms, contour maps, etc., should be used instead. Among those, contour maps are often used to present the approximate spatial distributions of measurements. On a contour map as illustrated in Figure 4(a), an area is divided into regions by curves called contour lines, and every contour line is labeled with one value. Thus, on a contour map, all measurements on a contour line labeled with v are equal to v, whereas measurements at points not on any contour line can be determined through interpolation according to their straight-line distances to adjacent contour lines. An earlier work Xue et al. (2006) proposed constructing a contour map as a grid, in which each grid cell carries a single aggregated value. This grid presentation can facilitate recognizing and matching spatial patterns of measurements against predefined patterns for event detection and phenomenon tracking. However, the grid presentation cannot provide very precise contour maps, and it may incur a large communication cost to convey individual grid cell values, especially when grids of very fine granularity are used.
Motivated by the importance of contour maps in sensor networks, we have developed a Contour Map Engine (CME) to compute contour maps in sensor networks Xu et al. (2008). More precisely, CME computes contour lines, which can be represented by the coefficients of certain curve/line equations and are thus compact to transmit. In a sensor network, every small area is assumed to be monitored by a cluster of sensor nodes as shown in Figure 4(b). Periodically, a cluster head collects sensed readings from all sensor nodes. Based on their spatial locations and reported sensed readings, the cluster head determines a contour line segment for the area and sends it to a base station. Finally, the base station connects all received contour line segments and constructs a contour map.
Logically, a contour line with respect to a given value v_c divides a given area into subareas on its two sides, as in Figure 4(c). On one side, all sensor nodes provide reading values not greater than v_c, whereas all sensor nodes on the other side have readings not smaller than v_c. Some sensor nodes reporting sensed readings exactly equal to v_c may be distributed around the contour line. Further, given the reading values and locations of individual sensor nodes, partitioning an area by a contour line segment is essentially a binary classification problem. In light of this, the design of CME uses the support vector machine (SVM) Christianini & Shawe-Taylor (2000), a commonly used data mining technique, to determine contour line segments. In a cluster of sensor nodes N′, each sensor node n (∈ N′) provides its location x_n and its classified value y_n, which is either −1 or +1, according to its own sensed reading v_n and the contour line value v_c. Next, we define the classification boundary (i.e., the contour line segment) as a hyperplane given by a pair of coefficients (w, b) such that wᵀx + b = 0. Based on this, we can estimate an expected label ŷ = sign(wᵀx + b) for any location x, which may not host any sensor node. Now, the classification boundary in SVM is derived to maximize the margin between the convex hulls of the two sets, such that the classification error for unknown locations is minimized, as depicted in Figure 4(d). The distance between any location x and the classification boundary is |wᵀx + b| / ||w||. The optimal classification boundary is derived by maximizing the margin, which can be written with Lagrange multipliers α_n as the dual problem W(α) = Σ_{n∈N′} α_n − (1/2) Σ_{n,m∈N′} α_n α_m y_n y_m x_nᵀ x_m, subject to α_n ≥ 0 and Σ_{n∈N′} α_n y_n = 0. Finally, max_α W(α) can be solved by traditional quadratic optimization.
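As an illustration, the per-cluster SVM step can be prototyped with scikit-learn's linear SVC, which solves the same dual problem; this is a desktop sketch rather than the in-node solver used by CME, and the toy cluster data are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def contour_segment(locations, values, v_c):
    """Fit a linear contour line segment for one cluster via SVM.

    locations: (m, 2) array of sensor positions x_n; values: length-m
    sensed readings v_n; v_c: contour value. Nodes are labeled -1/+1 by
    comparing v_n with v_c, and the maximum-margin hyperplane
    w^T x + b = 0 is taken as the contour line segment. Only (w, b)
    would be transmitted to the base station.
    """
    y = np.where(values >= v_c, 1, -1)
    clf = SVC(kernel="linear", C=1e3).fit(locations, y)
    return clf.coef_[0], clf.intercept_[0]

# Toy cluster: readings increase with the x-coordinate, so the contour
# for v_c = 5 should come out roughly vertical.
locs = np.array([[0, 0], [0, 1], [1, 0], [3, 0], [3, 1], [4, 1]], float)
vals = np.array([2, 3, 3, 7, 8, 9], float)
w, b = contour_segment(locs, vals, v_c=5)
```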
Thus far, our discussion has assumed that a single linear contour line segment is formed. To handle non-linear classification, our CME utilizes space transformation to divide sensor nodes into sub-clusters according to some sample training data. Then, contour line segments are derived from the individual sub-clusters. Interested readers can refer to Xu et al. (2008) for the details. Some other recent works (e.g., Zhou et al. (2009)) have been presented in the literature to improve the precision of contour line segments by using more sophisticated techniques.
In-network probabilistic data aggregation
Sensor reading values are inherently noisy and somewhat uncertain because of possible inaccurate sensing, environmental noise, hardware defects, etc. Thus, data uncertainty is another important issue in sensor data analysis. In the literature, uncertain data management has been extensively studied and various models have been developed to provide the semantics of the underlying data and queries Faradjian et al. (2002); Prabhakar & Cheng (2009). However, existing works adopt centralized approaches Faradjian et al. (2002); Prabhakar & Cheng (2009), which, as already discussed, are energy inefficient. In-network uncertain data aggregation thus appears to be a new research direction.
Very recently, we have started to investigate a variety of in-network data aggregation techniques for some common aggregation queries over uncertain data. In the following, we discuss one of our recent works on the probabilistic minimum value query (PMVQ) Ye, Lee, Lee, Liu & Chen (to appear). A probabilistic minimum value query searches for the possible minimum sensed reading value(s). Here, the probabilistic sensed reading r_i of a sensor node n_i is a set of possible values, where each value v_{i,k} is associated with a non-zero probability p_{i,k} of being the real sensed reading value, and the sum of all p_{i,k} (1 ≤ k ≤ |r_i|) equals 1. In Figure 5(a), the sensed reading r_i of each example sensor node n_i is shown next to the node. For n_1, the actual sensed reading value may be either 5 with a probability of 0.5 or 6 with the same probability. Since every sensed reading has different possible values, it is apparently not trivial to say that 3, the smallest possible value among all, is the minimum, since it may not actually exist. On the other hand, 4 can be the true minimum when 3 is not real. As such, more than one value can be the minimum value simultaneously. Thus, the minimum value probability of v being the minimum v_min among all possible sensed reading values, denoted by Pr[v_min = v], is introduced and defined as Pr[v_min = v] = Π_i Pr[r_i ≥ v] − Π_i Pr[r_i > v], i.e., the probability that all readings are at least v minus the probability that all readings exceed v. In our example, Pr[v_min = 3] is equal to (1 · 1 · 1 · 1) − (1 · 0.6 · 1 · 0.9) = 0.46, Pr[v_min = 4] is equal to (1 · 0.6 · 1 · 0.9) − (1 · 0 · 0.6 · 0.8) = 0.54, and both Pr[v_min = 5] and Pr[v_min = 6] are 0, as listed in Figure 5(b). Hence, the minimum value query result includes 3 and 4, whose minimum value probabilities are greater than 0.
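The definition above can be checked with a few lines of code; the distributions for n2 through n4 below are reconstructed so as to reproduce the worked probabilities, since the figure itself is not included here.

```python
def min_value_probabilities(readings):
    """Compute Pr[v_min = v] for discrete probabilistic sensed readings.

    readings: list of dicts, one per node, mapping a possible value to its
    probability (each dict sums to 1). Implements
    Pr[v_min = v] = prod_i Pr[r_i >= v] - prod_i Pr[r_i > v].
    """
    candidates = sorted({v for r in readings for v in r})
    result = {}
    for v in candidates:
        p_ge = p_gt = 1.0
        for r in readings:
            p_ge *= sum(p for val, p in r.items() if val >= v)
            p_gt *= sum(p for val, p in r.items() if val > v)
        result[v] = p_ge - p_gt
    return result

readings = [
    {5: 0.5, 6: 0.5},          # n1, as given in the text
    {3: 0.4, 4: 0.6},          # n2 (assumed)
    {4: 0.4, 5: 0.6},          # n3 (assumed)
    {3: 0.1, 4: 0.1, 5: 0.8},  # n4 (assumed)
]
probs = min_value_probabilities(readings)
assert round(probs[3], 2) == 0.46 and round(probs[4], 2) == 0.54
```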
To evaluate PMVQs in sensor networks, we have devised two algorithms, namely, the Minimum Value Screening (MVS) algorithm and the Minimum Value Aggregation (MVA) algorithm. Both of the algorithms evaluate PMVQs in sensor networks organized as routing trees. We describe them in the following.
MVS Algorithm. Suppose that there are two probabilistic sensed readings r_i and r_j from two sensor nodes n_i and n_j, where r_i = {v_{i,1}, ..., v_{i,|r_i|}} and r_j = {v_{j,1}, ..., v_{j,|r_j|}}. A value v_j (∈ r_j) is certainly not the minimum if r_i has all its values smaller than it, i.e., ∀ v_i ∈ r_i: v_i < v_j. Then, v_j can be safely discarded. Based on this idea, we introduced a notion called MiniMax. Among the sensed readings from a subset of sensor nodes N′, the MiniMax, denoted by MiniMax(N′), represents the largest value that can still possibly be the minimum; formally, MiniMax(N′) = min_{n_i ∈ N′} max_{v ∈ r_i} v, so that any candidate value greater than MiniMax(N′) can be screened out. With the MVA algorithm, only part of the probability information needs to be sent to a parent node, and the omitted probabilities can be deduced by the parent node. In addition to the probabilistic minimum value query, we have also investigated other probabilistic queries in sensor networks, e.g., the probabilistic minimum node query (PMNQ) Ye, Lee, Lee, Liu & Chen (to appear), which searches for sensor nodes that provide probabilistic minimum values, and the probabilistic top-k value query, which searches for the k smallest (or largest) values Ye, Lee, Lee & Liu (to appear).
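A sketch of the MiniMax screening is given below, reusing the probabilistic readings from the previous example; only the candidate screening is shown, since the text's description of the MVA transmission details is incomplete.

```python
def minimax(readings):
    """MiniMax(N') = min over nodes of the largest possible value of each reading."""
    return min(max(r) for r in readings)

def mvs_candidates(readings):
    """Screen out values that can never be the minimum: any candidate v with
    v > MiniMax(N') is dominated by some node whose reading is always smaller."""
    bound = minimax(readings)
    return sorted({v for r in readings for v in r if v <= bound})

# With the readings above, MiniMax = min(6, 4, 5, 5) = 4, leaving only
# the candidates 3 and 4 -- exactly the values with non-zero probability.
assert mvs_candidates(readings) == [3, 4]
```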
Summary and future directions
Wireless sensor networks are important tools for many fields and applications. In sensor networks, data acquisition, which collects data from individual sensor nodes for analysis, is one of the essential activities. However, because of scarce sensor node battery energy, energy efficiency becomes a critical issue for the length of a sensor network's operational life. Over the years, many research works have studied in-network query processing as one of the remedies for precious sensor node energy. With in-network query processing, queries are disseminated and processed by sensor nodes, and a small volume of (derived) data, rather than raw sensed readings, is collected and transmitted over costly wireless communication. Subject to the supported types of queries and potential optimizations, a variety of in-network query processing techniques have been investigated and reported in the literature. This chapter has been devoted to reviewing representative works in in-network data aggregation, data caching, contour map computation and probabilistic data aggregation. With respect to those areas, we also discussed our recent research results, namely, itinerary-based data aggregation, materialized in-network view, contour mapping engine and probabilistic minimum value search. Itinerary-based data aggregation navigates a query among sensor nodes in a queried region to derive an aggregated value. Compared with infrastructure-based approaches, it incurs fewer rounds of messages and can easily deal with sensor node failure in the course of query processing. To boost the performance of multiple queries issued from different base stations, materialized in-network views provide partial results of previous queries to subsequent aggregation queries. This is different from existing works that cache sensed readings independently and cannot directly support data aggregation. The contour mapping engine adopts data mining techniques to determine contour line segments in sensor networks, whereas other works rely on centralized processing or provide less accurate contour maps. Last but not least, probabilistic minimum value search is an initial research result on uncertain sensed data aggregation. As sensed reading values are mostly imprecise, handling and querying probabilistic sensor data is currently an important ongoing research direction.
In addition, recent research studies have shown uneven energy consumption among sensor nodes: sensor nodes in some hotspot regions consume more energy than others Perillo et al. (2005). Such hotspot problems are currently studied from the networking side. Besides, heterogeneous sensor nodes are going to be very common in sensor networks. Thus, we anticipate that future in-network query processing techniques should be able to handle uneven energy consumption and to make use of super sensor nodes, whereas many existing works mainly presume homogeneous sensor nodes and consider even energy consumption.
"year": 2012,
"sha1": "054a6b015a073c2366c8bde0e3463cea7b03206b",
"oa_license": "CCBY",
"oa_url": "https://www.intechopen.com/citation-pdf-url/37517",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "73f523d1d88319b676001c7cd24c7c303f472623",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
A Novel Technique for Handwritten Digit Recognition Using Deep Learning
Handwritten digit recognition (HDR) shows a significant application in the area of information processing. However, correct recognition of such characters from images is a complicated task due to immense variations in the writing style of people. Moreover, the occurrence of several image artifacts like the existence of intensity variations, blurring, and noise complicates this process. In the proposed method, we have tried to overcome the aforementioned limitations by introducing a deep learning- (DL-) based technique, namely, EfficientDet-D4, for numeral categorization. Initially, the input images are annotated to exactly show the region of interest (ROI). In the next phase, these images are used to train the EfficientNet-B4-based EfficientDet-D4 model to detect and categorize the numerals into their respective classes from zero to nine. We have tested the proposed model over the MNIST dataset to demonstrate its efficacy and attained an average accuracy value of 99.83%. Furthermore, we have accomplished the cross-dataset evaluation on the USPS database and achieved an accuracy value of 99.10%. Both the visual and reported experimental results show that our method can accurately classify handwritten digits from images even with varying writing styles and under the presence of various sample artifacts like noise, blurring, chrominance, position, and size variations of numerals. Moreover, the introduced approach is capable of generalizing well to unseen cases, which confirms that the EfficientDet-D4 model is an effective solution to numeral recognition.
Introduction
Character recognition for various languages has been highly explored by researchers [1][2][3]; however, the most prominent area is HDR. HDR plays a fundamental part in the area of information manipulation, as a huge amount of data is available in the form of printed text or pictures [4]. Furthermore, analyzing digital information is more cost-effective than processing information manually from printed paper. The objective of HDR approaches is to recognize and translate handwritten digits into machine-understandable representations. Nowadays, HDR attracts the extensive focus of the research community because of its multiple applications. These frameworks are capable of recognizing what is written on printed pages and allow scientists to explore significant information saved in historic documents and files, which appears unrecognizable to human eyes [5]. Furthermore, HDR techniques are important for the digital transformation of any business and institute. Computerized HDR frameworks can assist in many areas, for example, automated detection of vehicle number plates [6] and recognition of the digits written on medical receipts, which can aid chemists, patients, and staff. Moreover, psychologists can use HDR systems to analyze a patient's personality [7]. However, the above-mentioned areas have large databases; that is why automated HDR systems need to be efficient and effective, with a small execution time and consistent results. In recent years, several HDR systems have been presented to assist in a variety of applications. Such systems are required to exhibit improved numeral recognition and classification accuracy in a consistent manner [6][7][8]. Automated recognition of historical handwritten scripts is a challenging task because of the varying handwriting characteristics, languages, and styles, which are vulnerable to intrawriter and interwriter differences [9]. Moreover, the handwritten numerals found in electronic documents and images differ in terms of their location, size, spacing from the file margins, and width, which increases the difficulty of distinguishing them accurately [10]. Due to this diversity, handwritten digit-detection and classification systems are customized for specific applications to improve the overall performance of the system [6,11].
In the area of HDR, pattern recognition and image processing play an important role in memorizing patterns in handwriting. Recently, several HDR frameworks [12][13][14][15][16][17][18][19][20] have been presented to recognize handwritten numerals. A typical HDR system involves the following main phases: data preparation, segmentation, key point computation, and categorization [21,22]. Rapid developments in the area of HDR demonstrate progress in learning algorithms [5,9,23] and the accessibility of huge databases [24,25]. The research community has introduced several solutions utilizing handcrafted key point-based methods [13][14][15][16][26], dense network approaches [18][19][20][27], etc. Some of the most often used handcrafted features in character identification include zoning features [28,29], projections [30], Fourier descriptors [31,32], contour direction histograms [9,33], chain coding [33,34] and invariant moments [35]. These key points can be combined to create a reliable set of features that can be used to train a classifier, and the computation of the features can significantly affect a classifier's performance. Numerous classification algorithms, including support vector machines (SVMs) [13][14][15][16], k-nearest neighbors (KNNs) [36], decision trees (DTs) [37], and random forests (RFs) [13], have been adopted. A reliable set of features should represent all characteristics of handwriting that are specific to a particular category and be as discriminatory as possible against all other classes [38]. However, handcrafted feature-based approaches have been found ineffective due to the substantial amount of time required for data preparation and the increased training time. Moreover, these approaches are unable to effectively locate numerals in distorted images. DL is now a rapidly expanding field among other machine-learning (ML) approaches for achieving improved performance in the areas of pattern recognition [39,40] and character detection [23,[41][42][43]] due to its discriminative key point computation and classification properties. Among other frameworks of DL, deep neural networks (DNNs) are more time-consuming due to the increased number of hidden units and links [44]. From the DL family, CNN approaches are currently more popular for image analysis, since these models employ fewer hidden layers than a DNN and have fewer parameters [40]. Moreover, these approaches are proficient at computing location-invariant key points in a viable timeframe because of their effective pattern-recognition capabilities. Furthermore, CNNs are proficient in mapping input information to the output using temporal subsampling and thus are not affected by distortions or basic geometric modifications such as rotation, translation, squeezing, or scaling [45]. Since handwritten numerals can be found in a variety of styles and orientations, several CNN algorithms have been extensively investigated by researchers to address challenges in the domain of automated HDR [46][47][48]. The authors of [16] suggested a CNN and SVM-based HDR system and showed improved classification results. In [5], another CNN-based approach, namely, the binary convolutional neural network (BCNN), for HDR was proposed. This approach generated high recognition results but was unable to learn the advanced characteristics of input samples. In [47], the authors suggested two designs for HDR using a feedforward NN along with the CNN model for key point computation and categorization. The results show that CNN outperforms FWNN when it comes to
handwritten numeral identification. In [48], an ensemble technique for HDR composed of various CNN architectures was developed to improve classification efficiency at the expense of greater computational cost. Although these prior studies [5,47,48] have attained excellent recognition precision, there is still potential for the development of automated HDR in terms of speed and accuracy. Several approaches have been presented historically for the effective and efficient detection of numerals from images, most of which are based on DL approaches. The main shortcoming of DL-based HDR techniques is their inefficiency and significant processing times. Moreover, many contributions in the literature on the recognition of handwritten and printed text focus primarily on simple characters with no anomalies or noise [49]. However, in real scenarios, handwritten characters can be broken/distorted and produced in unrestricted conditions such as noise and blurriness, along with varying illumination conditions, contrast, and intensity. These factors cause identification algorithms to exhibit multiple contradicting behaviors, thus affecting overall recognition accuracy. Rani et al. [46] proposed a deep CNN model based on Alex-Net for the identification of characters with broken and messy appearances. The accuracy of the model is reported at 92% on a synthetically gathered database. Similarly, in [23], an object recognition framework was adopted for the identification of handwritten numerals under the presence of noise and distortions. This approach attains comparatively higher recognition results but at the charge of an enhanced computational burden.
The proposed study varies from previous research [23,[46][47][48]] as it demonstrates the usefulness of CNNs in terms of high accuracy as well as low computational and processing complexity for categorizing handwritten numerals. The main motivation of this work is to present an efficient and effective solution to the aforementioned issues of numeral recognition. To this end, we suggest an efficient CNN framework for HDR that is inspired by the EfficientDet model [50] and produces higher recognition rates in comparison to the existing latest approaches. In the EfficientDet-D4 model, the EfficientNet-B4 CNN backbone aids in the extraction of more reliable information from the input handwritten numeral image. The proposed DL framework can accurately detect and identify handwritten numbers in the presence of distorted backgrounds because it uses multiscale feature fusion during feature computation. The experimental findings validate the effectiveness of the proposed approach in dealing with complex situations such as changes in size, orientation, writing formats, and styles, and strong structural similarity among various digits. Moreover, the presented framework is also robust against noise and blurring in the input samples and can efficiently recognize numerals while minimizing the execution time due to its shallow architecture.
The following are the main contributions of the proposed technique:
(1) We present an end-to-end deep learning framework based on the EfficientDet-D4 model for the precise identification and classification of handwritten numerals
(2) The presented approach uses the lightweight backbone EfficientNet-B4 to compute robust and discriminative key points that improve the overall performance of HDR while reducing model training and execution time
(3) To exhibit the usefulness of the presented framework, a rigorous quantitative and qualitative comparison of the provided technique was undertaken using the standard benchmark MNIST database. The findings show that the proposed CNN model improves recognition rates when compared to previous CNN-based algorithms
The remaining structure of this study is as follows: Section 2 presents an overview of the related work on identifying handwritten numbers from images. Section 3 provides details of the proposed methodology adopted for recognition. In Section 4, the experimental settings and findings are discussed, and lastly, the study is concluded with some recommendations in Section 5.
Related Work
Significant work has been done in the past on the development of automated HDR systems. Boukharouba and Bennia [9] suggested a handcrafted feature-based approach for HDR. A distinctive key point set was generated by employing the chain code histogram (CCH) [33] approach along with the pixels' transition information in the horizontal and vertical directions. An SVM classifier was then trained to classify the extracted key points. This approach [9] is capable of accurately recognizing handwritten numerals; however, it necessitates extensive training on a larger database. In [51], the author proposed an ensemble classification method using bagging for improved accuracy. A hybrid system based on bagged-SVM, bagged-RBF, and RBF-SVM was designed and evaluated on different real and benchmark datasets. The performance evaluations demonstrated that the suggested hybrid RBF-SVM classifier performs well in terms of classification results; however, its generalization power needs further enhancement. Similarly, in [52], the authors introduced a new approach for identifying handwritten digits by combining different feature-extraction methods and employing ensemble classifiers. Six feature sets were obtained in total, and the MNIST database was used to evaluate this model. The results demonstrated improved recognition performance; however, noise, distortion, or an unusual writing style causes performance degradation. Dine et al. [53] presented a novel feature computation method based on structural and statistical approaches. Initially, preprocessing was performed to binarize, crop, and normalize the input data. Then, four different feature sets were computed by using the cavity [54], zoning [28], Freeman chain coding (FFC) [34], and profile projection [55] methods. Lastly, KNN was used for the classification of key points. This approach [53] attained a recognition accuracy of 95% using FFC on the MNIST database; however, it may not perform well on challenging databases. Hou and Zhao [56] employed a combination of both handcrafted and deep features for the recognition of handwritten numerals. The Gabor feature-extraction algorithm was used to compute the handcrafted key points, and a CNN classifier was trained using the calculated key points. This method [56] enhanced the accuracy of number identification; however, it has an increased computational cost. Pham et al. [57] suggested a dropout regularization strategy to improve the resilience of RNNs for HDR. This approach increases RNN accuracy by significantly lowering character and word error rates. Shamim et al. [13] developed an offline HDR technique using several ML algorithms such as multilayer perceptron (MLP), Naive-Bayes, Bayes-Net, SVM, J48, RF, and random tree. The results showed that MLP outperforms the other classifiers in recognition performance. In [58], the authors presented a convolutional recurrent neural network (CRNN) that combined the advantages of a deep CNN with an RNN. This technique [58] was also used for scene text categorization and outperformed existing approaches for number recognition; however, it is computationally complex.
Wang et al. [59] introduced the quantum k-neighbor technique for HDR. The method in [59] reduces the computational burden in comparison to the traditional k-nearest neighbor algorithm; however, its performance needs further improvement. Another technique for HDR was proposed in [60]. In the first step, the multizoning technique [61] was applied for key point calculation. In the next step, SVM and MLP classifiers were trained over the computed features to locate the handwritten digits. The method in [60] exhibits better numeral recognition performance but may not show good detection accuracy when digits form a triangular shape. Assegie and Nair [37] introduced an approach for HDR in which image pixels were employed as feature vectors to train a DT for digit localization and detection. This approach [37] is simple to implement; however, its recognition accuracy needs further enhancement. Ali et al. [42] proposed a framework for localizing and recognizing numerals, in which the Java-based DL4J model was utilized for key point computation. In the next step, the extracted features were utilized to train a CNN to classify the detected digits. It is concluded in [42] that for small-sized databases, a CNN with a minimal number of layers exhibits better detection accuracy. Another DL-based framework, named the deep convolutional self-organizing map (DCSOM), was introduced in [62] to automatically locate the digits in input samples. The method in [62] employed the DCSOM model to compute local histograms and deep key points to produce the categorization results. This method [62] works well for digit categorization under the occurrence of noise but may not perform well under rotational variations. Hafiz and Bhat [18] proposed an effective hybrid classifier by applying DL with a Q-learning-based reinforcement learning method [63]. This work [18] shows better digit categorization accuracy under rotational alterations but at the expense of an increased computational burden. A 3-layered spiking neural network (SNN) for recognizing numerals was proposed in [19]. It is concluded in [19] that the SNN-based DL model works well for numeral classification in comparison to traditional ANN frameworks with backpropagation (BP) techniques; however, such solutions may not perform well on large benchmark classification problems. Other researchers modified the BP algorithm by adopting a variable step size approach and the Newton method to improve the network convergence speed [27]. Although this methodology enhanced the algorithm's handwritten numeral recognition performance, it requires more memory and is therefore inadequate for large-scale applications. In [20], a rapid handwritten numeral identification technique was proposed based on affinity propagation (AP) clustering and the BP neural network. However, the aforementioned methods [20,27] lack the ability to represent robust features and are easily influenced by external factors such as noise and blurriness, making them unable to achieve better recognition accuracy. Moreover, the described studies [20,27] have limitations in terms of accuracy and computing time that can be further improved.
Jain et al. [64] presented a rotation-invariant architecture using the CNN model for the identification of handwritten digits and captcha recognition. The approach employs multiple instances of the LeNet model [4]; each model is trained on different rotation angles and is thus robust to high-degree variations in the orientation of digits. Ali et al. [41] introduced a CNN with an extreme learning machine- (ELM-) based method for the recognition of handwritten numbers. For the extraction of discriminative key points, an enhanced CNN based on the LeNet model [4] having 5 hidden layers was built, with an ELM classifier for key point classification into the 0-9 classes. This strategy [41] improves the classification accuracy; however, the model suffers from the overfitting problem. The authors also investigated the effect of increasing the number of hidden layers on model performance and discovered that adding hidden layers reduces model performance for HDR. Similarly, in [65], the authors suggested an ANN- and ELM-based technique for the classification of MNIST handwritten digits. The results showed that ELM attains better classification accuracy and less processing time than the ANN approach. Albahli et al. [23] proposed a region-based CNN, namely, Faster-RCNN, for the HDR system. The authors employed an improved key point extraction network based on DenseNet-41 for the computation of a discriminative feature set. The region proposal network was then used to produce the ROIs and classify the handwritten numerals. This method [23] correctly recognized digits under complex transformations, including rotation and scale variations; however, it relies on a predetermined set of anchor boxes and thus involves extensive hyperparameter choices during the training process. Table 1 presents a comparison of existing approaches and their limitations. We reviewed numerous automated approaches for accurately distinguishing handwritten numbers; nevertheless, performance can still be further improved in terms of accuracy and computing complexity. Existing methods either necessitate extensive preprocessing or are ineffective when dealing with distorted backgrounds or complex situations such as changes in size, orientation, writing formats, styles, noise, and blurriness. Furthermore, these methods require considerable training and are susceptible to model overfitting, resulting in poor performance on unseen data, which can be enhanced.
Method and Material
To design the numeral recognition system, we followed two main phases: (i) preparing the data and (ii) handwritten digit localization and classification. The entire flow of the introduced approach is explained in Figure 1, while the mathematical model formulation is shown in Algorithm 1. In the first phase, annotations are designed to exactly locate the area containing each numeral. Next, the created annotations are used to perform the training of the introduced approach, namely, the EfficientNet-B4-based EfficientDet-D4. The EfficientDet-D4 model consists of three stages to perform the detection and categorization of handwritten digits. Initially, EfficientNet-B4 is used as the base network to calculate the features from the input image. The EfficientDet-D4 framework accepts two types of inputs, namely, the input and annotated samples. Then, the BiFPN unit of the EfficientDet-D4 approach executes top-down and bottom-up feature concatenation numerous times to calculate the final feature vector. Next, the prediction module performs the localization and classification of numerals by employing the computed key points, and the obtained performance results are determined by using the standard metrics employed in the field of deep learning. A detailed demonstration of the phases followed by the introduced work is given in Algorithm 1, while the in-depth details of all steps are discussed in the subsequent sections.
Dataset.
To analyze the classification accuracy of the presented framework, several experiments were performed over a challenging database, namely, the MNIST (Modified National Institute of Standards and Technology) dataset [24]. This dataset contains 10 classes of numerals representing the digits 0 to 9. MNIST is a standard and the largest dataset of handwritten digits, heavily used by the research community to train numerous image-processing models. The used dataset consists of a total of 70,000 samples, of which 60,000 images belong to the training part, while the remaining 10,000 samples form the test set. The images from the MNIST dataset are challenging in nature as they are subject to variations in the size, scale, and angle of digits. Moreover, the images also suffer from blurring, noise, and intensity variations, which makes it a complex database for numeral classification. Some sample images from the MNIST dataset are presented in Figure 2.
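As a quick illustration, the split described above can be loaded with the Keras dataset helper; this loader is an assumption made for illustration, since the paper does not name its data-loading pipeline.

```python
from tensorflow.keras.datasets import mnist

# Standard MNIST split: 60,000 training and 10,000 test images of
# 28x28 grayscale handwritten digits labeled 0-9.
(x_train, y_train), (x_test, y_test) = mnist.load_data()
assert x_train.shape == (60000, 28, 28) and x_test.shape == (10000, 28, 28)
assert sorted(set(y_train.tolist())) == list(range(10))  # the 10 classes
```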
Annotations.
For the training procedure, it is necessary to correctly determine the location of digits in the input images. To accomplish this, the labeling program [26] is used to annotate the digit image portions and precisely indicate the ROIs. The generated values are stored in XML files containing the Bbox values along with the class of the region. After that, these files are used to generate the training file, which is required for framework training.
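Such XML files typically follow the Pascal VOC layout produced by common labeling tools; the following sketch, with tag names assumed to match that layout, shows how the bounding box values and class labels could be read back when generating the training file.

```python
import xml.etree.ElementTree as ET

def read_voc_annotation(xml_path):
    """Parse a Pascal VOC style XML annotation into (class, bbox) pairs.

    Returns a list of (label, (xmin, ymin, xmax, ymax)) tuples, the form
    typically converted into the training file for detector training.
    """
    root = ET.parse(xml_path).getroot()
    boxes = []
    for obj in root.iter("object"):
        label = obj.findtext("name")
        bb = obj.find("bndbox")
        coords = tuple(int(float(bb.findtext(k)))
                       for k in ("xmin", "ymin", "xmax", "ymax"))
        boxes.append((label, coords))
    return boxes
```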
EfficientDet.
Accurate and efficient feature extraction is essential to classify the input samples into digit categories. At the same time, obtaining a more descriptive feature set is a difficult task for several reasons: computing too large a feature set causes model overfitting, while too small a feature set causes the model to miss learning some important image characteristics, i.e., size, color, and texture. So, it is essential to utilize an automated feature extractor instead of handcrafted feature-extraction methods, one which can estimate more representative features from the input images. The handcrafted feature-estimation methods are not effective in precisely localizing and categorizing the digits due to different factors, i.e., position, shape, color, chrominance, etc. To overcome these challenges, we have used the EfficientDet DL method, which can extract features from the images under examination.
The convolution filters compute the features of the image by exploring the structure of an object. Numerous object-detection techniques have been introduced for recognition tasks. These methods are divided into one-stage (YOLO [66], CornerNet [67], CenterNet [68], etc.) and two-stage object detectors (Fast-RCNN [69], Faster-RCNN [70], Mask-RCNN [39], EfficientDet [71], etc.). The reason for choosing EfficientDet over one-stage methods is that those approaches compromise detection accuracy to minimize classification time, whereas two-stage methods show better digit-detection performance but are computationally complex. To tackle these problems, we have employed the EfficientDet technique (proposed by the Google Brain team). The EfficientDet method is a more robust and scalable model due to the feature pyramid network (FPN) architecture, which calculates the multidirected feature fusion. The proposed method has three main components, i.e., feature estimation through EfficientNet-B4, a bidirectional FPN (BiFPN) used for both top-down and bottom-up feature fusion, and the final component, which is utilized to localize and classify digits.
The following are the three modules of our method.
Feature Extraction through EfficientNet-B4
For the extraction of features, we have utilized EfficientNet-B4 as the backbone network of the EfficientDet technique. The EfficientNet technique reliably balances each dimension with a static set of scaling coefficients, as compared to traditional models, which perform scaling randomly. The employed feature extractor, namely, EfficientNet-B4, is empowered to calculate a discriminative set of sample key points while maintaining a minimal set of framework parameters, which helps enhance the recognition performance of the proposed solution. The EfficientNet approach can easily handle various transformations of the input images, which allows it to robustly tackle the problem of the nonexistence of the digits' position information. Moreover, the EfficientNet model permits the reuse of the extracted key points, which makes it more appropriate for HDR detection and classification and speeds up the training method as well. An illustration of the feature extractor EfficientNet-B4 is presented in Figure 3.
BiFPN.
For the efficient detection and classification of handwritten digits, several object transformations, such as the location, shape and structure of digits, the sample background, and intensity variations, must be taken into account. Hence, employing multiscale feature extraction can play a vital role in correctly identifying the digits. Typically, traditional models only utilize top-down FPNs to combine the multistage features, which cannot completely handle the scale variations of samples and causes the model to miss learning important aspects of object structure information. Therefore, such model architectures do not generalize well to recognizing digits of varying sizes and with different angle orientations, which ultimately degrades numeral detection and classification performance. To deal with the above-mentioned problems, the proposed approach uses the BiFPN, which moves information in both the top-down and bottom-up directions by utilizing consistent and well-organized links. Furthermore, the BiFPN unit exploits trainable weights to calculate semantic features, which ultimately enhances the HDR performance of the introduced framework. Hence, such a structure allows the more important set of image features selected by the EfficientNet-B4 model to be used as input by the BiFPN unit. The width and depth of BiFPN are obtained as defined in the following equation: w_b = 64 · (1.35^∅), d_b = 3 + ∅. Here, w_b and d_b depict the width and depth of the BiFPN unit, respectively, whereas ∅, the compound coefficient that controls the scale sizes, has a value of 0 for the proposed approach.
Box/Class Prediction Module.
The resultant features are given to the box/class prediction network to find the Bbox values of the digit areas along with the respective class. The width of this network is the same as that of the BiFPN, whereas its depth is calculated as follows:

d_box = d_class = 3 + ⌊φ/3⌋.  (2)

3.4. Detection Procedure. The proposed technique is free from the extra stages of other approaches, i.e., proposal generation and selective search. Moreover, the input, along with the respective annotations, is given to EfficientDet, which directly calculates the digit locations and their respective classes.
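The two scaling rules in equations (1) and (2) can be made concrete with a small helper. This is an illustrative sketch, not the authors' code, and real implementations typically round the width to a hardware-friendly multiple.

```python
import math

def bifpn_width_depth(phi: int) -> tuple[int, int]:
    """BiFPN compound scaling: w_b = 64 * 1.35**phi, d_b = 3 + phi."""
    return int(64 * (1.35 ** phi)), 3 + phi

def head_depth(phi: int) -> int:
    """Depth of the shared box/class prediction head: 3 + floor(phi / 3)."""
    return 3 + math.floor(phi / 3)

print(bifpn_width_depth(4))  # (212, 7): raw formula values for the D4 scale
print(head_depth(4))         # 4
```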
Experimental Results
In this section, we provide a detailed description of the employed dataset along with the metrics used to assess the performance of the proposed model. Moreover, we have performed a series of experiments to check the numeral detection and classification performance of the presented approach. We have implemented the proposed method in the Python language and executed it on an Nvidia GTX1070 GPU-based system. Table 2 displays the details of the training parameters of the proposed work. We have reported the training and loss curves in Figure 4 to show the optimized learning behavior of the proposed approach.
Evaluation Metrics.
To evaluate the numeral identification and categorization performance of the proposed approach, we have employed several standard performance measures, namely intersection over union (IOU), accuracy, precision, recall, and mean average precision (mAP). The framework classification performance is calculated using the following equation:

mAP = (1/T) Σ_{t=1}^{T} AP_t.  (3)

Equation (3) exhibits the computation of the mAP score, in which AP_t depicts the average precision value calculated over all categories, t indicates the image under evaluation, and T symbolizes the total number of samples. Figure 5 displays a visual depiction of IOU, precision, and recall.
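For reference, the IOU between a predicted and a ground-truth box can be computed as below. This is a self-contained sketch; the (x1, y1, x2, y2) corner format is an assumption, since the paper does not specify one.

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou((10, 10, 50, 50), (30, 30, 70, 70)))  # overlapping boxes -> ~0.1429
```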
Model Evaluation.
To check the numeral recognition and categorization results of the presented approach, we have performed two types of evaluations, namely, the digit localization results and the class-wise performance. This analysis will assist the reader in assessing the HDR localization and classification performance.
Numeral Localization Results. An accurate HDR system must be able to correctly identify ROIs (numbers) of different types. To check this, we have conducted the following analysis. We have taken all the test images of the MNIST dataset and evaluated them on the trained EfficientDet-D4 model; a few samples are shown in Figure 6. The localization results in Figure 6 clearly show that the proposed approach can detect digits of several types efficiently. Moreover, the technique is robust to several image distortions, such as noise, blurring, and intensity alterations, and can locate digits despite size, angle, and position variations. We have also quantitatively measured the localization power of the employed framework using two standard metrics, namely the mAP and IOU scores. More precisely, the EfficientDet localized and classified the digits with mAP and IOU scores of 0.995 and 0.994, respectively, which demonstrates the effectiveness of the proposed solution.
Class-Wise Results.
To be effective, a numeral classification approach must be capable of differentiating the digits of different classes. Therefore, an analysis was performed to validate the class-wise classification performance of the presented technique. Several experiments were conducted to demonstrate the classification performance of the introduced work. Initially, we computed the class-wise precision, recall, F1-score, and error rate; the attained results are reported in Table 3. The values in Table 3 clearly show that our work is robust for numeral recognition. More precisely, the introduced approach attained an average precision, recall, and F1-score of 98.75%, 98.2%, and 98.45%, respectively, along with an average error rate of 1.55%.
To further examine the numeral categorization performance of the EfficientDet-D4 model, we have presented the class-wise accuracy values of the introduced method as bar graphs, as these provide better visualization of the data (Figure 7). More precisely, our approach achieved average accuracy values of 99.71%, 99.73%, 99.84%, 99.79%, 99.91%, 99.87%, 99.85%, 99.85%, 99.89%, and 99.85% for classes zero to nine, respectively, which shows the robustness of the introduced method. The obtained performance values clearly show that our method exhibits accurate results for all digit classes.
Moreover, we have plotted the confusion matrix for the proposed approach, as it assists in determining the recall rate of the model and its capability to recognize digits of several classes. The acquired values are shown in Figure 8, which clearly demonstrates the accuracy of the introduced methodology for numeral classification. More precisely, we obtained TPR rates of 0.960, 0.970, 0.980, 0.990, 0.983, 0.992, 0.992, 0.988, 0.988, and 0.983 for numerals zero to nine, respectively. It is quite visible from Figure 8 that the proposed model can correctly recognize and classify the digits; although a slight similarity exists between the digit 1 and 7 classes, both are still distinguishable.
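Given per-image true and predicted digit labels, the class-wise precision, recall, F1-score, and confusion matrix reported above can be reproduced with scikit-learn. The snippet below is an illustrative sketch on simulated labels; the arrays stand in for real model outputs.

```python
import numpy as np
from sklearn.metrics import classification_report, confusion_matrix

rng = np.random.default_rng(0)
y_true = rng.integers(0, 10, size=1000)      # simulated ground-truth digits
y_pred = y_true.copy()
flip = rng.random(1000) < 0.015              # inject roughly 1.5% errors
y_pred[flip] = rng.integers(0, 10, size=int(flip.sum()))

print(classification_report(y_true, y_pred, digits=4))  # per-class P/R/F1
cm = confusion_matrix(y_true, y_pred)                   # rows: true, cols: predicted
tpr = cm.diagonal() / cm.sum(axis=1)                    # per-class recall (TPR)
print(tpr)
```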
Based on the above evaluation results, it can be summarized that the EfficientDet-D4 model is robust for detecting and classifying digits into their respective classes. The major reason for the better classification results of the introduced technique is the representative key point extraction ability of EfficientDet-D4, which permits it to better recognize the numerals. Furthermore, the EfficientDet-D4 approach is less prone to overfitting the training data and can handle complex image transformations, which results in its effective performance.
4.3. Comparative Analysis with DL-Based Methods. We have designed an experiment to evaluate the introduced method against several DL-based approaches, namely RCNN, Fast-RCNN, Faster-RCNN, SSD, and YOLO. For the performance analysis, we used the mAP and accuracy metrics, as these are the standard measures employed by researchers to compute the classification results of object-detection methods. Moreover, we compared the execution times of all methods to show the effectiveness of our model. The comparative results are reported in Table 4, from which it can be seen that our approach outperforms the other methods with mAP, accuracy, and time values of 0.995, 99.83%, and 0.16 s, respectively. The Faster-RCNN approach shows comparable results with mAP and accuracy values of 0.993 and 99.78%, respectively, while the lowest performance is exhibited by the YOLO framework with mAP and accuracy values of 0.943 and 93.20%, respectively. More precisely, in terms of the mAP measure, the competing methods give an average value of 0.961, versus 0.995 for our method; thus, for mAP, the proposed approach gives a performance gain of 3.4%. Similarly, for the accuracy measure, the competing methods present an average of 96.29%, while the presented work shows 99.83%, a performance gain of 3.54%. Furthermore, we compared the execution time as well, and it is quite evident that our method shows the lowest execution time, 0.16 s, among all approaches. It can therefore be concluded that the EfficientDet approach gives a reliable and low-cost solution to numeral identification and classification. The basic reason for the better results of the introduced method is the robust key point extraction ability of the EfficientDet-D4 model, which handles complicated image transformations in a viable manner. Moreover, the one-stage digit identification and classification ability of the EfficientDet-D4 approach provides it with a computational advantage as well.
4.4. Comparative Analysis with ML-Based Methods. To further assess the efficiency of the introduced method, another experiment was performed to check the HDR results of the EfficientDet approach against ML-based classifiers. For this purpose, we have taken three well-known ML-based classifiers, namely KNN, DT, and SVM, and compared their results, as reported in [72,73], against our approach. The obtained results are shown in Table 5, from which it is quite clear that our method exhibits the highest classification performance, with an accuracy value of 99.83%. The KNN method shows the second-highest classification performance with 96.89% accuracy, while the lowest performance is shown by the DT classifier with an accuracy value of 86.60%. More precisely, the methods in [72,73] acquire an average accuracy value of 92.42%, versus 99.83% for the proposed method; hence, we have demonstrated an average performance gain of 7.41%. The reported results clearly show that the proposed solution exhibits better numeral detection accuracy than the ML classifiers because of its ability to avoid overfitting the framework to the training data.
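This kind of classical-baseline comparison is straightforward to reproduce with scikit-learn. The sketch below uses its small built-in 8x8 digits set as a stand-in for MNIST, so the absolute scores will differ from those in [72,73]; it only illustrates the experimental pattern.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

for name, clf in [("KNN", KNeighborsClassifier()),
                  ("DT", DecisionTreeClassifier(random_state=0)),
                  ("SVM", SVC())]:
    clf.fit(X_tr, y_tr)
    print(name, round(clf.score(X_te, y_te), 4))  # test accuracy per classifier
```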
4.5. Comparative Analysis with State of the Art. In this part, we have performed another experiment in which we have taken numerous recent methods from the literature that use the same database for numeral identification and categorization and compared our results against them. For an unbiased assessment, the average results of the proposed method are evaluated against the average performance values reported in these methods [74-78], and the obtained comparison is given in Table 6. Zhao and Liu [74] introduced a DL-based method named LeNet-5 to categorize handwritten numerals from input images and obtained a 98.1% accuracy value. Enriquez et al. [75] introduced a CNN approach for numeral recognition and achieved 98% accuracy. Similarly, in [76], a DL-based approach for HDR shows an average accuracy value of 95.7%. Beikmohammadi and Zahabi [77] proposed a hybrid CNN model to locate and categorize the digits with an accuracy of 99.80%. Similarly, a DL-based approach was used in [78] for numeral categorization and obtained an average accuracy value of 99.60%. Comparatively, the employed EfficientDet-D4 method gained a 99.83% accuracy value, which is the highest among all evaluated methods. In a more in-depth analysis, the comparative approaches showed an average accuracy value of 98.24%, versus 99.83% for our technique; hence, the proposed method gives a performance gain of 1.59%. The basic reason for the robust numeral classification results of the introduced framework is that the works presented in [74-78] use computationally intensive deep architectures for key point calculation, which results in model overfitting and also increases the computational burden, while the presented method uses EfficientNet-B4 as a base network, which is capable of computing a reliable set of sample key points while containing the computational complexity. Therefore, we can say that the EfficientNet-B4-based EfficientDet-D4 framework gives a robust and effective solution to digit classification.
4.6. Cross-Dataset Evaluation. A robust HDR system must be capable of detecting and classifying digits from unseen examples. To test the generalization power of the proposed technique, we have performed such an evaluation. For this purpose, we have taken another dataset, namely the USPS [25], which also contains handwritten digit samples from 0 to 9. We have trained the proposed approach on the MNIST database, while the USPS dataset is employed to test the technique. The obtained performance values are presented as a box plot (Figure 9), where the results for both the training and the test parts are summarized as quartiles, medians, and outliers. More precisely, we obtained average training and test accuracies of 99.30% and 99.10%, respectively, which demonstrates the robustness of the proposed technique in dealing with unseen examples as well.
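The cross-dataset protocol (train on MNIST, test on USPS) can be set up with torchvision's dataset loaders, as sketched below; resizing USPS from 16x16 to 28x28 puts both sets on a common input size. The preprocessing details are assumptions, not the paper's exact pipeline.

```python
from torchvision import datasets, transforms

tfm = transforms.Compose([transforms.Resize((28, 28)), transforms.ToTensor()])

train_set = datasets.MNIST(root="data", train=True, download=True, transform=tfm)
test_set = datasets.USPS(root="data", train=False, download=True, transform=tfm)

# Fit the detector on train_set only, then report accuracy on test_set to
# probe generalization to unseen writing styles.
print(len(train_set), len(test_set))  # 60000, 2007
```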
Conclusion
Accurate recognition of numerals from images plays a significant role in the domain of information processing. However, large variations in writing patterns and the presence of various sample distortions, such as noise, blurring, and intensity changes, complicate effective HDR. In this work, a reliable DL-based HDR system, namely EfficientDet-D4, is presented to resolve the existing issues of this domain. More precisely, input images are initially annotated to locate the positions of digits, and these annotations are then used to train the EfficientDet model to detect and categorize the digits. We have evaluated the presented approach on the complex MNIST dataset and attained an average accuracy value of 99.83%. We have confirmed through extensive experimentation that the presented work can efficiently recognize the numerals from the test samples and categorize them into 10 categories representing the numbers 0 to 9. Moreover, the approach is capable of accurately identifying and classifying the digits even under the occurrence of various postprocessing attacks, e.g., light and color variations, blurring, noise, and angle and size changes. Furthermore, a cross-dataset evaluation on the USPS dataset was also performed to show the efficacy of the proposed method on unseen cases. The evaluation results confirm that the introduced approach is competitive with modern techniques and can play a vital role in the area of information processing. Based on the computed results, we can say that this approach can play an important role in automated number plate recognition of vehicles for surveillance applications. Furthermore, this work has applications in optical character recognition to facilitate various daily life tasks, e.g., product price and receipt recognition. In future work, we plan to extend the proposed approach to other languages.
Figure 1: Flow diagram of the proposed technique.
Figure 2: Visual representation of samples from the MNIST dataset.
Figure 6: HDR localization results obtained with the proposed technique.
Figure 8: Confusion matrix of the proposed approach for HDR.
Figure 9: Cross-dataset evaluation results of the proposed technique.
Table 1: Comparison of existing approaches of handwritten digit recognition.
Table 2: Training parameters of the proposed solution.
Table 3: Class-wise performance results of the EfficientDet-D4 for HDR recognition.
Table 4: Comparative analysis with DL methods.
Table 5: Performance evaluation with ML-based methods.
Table 6: Performance evaluation of the proposed technique with state-of-the-art techniques.
"year": 2023,
"sha1": "3e4810ddcf19726e0149e8de34adcfb5cc7f4d90",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/js/2023/2753941.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "32f93bce6e5b69fe8a6847a2a1469b05a6157e2d",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
Membrane Fusion Is Induced by a Distinct Peptide Sequence of the Sea Urchin Fertilization Protein Bindin
Fertilization in the sea urchin is mediated by the membrane-associated acrosomal protein bindin, which plays a key role in the adhesion and fusion between sperm and egg. We have investigated the structure/function relationship of an 18-amino acid peptide fragment "B18," which represents the minimal membrane binding motif of the protein and resembles a putative fusion peptide. The peptide was found to mimic the behavior of its parent protein bindin with respect to (a) its high affinity for lipid bilayers, (b) the ability to aggregate and fuse vesicles, (c) the binding of Zn²⁺ by a histidine-rich motif, (d) the tendency to self-assemble, and (e), as indicated earlier, the adhesion to cell surface polysaccharides. Fluorescence and light scattering assays were used here to monitor peptide-induced lipid mixing, leakage, and aggregation of large unilamellar sphingomyelin/cholesterol vesicles. For these activities, B18 requires the presence of Zn²⁺ ions, with which it forms oligomeric complexes and assumes a partially α-helical conformation, as observed by circular dichroism. We conclude that aggregation and fusion involves a "trans-complex" between peptides on apposing vesicles that are connected by Zn²⁺ bridges.
Membrane fusion is a ubiquitous event in numerous intra- and intercellular processes, such as vesicular trafficking (1,2) and the infectious entry of viruses (3-5). It also constitutes the committing step that allows sperm and egg to merge their genetic material (6-9). Fertilization has traditionally been studied using sea urchin gametes, and much attention has focused on sperm proteins that become exocytosed upon contact with the egg jelly coat. The major acrosomal protein, bindin, is recognized as a key mediator of sperm-egg adhesion and fusion (10,11). Its species-specific binding to the egg receptor, presumably via interactions with sulfated polysaccharides (12,13), has been well documented in vivo and in vitro (14-16). Furthermore, the direct involvement of bindin in the fusion event between the membranes has been suggested from observations with lipid vesicles as model systems (17-21). To unravel the mechanisms underlying sperm-egg fusion and, in particular, to investigate the structure/function relationship of bindin in the overall process, reconstitution would be the approach of choice. However, structural analysis of bindin has been frustrated thus far, because in its native state the protein is extensively self-aggregated within the acrosome vesicle or it is closely associated with the sperm membrane.
Given the interaction of bindin with lipid membranes as well as cell surface carbohydrates, there is much evidence that the protein plays a dual functional role during fertilization. A similar multiple involvement in cell recognition (adhesion or penetration) and fusion has been proposed for other proteins, too, such as fertilin (PH-30) (6,22,23), abalone sperm proteins (8,24,25), and viral proteins (3-5). In many instances, the fusogenic activity of such a protein has been attributed to a short fusion peptide or hydrophobic patch, which could then be characterized in detail with regard to its membrane interactions and secondary structure (22, 23, 26-28). Here, we have identified the minimum membrane binding peptide of the sea urchin fertilization protein bindin, and we investigate its fusogenic and structural behavior in solution and on the membrane. Most experiments with this peptide are directed by the extensive knowledge about the interactions of the native parent protein bindin with its putative binding partners.
Previous work by Glabe and co-workers revealed that native and recombinant bindin binds peripherally to lipid vesicles, presumably as a monomolecular layer (17,18,20). Because the protein displays no preference for charged lipid head groups, its association appears to be mediated by hydrophobic interactions (18). An unusual feature is its specific affinity for membranes in the gel phase or enriched in cholesterol (17,29). Moreover, bindin is able to induce the fusion of lipid vesicles, which proceeds only slowly with dipalmitoylphosphatidylcholine/cholesterol but within seconds when sphingomyelin/cholesterol (SM/Chol)¹ is used (19,21). The enrichment of sphingomyelin and cholesterol in the outer plasma membrane appears to be physiologically significant because of their formation of detergent-insoluble patches (30). Sphingolipids have also been described as relevant for the fusion mechanism of viral proteins (31).
The functionally important interactions of the 24-kDa protein bindin with the membrane are attributed to its highly conserved central domain, consisting of 70-80 amino acids (14,17,20,21). By truncation experiments and using overlapping synthetic peptides of this region, we have located the minimal membrane binding motif "B18" (20,21). These 18 amino acids (LGLLLRHLRHHSNLLANI) are perfectly conserved among all known sea urchin species, and the sequence bears some resemblance to viral fusion peptides. Interestingly, the same region also appears to participate in receptor binding, and related peptide fragments have been shown to inhibit fertilization in vitro (32,33). Hence we reasoned that B18 represents an attractive model system to simulate the lipid-protein interactions during fertilization. To this end, we have used fluorescence assays to investigate whether this amphiphilic peptide is capable of inducing vesicle aggregation, membrane fusion, and destabilization, as monitored by lipid mixing and leakage. Specifically, we have examined whether these processes are affected by Zn²⁺, which is found in the native protein and presumably binds to the histidine-rich motif contained in the B18 peptide (34-36). In complementary circular dichroism experiments, structural changes of the peptide were monitored and correlated with its functional features. The results indicate that B18 may be regarded as an appropriate model for various aspects of lipid-protein interactions and membrane fusion during fertilization, because its behavior is in many respects comparable with that of the native protein bindin.
EXPERIMENTAL PROCEDURES
Synthetic Peptide-The peptide B18 (LGLLLRHLRHHSNLLANI), numbered here in terms of amino acids 103-120 of mature bindin from Strongylocentrotus purpuratus, was synthesized semi-automatically using solid phase resin and Fmoc (N-(9-fluorenyl)methoxycarbonyl) protecting groups (37). The crude peptide was purified by reverse phase high pressure liquid chromatography on a water/acetonitrile gradient with 0.1% trifluoroacetic acid. The purity and mass of the product (2090 g/mol) were checked by electrospray mass spectrometry, and the amount of lyophilized peptide was determined gravimetrically. Stock solutions were prepared by dissolving B18 at typically 1 mM in water, giving a pH of approximately 4, where it is fully soluble and does not self-aggregate, as otherwise slowly occurs at pH > 7.
Buffers-Buffers for fusion, leakage, and aggregation assays were made with 10 mM HEPES (usually pH 7.4), Bis-tris propane, or acetate. They contained 140 mM NaCl unless salt effects were to be examined. CD measurements were carried out without salt to avoid distortion in the far-UV. Stock solutions of ZnCl₂, CuCl₂, CoCl₂, CaCl₂, MgCl₂ (or the corresponding sulfate salts for CD), and EDTA were prepared at 5-50 mM in ultrapure water.
Vesicle Preparation-Bovine brain SM and the fluorescently labeled phospholipids N-(7-nitrobenz-2-oxa-1,3-diazol-4-yl)phosphatidylethanolamine (N-NBD-PE) and N-(lissamine rhodamine B sulfonyl)phosphatidylethanolamine (N-Rh-PE) were obtained from Avanti Polar Lipids. Chol was purchased from Sigma, and the fluorescent probes 8-aminonaphthalene-1,3,6-trisulfonic acid sodium salt (ANTS) and p-xylenebis(pyridinium)bromide (DPX) were obtained from Molecular Probes (Leiden, The Netherlands). Liposomes were prepared by codissolving 80/20 SM/Chol (mol/mol) in CHCl₃, together with 0.8 mol % of each of the fluorescent lipids when required for fusion assays. The mixture was dried under N₂ and resuspended in buffer at a final lipid concentration of approximately 4 mM by vortexing, followed by 10 freeze-thaw cycles using 50°C warm water (38). A uniform population of large unilamellar vesicles (LUV) was obtained by repeated high pressure extrusion (Lipex extruder from Biomembranes, Vancouver, Canada) of the liposomes through a polycarbonate Unipore membrane (pore size, 100 nm; Millipore) at a temperature above the gel-to-fluid phase transition (45°C). For contents leakage experiments (39), solutions of ANTS (25 mM in 90 mM NaCl, 10 mM Tris, adjusted to pH 7.4) and DPX (90 mM in 50 mM NaCl, 10 mM Tris) were mixed at a 1:1 ratio. The combined solution was added to the dried lipids, subjected to 10 freeze-thaw cycles, and then extruded, keeping the material in the dark. To remove unencapsulated dye, the vesicles were washed right before the experiment by gel filtration on a Sephadex G75 column using a 150 mM HEPES/NaCl elution buffer, which balances the internal vesicle osmolarity. The exact lipid concentration of each LUV stock was determined by phosphate analysis (40).
Lipid Mixing Assay-Peptide-induced lipid mixing between SM/Chol vesicles was followed by monitoring the relief of fluorescence resonance energy transfer between NBD-PE and Rh-PE (9,41). The time dependence of fluorescence was monitored with 1-s resolution on a spectrofluorimeter (SLM, Aminco Bowman Series 2 Luminescence, Urbana) at excitation and emission wavelengths of 465 and 530 nm, respectively. The temperature was maintained at 30°C (unless stated otherwise) in a thermostated cuvette holder equipped with a magnetic stirrer. The labeled vesicles were suspended in a final incubation volume of 2 ml buffer together with a 3-fold excess of nonlabeled vesicles, giving a total lipid concentration of 200 µM. Fusion between the vesicles was followed upon adding the peptide, metal ions, or EDTA from their stock solutions with a Hamilton microsyringe. Most experiments were carried out with a peptide concentration of 5 or 10 µM and a Zn²⁺ concentration of 40 µM to induce fusion or 60 µM to induce aggregation. The fluorescence scale was calibrated by setting the base line of the initial background signal to 0%. Infinite probe dilution, corresponding to 100% fluorescence, was determined after disrupting the vesicles in 0.5% (v/v) Triton X-100. The initial rate of fusion was analyzed by curve-fitting the signal with the Enzfitter software and expressed as the percentage of fluorescence increase (relative to the level obtained upon infinite dilution) per second (% max/s).
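The normalization and rate extraction described above amount to a simple calculation; the following sketch (illustrative only, not the original Enzfitter analysis) shows one way to carry it out on a recorded trace.

```python
import numpy as np

def percent_fluorescence(trace, f0, f_triton):
    """Scale a raw fluorescence trace so that the pre-addition baseline is 0%
    and the Triton X-100 (infinite probe dilution) level is 100%."""
    return 100.0 * (np.asarray(trace, dtype=float) - f0) / (f_triton - f0)

def initial_rate(percent, dt=1.0, n_points=5):
    """Initial fusion rate in % max/s from a linear fit to the first points
    after peptide addition (sampled every dt seconds)."""
    t = np.arange(n_points) * dt
    slope, _intercept = np.polyfit(t, percent[:n_points], 1)
    return slope
```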
Leakage of Vesicle Contents-The release of aqueous contents from the LUVs was monitored by the fluorescence dequenching of ANTS by DPX (42), both entrapped in the SM/Chol vesicles as described above. For resonance energy transfer measurements, the ANTS excitation was set at 284 nm, and emission was set at 530 nm. Sample concentrations, experimental conditions, and data analysis were the same as in the lipid mixing assay above.
Vesicle Aggregation Assay-Aggregation of the LUVs was monitored by turbidity measurements. The absorbance at 400 nm was recorded on a thermostated Hamamatsu spectrophotometer, using conditions as for lipid mixing and leakage. The rates of aggregation were calculated from the initial slope of the curves as the change in absorbance per minute (ΔA/min). The distributions of the diameters of the initial LUVs and of the aggregated/fused vesicles were measured on a Nicomp particle size analyzer (model 370).
Circular Dichroism Spectroscopy-CD spectra were recorded with a Jasco 710 spectropolarimeter over the wavelength range from 185 to 250 nm (43). The temperature was maintained at 5°C, the scan rate was 50 nm/min, the step resolution was 0.5 nm, the response time was 1 s, and 5-10 runs were accumulated per spectrum. The peptide was measured at pH 7.5, using different concentrations of 5 µM B18 in 0.5 mM HEPES, 50 µM B18 in 5 mM HEPES, or 500 µM B18 in 50 mM HEPES in a 10-, 1-, or 0.1-mm cuvette, respectively.
Electrospray Mass Spectrometry-Noncovalent interactions of the peptide with various metal ions were investigated on samples of 50 µM B18 in 250 µM NH₄HCO₃ buffer at pH 9.0. Metal ions were added at a ratio of 1:1 or 8:1, respectively, to B18. In view of the tendency of B18 to aggregate at this pH, fresh samples were prepared for each measurement.

RESULTS

Interaction of the Peptide with SM/Chol Vesicles-As shown in Fig. 1, the peptide B18 induces vesicle aggregation and lipid mixing when added to large unilamellar vesicles consisting of SM/Chol (80/20). For these effects, Zn²⁺ must be included in the incubation medium. This divalent cation is known to be contained in the native bindin protein under physiological conditions (11,32). Furthermore, the data show that vesicle leakage occurs upon peptide binding, which interestingly does not require the presence of Zn²⁺, unlike aggregation and fusion.
It is illustrated in the top row of Fig. 1 that the peptide alone does not lead to vesicle aggregation (Fig. 1A), but turbidity does increase when B18 is added in the presence of Zn²⁺ (Fig. 1B). By addition of EDTA, vesicle aggregation is arrested and only partially reversed (approximately 20% decrease in turbidity; not shown). This indicates the formation of larger structures, presumably because of B18-Zn²⁺-induced vesicle fusion. The occurrence of fusion is further supported by the data presented in Fig. 1C, which illustrates the particle size distribution before and after exposure to B18 and Zn²⁺. The average diameter of the SM/Chol vesicles (or vesicular clusters) increases from approximately 150 nm by more than an order of magnitude to 3000 nm. The subsequent addition of EDTA leads to a final size of around 1500 nm.
Membrane fusion is accompanied by the mixing of membrane lipids. As shown in the middle row of Fig. 1, it is apparent that neither the peptide on its own (Fig. 1D) nor Zn²⁺ induces any lipid mixing. However, when both B18 and Zn²⁺ are present, rapid lipid mixing is observed (Fig. 1E). This process is instantaneously interrupted upon chelation of Zn²⁺ by EDTA (Fig. 1F). The fusion data, monitored by the NBD/Rh-PE lipid mixing assay, thus show the same Zn²⁺ dependence as observed for aggregation. Electron microscopy was used to confirm the formation of large fused vesicles.² The requirement of B18 for reasonably low concentrations of Zn²⁺ to induce vesicle aggregation and lipid mixing suggests that the peptide becomes "activated" by the cation. Little difference is observed when reversing the order of addition to the vesicles, but prior incubation of B18 with Zn²⁺ decreases their combined activity. This observation suggests that the peptide binds specifically to Zn²⁺, which may induce a structural change or cause the peptide monomers to aggregate. In the native parent protein, the Zn²⁺ ion is presumably bound via the same histidine-rich motif as in the B18 peptide, because there are no further conserved histidines or cysteines in the remaining parts of the bindin sequence (10,11,32).
Binding of the peptide to the SM/Chol vesicles evidently destabilizes the bilayer, as shown by the release and fluorescence dequenching of ANTS/DPX (Fig. 1, bottom row). Interestingly, addition of B18 on its own already induces a substantial release of contents (Fig. 1G), which indicates a distinct affinity of the water-soluble peptide for the uncharged membrane. Furthermore, this observation implies that B18 does not require exogenously added Zn²⁺ to interact with the bilayer as such, although the presence of Zn²⁺ further enhances the rate and extent of leakage (Fig. 1H). In fact, the data reveal that the metal ion must fulfill a structural role when binding to B18, conveying functional properties to the peptide by triggering vesicle aggregation and fusion. These properties are apparently not expressed when B18 is interacting with the bilayer in a Zn²⁺-independent mode. It is in this context relevant to note that EDTA is not merely capable of chelating Zn²⁺. Unexpectedly, the presence of EDTA was observed to abolish the Zn²⁺-independent mode of peptide-induced leakage (Fig. 1I). This observation implies that EDTA interacts directly with B18 and thereby prevents it from destabilizing the membrane.
The pH and temperature dependence of peptide-Zn²⁺-induced fusion and aggregation of SM/Chol vesicles is shown in Fig. 2. The observed pH optimum near 7.6 (Fig. 2A) supports the notion that the histidine residues in B18 coordinate the metal ion. These side chains are deprotonated and available for Zn²⁺ binding when the pH is raised beyond the typical histidine pKa of around 6 to 7. At an even higher pH above 8, on the other hand, the low solubility of Zn(OH)₂ becomes the limiting factor for complex formation, and the aggregation and fusion activities are seen to decrease again accordingly.
A temperature optimum at around 30°C is observed for both lipid mixing and aggregation, as demonstrated in Fig. 2B. By differential scanning calorimetry we found that the pure (mixed chain) brain sphingomyelin has a relatively broad transition, with an onset near 30°C, a maximum at 40°C, and an end point around 50°C.³ When mixed with 20% cholesterol, the differential scanning calorimetry signal is further broadened, and the onset shifted to a lower temperature. Nevertheless, it appears that in the mixed SM/Chol system, aggregation and fusion display their optimum temperature near the onset of the original melting point of pure SM. The possibility of a temperature-induced conformational change of the peptide is unlikely, according to our CD measurements in solution (see below).
Interaction of the Peptide with Zn²⁺-To further define the role of Zn²⁺ in the B18-induced vesicle aggregation and fusion process, its concentration dependence was examined in Fig. 3. With increasing amounts of Zn²⁺, the rates of fusion (Fig. 3A) and aggregation (Fig. 3B) are initially found to increase, as expected. After passing through a maximum, however, the activity of the peptide decreases again, suggesting that it becomes blocked by excess Zn²⁺. To check whether the interaction between Zn²⁺ and B18 is influenced by the law of mass action, we measured the Zn²⁺ dependence of lipid mixing and aggregation at 5 and 10 µM peptide concentration, respectively (Fig. 3, A and B). Within error limits, the amount of Zn²⁺ required for a maximum response is independent of peptide concentration. Note, however, that more Zn²⁺ is required for optimum aggregation than for optimum fusion, suggesting that the Zn²⁺-B18 complexes involved in aggregation and fusion are not necessarily identical. Fig. 4 summarizes the lipid mixing and aggregation kinetics as a function of peptide concentration. According to Fig. 3, where the optimum Zn²⁺ concentration was found to be independent of peptide concentration, it is justified here to use a constant aliquot of Zn²⁺ to trigger fusion or aggregation (40 or 60 µM, respectively). Lipid mixing and aggregation kinetics are displayed in the same plot on different scales (Fig. 4A) to allow a comparison of the two effects. Both profiles show essentially the same dependence on B18 concentration, suggesting that the membrane becomes saturated with peptide in an approximately linear fashion. Saturation occurs at a lipid/peptide ratio of around 15:1. Assuming that essentially all peptide binds to the bilayer, the available surface area on the outer vesicle leaflet would be approximately 300 Å²/peptide, based on a lipid area of 40 Å². The molecular area of the 18-amino acid peptide is also close to 300 Å² when modeled either as a β-sheet or an α-helix. This good correlation suggests that the bilayer surface may become completely covered by a monomolecular layer of peptide, at which point the maximum rate of aggregation and lipid mixing is reached.
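As a quick consistency check of this coverage estimate, assuming (as is usual for large unilamellar vesicles) that roughly half of the lipid resides in the outer leaflet:

(15 lipids/peptide) × (1/2, outer leaflet) × (40 Å²/lipid) ≈ 300 Å²/peptide,

which indeed matches the footprint of an 18-residue peptide lying flat on the bilayer.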
The concentration dependence of leakage in the presence of Zn²⁺ is included in Fig. 4A as a dotted line. Compared with lipid mixing, contents release occurs about twice as fast, and the maximum effect is already reached at a much lower lipid/peptide ratio. To resolve the concentration dependence in more detail, the leakage data are illustrated on an expanded scale in Fig. 4B. Judging by the first few data points, a sigmoidal character may be attributed to these curves, which would be indicative of a cooperative binding event (28). This interpretation, however, does not imply the formation of a transmembrane pore but rather that a limited number of peptides are sufficient to destabilize a vesicle such that it discharges its entire load.
Interaction of the Peptide with Other Ions-The ability of Zn²⁺ to trigger vesicle aggregation and lipid mixing is attributed to its interaction with the histidine side chains of the peptide. To further define the specificity and the functional consequences of this complex, electrospray mass spectrometry was used to check whether the peptide could also bind to any other metal ions, such as Cu²⁺, Co²⁺, Mg²⁺, Ca²⁺, and Na⁺. Only the transition elements gave a positive signal at the combined mass of the peptide plus metal ion, which reverted to the peptide mass alone under acidic denaturing conditions. This confirms that, next to Zn²⁺, the peptide can also form a complex with Cu²⁺ and Co²⁺. However, in contrast to Zn²⁺, neither Cu²⁺ nor Co²⁺ was able to stimulate the peptide to induce vesicle aggregation or fusion. More significant still is the observation that the leakage caused by the peptide per se (Fig. 1G) is suppressed in the presence of either Cu²⁺ or Co²⁺ (data not shown). Therefore, binding of Cu²⁺ or Co²⁺ has different structural and functional consequences on B18 than the binding of Zn²⁺.
To quantify the inhibitory effects of Cu²⁺ and Co²⁺, competition experiments were carried out using vesicles that were triggered to fuse by the addition of 10 µM peptide in the presence of 40 µM Zn²⁺. Fig. 5A illustrates how the rate of lipid mixing becomes progressively reduced when an increasing amount of Cu²⁺ or Co²⁺ is present in the incubation solution. Cu²⁺ is capable of suppressing fusion almost completely, its effect being much more pronounced than that of Co²⁺. It is essential at this point, however, to recall from Fig. 3 that an excess of Zn²⁺ also reduces the rate of fusion, and the relevant Zn²⁺ data from the original graph are therefore included in Fig. 5A as a dotted line. Based on these data, it appears that Cu²⁺ binds the peptide competitively and more strongly than Zn²⁺.
The metal ion chelator EDTA, originally used in control experiments to quench fusion and aggregation, was found to suppress leakage by interacting directly with the peptide (Fig. 1I). We therefore carried out a series of lipid mixing and aggregation assays in the presence of increasing amounts of EDTA, whereby fusion is triggered as usual by the addition of B18 and Zn²⁺. Fig. 5B shows that the rate of aggregation decreases approximately linearly with EDTA concentration, as expected for a successive removal of free Zn²⁺ ions from the solution. The rate of lipid mixing, on the other hand, decreases more dramatically and is completely suppressed already at a much lower EDTA concentration. Therefore, it appears that EDTA interferes with the formation of the B18-Zn²⁺ fusion complex, for which more strict structural criteria have to be met than for mere vesicle aggregation. The most likely side chains on the peptide to interact with EDTA are the two arginine residues.
Structural Consequences of Ion Binding-For vesicle aggregation and fusion to occur, a specific Zn²⁺-mediated assembly of B18 has to take place, as documented above. To determine the structural features of this complex, the conformation of the peptide and its tendency to oligomerize were investigated by circular dichroism. Because sphingomyelin gives a pronounced CD signal at wavelengths below 220 nm, measurements were carried out with the peptide alone in aqueous solution in the absence of any lipid. The peptide tends to self-aggregate slowly when kept as a millimolar solution above pH 7, but samples were prepared freshly from an acidic stock. Under these conditions, B18 is well soluble and largely unstructured at pH 7.5, even at a concentration of 500 µM. CD measurements revealed a slight increase in "secondary structure" over the pH range from 3.0 to 9.0, which amounts to less than 10% as judged by the signal amplitude at 222 nm (data not shown).
A series of Zn²⁺ titrations was carried out at pH 7.5, using different peptide concentrations to assess oligomerization effects. At 5 µM peptide concentration, the addition of Zn²⁺ has hardly any effect on its random coil conformation (data not shown). At 50 µM B18, a weak precipitation of the peptide sets in with increasing amounts of Zn²⁺, as judged by the decrease in signal intensity because of light scattering (data not shown). At an even higher peptide concentration of 500 µM B18, however, substantial structural changes are revealed in the CD spectra, which are summarized in Fig. 6 (A and B). Initially, the addition of about half an equivalent of ions (250 µM Zn²⁺; note the stoichiometry w.r.t. 500 µM B18) induces a partially α-helical peptide conformation, according to the double minimum in the line shape at 207 nm and close to 222 nm (Fig. 6A) (43). The addition of further Zn²⁺ then leads to the precipitation of B18, as seen in Fig. 6B (representing the continuation of the titration in Fig. 6A). The visibly cloudy precipitate could be redissolved by adding EDTA or by acidifying the solution. This CD analysis suggests (and a more detailed discussion will be published elsewhere)⁴ that initially a soluble peptide-Zn²⁺ complex assembles with a 2:1 stoichiometry of B18:Zn. Further addition of Zn²⁺ then leads to the formation of a 1:1 precipitate of (B18-Zn²⁺)n. Both in the soluble and precipitated Zn²⁺ complexes, B18 has a partially helical structure. It appears that Zn²⁺ connects the peptides via their histidine residues, and the resulting conformational change may expose some hydrophobic regions that promote aggregation further.

Fig. 6: CD titration of B18 with Zn²⁺ (panels A and B) and Cu²⁺ (panel C). The addition of Zn²⁺ initially leads to the formation of a partially α-helical B18-Zn²⁺ complex (panel A), followed by precipitation (panel B). The binding of Cu²⁺, on the other hand, induces a β-turn in the peptide (panel C) and is also followed by precipitation (not shown).
Whereas Zn²⁺ was shown to activate the peptide, Cu²⁺ binds competitively and causes an inhibition of vesicle leakage, aggregation, and fusion. Fig. 6C illustrates the changes in the spectral line shape when Cu²⁺ is added to a 500 µM peptide solution, suggesting that a local β-turn is formed (43). Precipitation starts to set in at higher Cu²⁺ concentrations but with a different overall line shape than the Zn²⁺ precipitate (data not shown). Similar to the inhibitory effect of Cu²⁺, we also observed that EDTA prevents the peptide from destabilizing the membrane, possibly by binding to the two arginine side chains. The structural change induced by EDTA is weak, and the resulting CD line shape resembles that observed with Cu²⁺, again reminiscent of a β-turn (data not shown). Hence, it appears that the binding of Cu²⁺ or EDTA to certain peptide side chains induces a different conformation than Zn²⁺, thus rendering the peptide inactive toward the membrane on the time scale of the fusion measurements.

DISCUSSION

We have demonstrated that the peptide B18 is able to induce aggregation and fusion of uncharged lipid vesicles, thus mimicking the function of its parent protein bindin. The native sea urchin acrosomal sperm protein (236 amino acids) is supposedly involved in sperm-egg adhesion as well as membrane fusion during fertilization (10,11), and its interactions with lipid vesicles have been well characterized (17-20). Here, we describe some remarkably similar interactions of the 18-amino acid peptide B18, which may be relevant for the peripheral anchoring of bindin onto the acrosomal membrane and which may even play a role in the fusion event between sperm and oocyte. Both the peptide and the protein display a high affinity toward SM/Chol vesicles (21), which may represent the acrosomal sperm membrane and possibly the egg membrane. Like bindin, B18 is also able to trigger the aggregation and fusion of artificial model membranes (19,21). This functional property of the peptide is exclusively expressed in a Zn²⁺-dependent manner. Similarly, the native fertilization protein is known to contain one equivalent of this particular cation (32).
To gain further insight into the mechanism of fusion, multiple interactions need to be taken into account between all the participating molecules in this model system, namely the B18 peptide, Zn²⁺ cations, and SM/Chol vesicles. First, the membrane binding mode of the peptide per se requires some attention. We found that B18 destabilizes the bilayer and causes substantial leakage from the large unilamellar vesicles (Fig. 1G). The high affinity for the uncharged lipid must be attributed to hydrophobic interactions, rather than an initial long range electrostatic attraction. Although the amphiphilic peptide carries two positively charged arginines in its center, many hydrophobic side chains are clustered at each end of the molecule (Fig. 7A). There is some indication that leakage may proceed as a cooperative event (Fig. 4B) (28).
Upon binding to the lipid vesicles, the peptide is able to trigger their aggregation and fusion, but for these activities it needs to be stimulated by Zn²⁺ (Fig. 1E). Before evaluating the ternary membrane system, first consider the peptide-Zn²⁺ interactions in the absence of any lipid vesicles. Our CD analysis showed that Zn²⁺ induces a partially α-helical conformation of B18 and leads to the formation of oligomeric metallo-peptide complexes. The coordination must involve the histidine residues of the motif HLRHH, but their spacing is too close to allow all three side chains to bind simultaneously to a single Zn²⁺ ion. Therefore, we suggest by analogy with metal binding sites in other proteins that initially only the first (His109) and the last histidine (His113) bind a Zn²⁺ ion on one face of the peptide, as schematically illustrated in Fig. 7A. This picture is implicated by the fact that positions i and i + 4 are suitably spaced to induce the α-helical structure observed (Fig. 6A), i.e. like a zinc finger. In this particular conformation, the leucine side chain Leu110 would be exposed at the helix surface, which may force it to insert into the lipid bilayer (when present) or seek any other available hydrophobic environment. Our analysis of the peptide-Zn²⁺ binding is consistent with an assembly of soluble dimeric complexes, which is then followed by further oligomerization and precipitation. The initial 2:1 stoichiometry supports the picture that a dimeric B18-Zn²⁺-B18 complex is assembled around a central ion, involving His109 and His113 on each peptide. In the presence of excess Zn²⁺, it is conceivable that the remaining histidine (His112) participates in further Zn²⁺ bridges, which link up the dimeric complexes and lead to the formation of a 1:1 precipitate of (B18-Zn²⁺)n.
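The residue numbering invoked here can be verified directly from the B18 sequence; the short script below (illustrative only) lists the histidine positions and their spacings.

```python
b18 = "LGLLLRHLRHHSNLLANI"  # residues 103-120 of mature bindin

# Positions of the histidines within the HLRHH motif.
his = [i for i, aa in enumerate(b18, start=103) if aa == "H"]
print(his)              # [109, 112, 113]
print(his[2] - his[0])  # 4: His109/His113 are spaced i, i+4 (one helix face)
print(his[1] - his[0])  # 3: His109/His112 are spaced i, i+3 (beta-turn-like site)
```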
A key step in vesicle aggregation and fusion must be the assembly process or the resulting molecular conformation of the peptide-Zn²⁺ oligomers in the presence of the membrane. Complex formation in solution was found to be favored only at high peptide concentration (500 µM B18), whereas vesicle fusion was accomplished with much lower amounts (typically 10 µM). Apparently, the membrane recruits the water-soluble peptides from the bulk solution, and the high local concentration promotes their self-assembly with Zn²⁺ either near the vesicle surface or once they are partially immersed in the bilayer. As indicated above (Fig. 1H), the peptide interacts with the membrane almost instantaneously, and leakage is even more pronounced in the presence of Zn²⁺. The binding of Zn²⁺ appears to promote a fusogenic peptide conformation, possibly by crosslinking the monomers to one another. By analogy to the mechanism of Ca²⁺-induced fusion of acidic phospholipid vesicles (41), we suggest that a so-called "trans-complex" may be formed between two apposing membranes. As illustrated in Fig. 7B, a central Zn²⁺ ion may be complexed from either side by two B18 molecules that are associated with separate vesicles. Part of the function of such a membrane-bound complex would be to mediate vesicle aggregation. The subsequent fusion process will then be facilitated through the concerted destabilization of the bilayers by the hydrophobic side chains. In Fig. 7B we have drawn the peptide in the Zn²⁺ complex with a partially helical structure, based on the CD data in solution. On the other hand, we have no information about its conformation when bound to the membrane on its own without Zn²⁺. Neither does this drawing take into account the possibility that Zn²⁺ bridges may also form between peptides in the plane of the membrane rather than between apposing vesicles.
Various additional observations underscore the specificity and subtlety of the Zn²⁺-B18 complex formed, which is involved in the actual fusion step. At a fixed peptide concentration, titration experiments demonstrate that fusion and aggregation are inhibited by excess Zn²⁺ (Fig. 3). Similarly, preincubation of B18 with Zn²⁺ reduces their combined fusion activity. We thus conclude that the dimeric B18-Zn²⁺-B18 complex is the active fusogenic agent, whereas further oligomerization deteriorates its potency. The saturation of all histidine residues with Zn²⁺ or the formation of extended oligomeric chains may sterically interfere with the membrane fusion process. Apparently, an excess of Zn²⁺ has a less perturbing effect on aggregation than on fusion, and vesicular aggregates as part of the fusion complex could be dispersed again with EDTA (Fig. 1C).
When the histidine residues are deprotonated, the peptide can bind to various transition metal ions, which eventually leads to precipitation. B18 also tends to aggregate slowly by itself in solution. Yet, peptide aggregation or complex formation per se do not provide the molecular specificity or the necessary kinetics required for membrane fusion. In contrast to Zn²⁺, the addition of Cu²⁺ or Co²⁺ to B18 does not induce any vesicle aggregation or fusion. On the contrary, Cu²⁺ and Co²⁺ compete rapidly and efficiently with Zn²⁺, and their mere presence in the incubation mix inhibits the Zn²⁺-induced membrane fusion (Fig. 5A). In solution, the peptide is folded by Cu²⁺ into a local β-turn (Fig. 6C). We therefore suggest that the initial binding site may be different for Cu²⁺ than for Zn²⁺. It very likely involves the histidine pair His109 and His112, whose spacing i and i + 3 is characteristic of metal binding sites with a β-turn (Fig. 7A) (34-36).
A similar conformational or steric block appears to be the reason why EDTA prevents the interaction of B18 with the membrane (Fig. 1I) and actively inhibits fusion (Fig. 5B). A bidentate complex between EDTA and the two arginine side chains (Arg108 and Arg111) would be entropically favored and energetically stabilized by hydrogen-bonded interactions between the guanido and carboxyl groups (Fig. 7A). Such a binding mode was in fact proposed to explain the adhesion of bindin to the sulfate esters of certain cell surface polysaccharides on the putative sea urchin bindin receptor (12,13). Remarkably, a nonapeptide (LRHLRHHSN), derived from B18 with the same arginine/histidine motif, was shown to be a potent inhibitor of fertilization in vitro, and it expressed its optimum binding affinity only in the presence of Zn²⁺ (32). These two observations, namely (a) that B18 requires Zn²⁺ to trigger membrane fusion and (b) that the related nonapeptide requires Zn²⁺ to bind to the sulfate groups of cell surface carbohydrates, emphasize the specific structural role of this ion to promote an active local conformation in the peptide backbone. The function of the conserved protein domain around the sequence of B18 thus appears to involve the binding of several partners, i.e. the Zn²⁺ cation, the acrosomal membrane, and the egg cell receptor.
Finally, it is remarkable that fusion occurs at all with the SM/Chol model membranes, given that they are not in the fluid phase but in a liquid ordered state. A similar gel phase preference has also been reported for the vesicle binding and fusion activity of the native fertilization protein with other lipids (17,29). A broad transition temperature range was determined for the mixed SM/Chol system used here, with a maximum at 40°C. Nevertheless, the optimum for B18-induced fusion coincides with the onset of the phase transition of pure SM around 30°C (Fig. 2B). Accordingly, it is tempting to suggest that B18-mediated fusion may be accomplished via its interaction with locally enriched SM domains in the mixed SM/Chol bilayers (30,31). Because the lipid packing is strongly perturbed during the melting process, this would favor any local or temporary peptide penetration. In line with numerous previous studies concerned with structural features of fusogenic synthetic or natural peptides, penetration into the bilayer is particularly facilitated by a helical structure (26,44). Indeed, B18 tends to assume a partially α-helical conformation in the presence of Zn²⁺, at least in the aqueous environment where we could study complex formation directly by CD. Peptide self-assembly as a mechanism for membrane perturbation and fusion has also been described in the literature, when it involves a β-sheet structure (22,23,28). Oligomerization is in fact a recognized feature in the fusion mechanism of viral proteins, which may even involve the hydrophobic fusion peptides once they get exposed (3-5, 27). Here, we have described a Zn²⁺-mediated self-assembly of B18, which correlates with its fusogenic activity.
In summary, the minimum membrane-binding peptide B18, derived from the sperm protein bindin, represents an attractive model system to study lipid-protein interactions during fertilization. Membrane binding, adhesion to sulfated polysaccharides, and self-association appear to be regulated by the formation of specific metallo-complexes, which in turn determine the local protein conformation. The functionality of the full size protein will surely depend on numerous other factors that are not accessible by this model. Fusion between sperm and egg, for instance, is presumably a nonleaky process, but the peptide induces substantial perturbations on the membrane. Neither can the mechanism of species-specific recognition be addressed using the short conserved B18 sequence. Nevertheless, our studies reveal the very likely involvement of this peptide in membrane binding. Whether it acts as a genuine fusion peptide or simply serves as a peripheral membrane anchor remains to be established. The possibility of mimicking at least some functional aspects of bindin by a simple peptide will allow us to obtain more detailed structural and functional insight into its role in fertilization.
"year": 1998,
"sha1": "0a4c174eaa926a28becad66b112b95c2780bc9d8",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/273/27/16748.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "2a510d91734a5ca00d066463279a751a88d5b0e2",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
Baicalin alleviates collagen-induced arthritis and suppresses TLR2/MYD88/NF-κB p65 signaling in rats and HFLS-RAs
Baicalin is a flavonoid isolated from the root of Scutellaria baicalensis with anti-inflammatory, antioxidant and antiapoptotic pharmacological properties. However, the therapeutic effect of baicalin on rheumatoid arthritis (RA) is not completely understood. The present study aimed to explore the therapeutic potential and mechanisms underlying baicalin in collagen-induced arthritis (CIA) model rats. CIA was induced in male SD rats. The hind paw thickness and severity of joint injury were monitored to assess the onset of arthritis. At 28 days after the initial immunization, different doses of baicalin were administered once daily via oral gavage for 40 days. The radiologic and pathological alterations were examined using X-ray, and hematoxylin and eosin staining, respectively. ELISA was employed to measure the serum levels of proinflammatory cytokines. Reverse transcription-quantitative PCR and western blotting were conducted to determine the expression of toll-like receptor (TLR)2, myeloid differentiation factor 88 (MYD88) and NF-κB p65. Baicalin treatment noticeably alleviated radiographic and histologic abnormalities in the hind paw joints of CIA model rats in a dose-dependent manner. The serum levels of proinflammatory cytokines were significantly decreased in baicalin-treated CIA model rats compared with vehicle-treated CIA model rats. The mRNA expression levels of TLR2 and MYD88, as well as the protein expression levels of TLR2, MYD88 and NF-κB p65 were significantly decreased by baicalin treatment in the synovial tissue of CIA model rats and human RA fibroblast-like synoviocytes. The results suggested that baicalin may exert a beneficial effect on CIA, which may be mediated by inhibiting the TLR2/MYD88/NF-κB signaling pathway.
Introduction
Rheumatoid arthritis (RA) is a chronic autoimmune disease characterized by swelling, pain, stiffness and deformity of peripheral joints, which affects 0.5-1% of the adult population worldwide (1). Although a variety of antirheumatic drugs, such as cytokine antagonists, and B cell depletion and T cell co-stimulation blockers, display clinical efficacy for the treatment of RA, the life expectancy in patients with RA is 10-15 years shorter compared with the general population (2). Therefore, developing novel and more potent therapeutic agents for RA is important.
RA is an inflammatory condition that primarily affects the small synovial joints of the hands and feet (3). The main pathological features of RA include synovial hyperplasia and the secretion of proinflammatory cytokines, such as tumor necrosis factor-α (TNF-α), interleukin (IL)-1, IL-6 and IL-8, in the affected joints (4,5). Activated fibroblast-like synoviocytes (FLSs) serve a central role in RA pathogenesis by producing inflammatory cytokines, chemokines and proteinases that degrade the extracellular matrix (4,6). A number of proinflammatory cytokines, such as TNF-α, IL-1 and IL-6, are involved in the pathogenesis of RA, serving as therapeutic targets for the development of novel drugs against RA (7).
Toll-like receptors (TLRs), a family of type I transmembrane glycoproteins, have been implicated in the inflammatory and immune responses in RA. When exposed to an immunogenic stimulus, cells within the joint display increased TLR expression and increased expression of the corresponding ligands, such as bacterial peptidoglycan and heat shock protein 60 (8), which triggers rapid expression of proinflammatory cytokines and chemokines that orchestrate immune responses, leading to inflammation and damage to the joints in patients with RA (9). Previous studies have highlighted the importance of TLR2 in RA via in vitro systems and animal models (10,11). Highly expressed TLR2 in RA synovial tissue-lining macrophages and fibroblasts heterodimerizes with TLR1 or TLR6 upon ligand binding and interacts with myeloid differentiation factor 88 (MYD88) to initiate a signaling cascade that results in activation of key transcription factors, including NF-κB (12). Previous studies have demonstrated that TLR2 ligation in RA FLSs enhances proinflammatory cytokine secretion (13), and TLR2 blockade significantly inhibits cytokine secretion (14), indicating an indispensable role of TLR2 signaling in RA development. Therefore, targeting TLR2 and the downstream signaling pathways may serve as a potential therapeutic approach in RA treatment.
Baicalin (7-glucuronic acid, 5,6-dihydroxyflavone) is a flavonoid compound isolated from the root of Scutellaria baicalensis (15), which possesses multiple pharmacological effects, including anti-inflammatory, antioxidative, antiapoptotic and immune regulatory properties (16-18). Previous studies have suggested that baicalin could ameliorate collagen-induced arthritis (CIA) in animal models (19,20). In addition, baicalin can attenuate periodontitis and lipopolysaccharide (LPS)-induced fever in rat models via inhibition of the TLR2 or TLR4/MYD88/p38 mitogen activated protein kinase (MAPK)-NF-κB signaling pathways (21,22). However, whether TLR2 signaling is associated with the beneficial role of baicalin in CIA is not completely understood.
In the present study, a CIA rat model that is widely used to mimic the joint inflammation observed in human RA (23) was established to evaluate the potential therapeutic effect of baicalin in CIA. Alterations to the serum levels of proinflammatory cytokines and the expression of TLR2, MYD88 and NF-κB p65 in response to baicalin treatment were examined to investigate the mechanisms underlying the therapeutic effects of baicalin in CIA. The present study indicated that baicalin may serve as a promising therapeutic agent to target the inflammatory response and TLR2/MYD88/NF-κB signaling in RA.
Materials and methods
Animals and CIA modeling. A total of 100 male Sprague-Dawley (SD) rats (age, 8 weeks; weight, 180±20 g) were obtained from the Experimental Animal Center of Ningxia Medical University (certificate no. SCXK 2015-0001). The rats were fed and housed in a standard pathogen-free environment with a temperature of 22±2˚C, 55±5% humidity, 12-h light/dark cycles and free access to food and water. All animal care and procedures described in the present study were approved by the Ethics Committee of Ningxia Medical University (approval no. 2015-125). All animal experiments were performed in accordance with the Guidelines for the Care and Use of Animals published by the P.R. China Ministry of Health (January 1998) (24). The CIA model was established in 84 rats as previously described (13,14). The normal group consisted of the remaining 16 rats. Briefly, 0.1 ml bovine collagen II (Sigma-Aldrich; Merck KGaA) emulsified in complete Freund's adjuvant (Sigma-Aldrich; Merck KGaA) was administered twice at 7-day intervals by intra-dermal injection into the back, base of tail and the footpad of each rat. At 14 days after the initial immunization, each rat was administered with a booster subcutaneous injection at the base of tail. After the booster injection, the degree of joint redness and swelling were evaluated using a 5-grade arthritis scoring method, as previously described (15,16). The paw thickness (mm) of each rat was measured with a Vernier caliper every 7 days during the establishment of the CIA rat model (28 days). At 28 days after the initial immunization, 80 rats with an arthritis score of ≥2 were selected as CIA model rats.
Baicalin treatment. At 28 days after the initial immunization, the CIA model rats were randomly divided into five groups (n=16 rats/group): i) Model; ii) Tripterygium Glycosides Tablet (TGT; Yuanda Pharmaceutical Huangshi Feiyun Pharmaceutical Co., Ltd.; 10 mg/kg/day); iii) 15 mg/kg/day baicalin (purity 98%; Nanjing Chunqiu Biological Engineering Co., Ltd.); iv) 30 mg/kg/day baicalin; and v) 60 mg/kg/day baicalin. The doses of baicalin used in the present study were determined based on a previous study (1). TGT was dissolved in normal saline and baicalin was dissolved in saline (pH 8.0). Rats were administered once a day with TGT or baicalin by oral gavage. Rats in the normal and model groups were administered once a day with an equivalent volume of saline by oral gavage. After 40 days of treatment, the rats were fasted for 8 h (with free access to water) and then anaesthetized with 10% chloral hydrate (300 mg/kg) via intraperitoneal injection. Blood (~5 ml) was obtained from the retro-orbital plexus and maintained at room temperature for 2 h. Serum was collected by centrifugation at 2,000 x g, 4˚C for 15 min and stored at -80˚C until further use. Subsequently, the rats were sacrificed by cervical dislocation. No signs of peritonitis were observed prior to sacrifice. The synovium was immediately isolated from the hind knee joint, washed with sterile normal saline and stored at -80˚C.
Cell culture and treatment. Normal human fibroblast-like synoviocytes (HFLS; cat. no. 338586; BeNa Culture Collection) and RA human fibroblast-like synoviocytes (HFLS-RA; cat. no. 337864; BeNa Culture Collection) were maintained in synoviocyte growth medium (Gibco; Thermo Fisher Scientific, Inc.) supplemented with 100 IU/ml penicillin and 100 mg/ml streptomycin at 37˚C in a humidified atmosphere of 5% CO₂. Passage 4-7 cells were used for subsequent in vitro experiments. HFLS-RAs were serum-starved for 12 h at 37˚C prior to baicalin treatment (25, 50 or 100 µg/ml) for 48 h at 37˚C. Untreated HFLSs were used as controls.
Radiographic assessment. Following sacrifice, the rat hind paws were subjected to X-ray scans to observe the radiologic alterations using a MRAD-D50S RADREX-I system (Toshiba Medical Manufacturing Co., Ltd.) with the following parameters: 40 kV, 100 mA and 0.02 millisecond. Images were read and scored by a blinded independent radiologist as previously described (25): 0, no radiologic alterations; 1, mild alterations with tissue swelling and edema; 2, moderate alterations with joint erosion and disfiguration; 3, severe alterations with bone erosion and osteophyte formation. The radiologic score for each rat was expressed as the sum of the results of both hind paws.
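To make the aggregation concrete: each hind paw is graded 0-3 and the per-rat score is the sum of both paws. The small helper below is a hypothetical illustration of that arithmetic; the function name and input validation are ours, not part of the published protocol.

```python
# Hypothetical sketch of the per-rat radiologic score described above:
# each hind paw is graded 0-3, and the rat's score is the sum of both paws (0-6).
RADIOLOGIC_GRADES = {
    0: "no radiologic alterations",
    1: "mild alterations (tissue swelling and edema)",
    2: "moderate alterations (joint erosion and disfiguration)",
    3: "severe alterations (bone erosion and osteophyte formation)",
}

def rat_radiologic_score(left_hind_paw: int, right_hind_paw: int) -> int:
    """Return the sum of the two hind-paw grades, each on the 0-3 scale."""
    for grade in (left_hind_paw, right_hind_paw):
        if grade not in RADIOLOGIC_GRADES:
            raise ValueError(f"grade must be 0-3, got {grade}")
    return left_hind_paw + right_hind_paw

print(rat_radiologic_score(2, 1))  # moderate + mild -> total score of 3
```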
Hematoxylin and eosin (H&E) staining. Rat synovium was fixed using 4% paraformaldehyde for 48 h at room temperature, dehydrated using a graded alcohol series, cleared with xylene and embedded in paraffin at room temperature. Tissues were cut into 5-µm thick paraffin sections using a tissue microtome. The deparaffinized tissue sections were stained with H&E for 10 min at room temperature. Synovial hyperplasia was observed in three randomly selected fields of view using a CKX41SF inverted microscope (Olympus Corporation; magnification, x200).
Reverse transcription-quantitative PCR (RT-qPCR). Total RNA was isolated from rat synovial tissues or in vitro cultured human cells using TRIzol® (Invitrogen) according to the manufacturer's instructions. Total RNA was reverse transcribed into cDNA at 42˚C for 60 min, then 70˚C for 10 min using M-MLV reverse transcription kit (Promega Corporation), according to the manufacturer's protocol. RT-qPCR was performed using a 20 µl volume containing 1.5 µl cDNA, 0.5 µl forward primer (10 pmol/µl), 0.5 µl reverse primer (10 pmol/µl), 10 µl SYBR-Green mix (Bio-Rad Laboratories, Inc.) and 7.5 µl deionized distilled H₂O. The sequences of the primers (Sangon Biotech Co., Ltd.) used for qPCR are listed in Tables I and II. The following thermocycling conditions were used for qPCR: Initial denaturation at 94˚C for 15 min; followed by 40 cycles of denaturation at 94˚C for 15 sec, annealing at 60˚C for 34 sec, and extension at 72˚C for 15 sec; and final extension at 72˚C for 10 min. mRNA expression levels were quantified using the 2^-ΔΔCq method (26) and normalized to the internal reference gene GAPDH.
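The 2^-ΔΔCq arithmetic referenced above can be reproduced in a few lines. The sketch below uses hypothetical Cq values purely for illustration; only the formula itself comes from the cited method.

```python
# Minimal sketch of the 2^-ΔΔCq relative-quantification method, with GAPDH as the
# internal reference gene. All Cq values here are hypothetical placeholders.

def relative_expression(cq_target, cq_ref, cq_target_ctrl, cq_ref_ctrl):
    """Fold change of a target gene in a sample relative to a control sample."""
    delta_cq_sample = cq_target - cq_ref              # normalize sample to GAPDH
    delta_cq_control = cq_target_ctrl - cq_ref_ctrl   # normalize control to GAPDH
    delta_delta_cq = delta_cq_sample - delta_cq_control
    return 2 ** (-delta_delta_cq)

# Hypothetical example: TLR2 in baicalin-treated vs. untreated cells.
fold = relative_expression(cq_target=26.4, cq_ref=18.1,            # treated sample
                           cq_target_ctrl=24.9, cq_ref_ctrl=18.0)  # untreated control
print(f"TLR2 fold change vs. control: {fold:.2f}")  # < 1 indicates downregulation
```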
Statistical analysis. Data are expressed as the mean ± standard deviation of three experiments (n=6 rats in each group). Statistical analyses were conducted using SPSS software (version 17.0; SPSS, Inc.). Comparisons among multiple groups were analyzed using one-way ANOVA followed by Tukey's post hoc test. P<0.05 was considered to indicate a statistically significant difference.
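The analysis itself was run in SPSS; as a rough illustration of the same pipeline (one-way ANOVA followed by Tukey's post hoc test), the Python sketch below applies it to made-up cytokine values. The group labels and numbers are hypothetical.

```python
# Hedged sketch of one-way ANOVA + Tukey's HSD on hypothetical serum IL-6 values
# (pg/ml); the study itself used SPSS 17.0, so this only mirrors the analysis logic.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
groups = {
    "normal": rng.normal(40, 5, 6),   # 6 rats per group, as in the study design
    "model": rng.normal(95, 8, 6),
    "baicalin_60": rng.normal(60, 7, 6),
}

# One-way ANOVA across the three groups
f_stat, p_val = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

# Tukey's post hoc test on the pooled observations
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```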
Baicalin treatment alleviates collagen-induced joint injury in rats.
To investigate the potential therapeutic effects of baicalin on RA, a CIA rat model that is widely used to mimic the joint inflammation observed in patients with RA was established (23). The onset of arthritis in the model group occurred at day 7 following primary collagen immunization, as evidenced by a notable increase in hind paw thickness from day 7 to day 28 (Fig. 1B) and clinical manifestations, such as functional impairment and swollen red paws, compared with the normal group (Fig. 1A). No joint destruction was observed in the normal group, whereas notable bone erosion was observed in the model group (Fig. 2). Compared with the model group, the radiographic change in the 15 mg/kg baicalin group was minimal. Moreover, moderate alterations were observed in rats treated with TGT or 30 mg/kg baicalin, whereas obvious alleviation was observed in the joints of rats treated with 60 mg/kg baicalin compared with the model group (Fig. 2). Consistently, the mean radiologic scores of rats receiving TGT, 30 mg/kg baicalin and 60 mg/kg baicalin were significantly lower compared with the model group (P<0.01; Fig. 2), suggesting that both TGT and baicalin could alleviate collagen-induced bone erosion and destruction of the joints. CIA model rats receiving 60 mg/kg baicalin exhibited the most potent protection, as indicated by the lowest radiologic score recorded among the five different groups (Fig. 2). Subsequently, histologic analysis of synovial tissue from the knee joint after 40 days of treatment was performed to further evaluate the therapeutic effect of baicalin. H&E staining indicated that the normal group exhibited a normal synovium appearance, whereas the model group demonstrated notable histological abnormalities, including synovial thickening, inflammatory cell infiltration and excessive proliferation of synovial fibroblasts. Compared with the model group, baicalin-treated rats displayed a dose-dependent improvement to histological changes, further indicating the beneficial effect of baicalin on collagen-induced joint injury (Fig. 3).
Baicalin inhibits the production of IL-1β, TNF-α and IL-6.
RA is a chronic inflammatory disease (6); therefore, whether baicalin could modulate the inflammatory process in CIA model rats was investigated. The serum levels of proinflammatory cytokines, including IL-1β, TNF-α and IL-6, were significantly increased in the model group compared with the normal group (P<0.01), whereas the serum levels of proinflammatory cytokines were significantly decreased in the TGT (P<0.01), 30 mg/kg baicalin (P<0.01) and 60 mg/kg baicalin (P<0.05 for TNF-α; P<0.01 for IL-1β and IL-6) groups compared with the model group, suggesting an anti-inflammatory role of baicalin in CIA model rats (Fig. 4).
Baicalin suppresses the TLR2/MYD88/NF-κB signaling pathway in the synovial tissue of rats. To further investigate the mechanism underlying the anti-inflammatory effect of baicalin in CIA model rats, the expression levels of TLR2, MYD88 and NF-κB p65 in the synovial tissue from the knee joint of rats were measured, as TLR signaling is extensively involved in the inflammatory response in human RA (27). Compared with the model group, treatment with different doses of baicalin significantly attenuated CIA-induced mRNA and protein expression of TLR2 and MYD88; however, TLR2 protein expression levels were not significantly decreased by 15 mg/kg baicalin compared with the model group. Similar results were observed for the protein expression levels of NF-κB (Fig. 5). The results suggested an involvement of the TLR2/MYD88/NF-κB signaling pathway in the development of CIA in rats.
Baicalin suppresses the TLR2/MYD88/NF-κB signaling pathway in cultured HFLSs. Subsequently, the expression levels of TLR2, MYD88 and NF-κB p65 in cultured HFLSs were measured. Compared with untreated HFLS-RA cells, the mRNA expression levels of TLR2 and MYD88, as well as the protein expression levels of TLR2, MYD88 and NF-κB p65, were significantly decreased in baicalin-treated HFLS-RA cells. However, compared with untreated HFLS-RAs, the protein expression level of NF-κB p65 was not significantly decreased by 25 µg/ml baicalin (Fig. 6). The results were consistent with those observed in CIA model rats, further indicating the involvement of the TLR2/MYD88/NF-κB signaling pathway in synovial inflammation.

Figure 2. Therapeutic effects of baicalin on joint destruction in collagen-induced arthritis model rats. After 40 days of treatment, the rats were sacrificed. In each group, six rats were randomly selected and their hind limbs were subjected to radiographic assessment using the following scoring method: 0, no radiologic alterations; 1, mild alterations with tissue swelling and edema; 2, moderate alterations with joint erosion and disfiguration; 3, severe alterations with bone erosion and osteophyte formation. As indicated by the red arrows, no joint destruction was observed in the normal group, whereas notable bone erosion was observed in the model group. **P<0.01 vs. normal; ##P<0.01 vs. model. TGT, tripterygium glycosides tablet; Ba, baicalin.
Discussion
The CIA model is the most widely used animal model for RA studies due to the similar pathological and arthritic presentations between CIA model animals and patients with RA (23). In the present study, CIA was induced in male SD rats, and a notable increase in hind paw thickness was observed in the model group compared with the normal group from day 7 post-primary collagen immunization. In the later stages of CIA model induction, acute inflammation gradually subsided, and the redness and swelling of the hind paws gradually diminished. As chronic inflammation continued, CIA model rats displayed clinical manifestations, such as functional impairment, and notable histological abnormalities in the synovial tissue, such as synovial thickening and inflammatory cell infiltration. The results indicated that the CIA model was successfully established in the present study. The results also indicated that baicalin dose-dependently alleviated joint injury in CIA model rats. Furthermore, to the best of our knowledge, the present study demonstrated for the first time that baicalin could suppress the TLR2/MYD88/NF-κB signaling pathway in the synovial tissue of CIA model rats as well as in HFLS-RAs in vitro, suggesting that blockade of TLR2/MYD88/NF-κB signaling may be associated with the beneficial role of baicalin in CIA model rats.
The pathogenic mechanism underlying RA is complex, with genetic, environmental and immunological factors contributing to RA incidence and development (28,29). Increasing evidence indicates that various immune cells and cytokines are involved in the pathogenesis of RA (30). In the present study, a notable increase in the number of inflammatory cells in the synovial tissue and a significant increase in serum levels of IL-1β, TNF-α and IL-6 were observed in CIA model rats compared with normal rats, which was consistent with previous studies (31,32). It has been reported that baicalin exerts anti-inflammatory effects in various disorders. For example, baicalin inhibits LPS-induced inflammation caused by endotoxic shock (21). Baicalin administration also inhibits inflammatory cell infiltration and expression of proinflammatory cytokines in an animal model of multiple sclerosis (33). Similar results were observed in the present study, highlighting the potential of baicalin as an anti-inflammatory therapeutic agent in RA.
The TLR2/MYD88/NF-κB signaling pathway serves an essential role in RA development by enhancing the secretion of proinflammatory cytokines and matrix metallopeptidases (10-14). The present study indicated that the mRNA and protein expression levels of TLR2, MYD88 and NF-κB p65 in synovial tissue, as well as the systemic levels of IL-1β, TNF-α and IL-6 were significantly increased in CIA model rats compared with normal rats. Previous studies demonstrated that baicalin inhibits the TLR2 or TLR4/MYD88/p38 MAPK/NF-κB signaling pathways in periodontitis and LPS-induced fever in rats (21,22). In addition, suppression of synovial NF-κB p65 expression is involved in baicalin-relieved joint inflammation in CIA model rats (20). In the present study, following treatment with different doses of baicalin, the amelioration of synovial lesions was accompanied by significant suppression of the TLR2/MYD88/NF-κB p65 signaling pathway, suggesting that the beneficial effect of baicalin on CIA may be mediated via inhibiting activation of the TLR2/MYD88/NF-κB signaling pathway in the synovial tissue of CIA model rats.
Collectively, the results of the present study suggested that baicalin exerted an anti-inflammatory effect on CIA model rats, and may serve as a potential therapeutic agent in RA. The inhibitory role of baicalin in collagen-induced inflammatory responses may be mediated by suppression of the TLR2/MYD88/NF-κB p65 signaling pathway.
The present study had two main limitations. Firstly, a large number of rats were used; secondly, TGT treatment was not included in the in vitro experiments because a sterile TGT solution was not available: the TGT tablets contain starch that easily blocks the mesh of the filter during the filtration and degerming process. | 2020-07-30T02:10:20.045Z | 2020-07-28T00:00:00.000 | {
"year": 2020,
"sha1": "a6cf815f22cb87c87ced2c3401db4a823cc7d6a6",
"oa_license": "CCBYNCND",
"oa_url": "https://www.spandidos-publications.com/10.3892/mmr.2020.11369/download",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "f42a36297e69336827cb1f49ee495c940f72d066",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
235379048 | pes2o/s2orc | v3-fos-license | Community Based Essential Newborn Care Practices and Associated Factors among Women Who Gave Birth at Home in Last Twelve Months in Amaro Woreda, Southern Ethiopia, 2019
Introduction:- Significant numbers of women are giving birth at home; in this context, community based newborn care is a means of bringing life-saving care to mothers and newborns at the community level. However, its practice is challenging within the Ethiopian health system. Objective:- The aim of this study was to assess the prevalence of community based newborn care practices and its associated factors among women who gave birth at home in Amaro Woreda, southern Ethiopia, 2019. Methods:- A cross-sectional study was conducted on 490 women in the reproductive age group of 15-49 years in Amaro district, with individuals recruited using a simple random sampling technique. Data were collected through face-to-face interviews at household level. EpiData version 3.1 was used for data entry and SPSS version 20 was used for data cleaning, management and analysis. Bivariate and multivariate logistic regression analyses were employed to identify factors associated with community based newborn care practices. Results:- A total of 490 study participants were included in the analysis and only 29% practiced community based essential newborn care. Educational status of the father (AOR=2.28; 95% CI: 1.07-4.84) and mother (AOR=0.35; 95% CI: 0.16-0.75), last delivery assisted by relatives/friends (AOR=3.58; 95% CI: 1.66-7.73), having awareness about community based newborn care (AOR=3.49; 95% CI: 2.11-5.77), awareness about newborn danger signs (AOR=2.18; 95% CI: 1.29-3.68) and having a birth preparedness and complication readiness plan (AOR=3.52; 95% CI: 1.97-6.29) were identified as independent factors associated with community based newborn care practice. Conclusion and recommendation:- Around three-fourths (71%) of mothers were not practicing community based newborn care. Educational status of the family, awareness about community based newborn care and newborn danger signs, and having a birth preparedness and complication readiness plan were significantly associated with its practice.
Introduction
Community based newborn care (CBNC) is a means of bringing life-saving care to mothers and newborns at the community level within the Ethiopian health system (1). Through CBNC, the government aims to strengthen the primary health care unit (PHCU) and the Health Extension Program, which is a platform for community-based primary care delivery (1). Building on lessons learned from Integrated Community Case Management of childhood illness (ICCM), the implementation of CBNC used the following guiding principles to ensure rapid, high-quality implementation: government leadership and ownership, spanning the continuum of care, balance between preventive and curative care at the community level, quality service, community participation, strong health system support and phased implementation approach and partnership (2).
The goal of the CBNC program is to reduce newborn mortality through strengthening the primary health care unit and the Health Extension Program. Globally, around 4 million neonatal deaths occur annually, which accounts for 38 percent of under-five mortality. A similar number of babies are stillborn, and 99% of all neonatal deaths occur in low and middle income countries (4). According to the Ethiopian Demographic and Health Survey (EDHS) 2016, the neonatal mortality rate was 29 deaths per 1,000 live births. The risk of death is highest in the first 24 hours of life, when more than half of deaths occur, and about three-quarters of all neonatal deaths occur within the first week of life (5).
Despite Ethiopia's remarkable reductions in infant and under-5 mortality and achievement of Millennium Development Goal (MDG) 4 three years ahead of the deadline, the reduction in neonatal mortality has not been as impressive (1). The 2016 EDHS results show that the neonatal and infant mortality rates for the 5 years before the survey were 29 and 48 deaths per 1,000 live births, respectively. In other words, in Ethiopia 1 in every 35 children dies within the first month and 1 in every 21 children dies before celebrating the first birthday (6). Although the Ethiopian government has implemented many health interventions, such as training health workers, enhancing the referral system, integrating health services, implementing the packages of the Health Extension Program (HEP) and routine immunization, neonatal death is still high, placing Ethiopia among the top ten countries in Africa (7,8). Findings of studies conducted in different areas of Ethiopia with regard to the level of CBNC practice documented low comprehensive practice of essential newborn care (9,10). A family community package promoting good home care of the newborn, particularly cleanliness, warmth provision and exclusive breastfeeding, would be expected to reduce the neonatal mortality rate (NMR) by 10 to 40 percent, varying with the baseline NMR and the potential for accessing care. The effect might be greater if the package successfully addressed harmful local practices.
As indicated by EDHS 2016, nearly 83% of deliveries in Ethiopia occur at home, so maternal and newborn health in this setting needs more investigation for better planning and promotion. Outreach services such as prenatal care alone have an effect of about 10 percent on NMRs, but when they are combined with a family package using community health promoters, an additional 30 percent reduction in the NMR is projected in Ethiopia (11).
According to the 2017/2018 annual maternal and newborn health report of the Segen Area Peoples Zone, performance was 63% for the fourth ANC visit and 71% for skilled delivery coverage. In addition, for Amaro district specifically, government reports reveal postnatal care service coverage of 68% and skilled delivery coverage of 32.09%. Therefore, obtaining evidence on the level of community based essential newborn care practices and its associated factors will facilitate interventional measures to avert the preventable causes of neonatal death.
Hence, this study aimed to assess community based essential newborn care practices and associated factors among women who gave birth at home in the last 12 months preceding the survey in Amaro woreda, southern Ethiopia, 2019.
Methods and Materials
Study area, design and period
This study was conducted in Amaro district in the Segen Area Peoples Zone, Southern Ethiopia. The district is located 695 km south of Addis Ababa and 465 km south of the regional capital city, Hawassa. The district has a total population of 193,219, of whom 25,505 are women of childbearing age. The district contains 33 rural kebeles and 2 urban kebeles, and is served by one primary hospital, seven health centers, 39 health posts and 25 private health facilities (21). Health Extension Workers at health posts provide maternal, child and neonatal services such as ANC, PNC, immunization, growth monitoring, the outpatient therapeutic program, ICCM/CBNC and first aid. At community level, Health Extension Workers perform preventive health services through regular home-to-home visits and curative services at the health post (22,23).
A community based descriptive cross-sectional study was conducted from January-February 2019 in Amaro district, Southern Ethiopia.
Population, Sample Size Determination and Sampling Procedure
The source population was women in the reproductive age group of 15-49 years who practiced home delivery in Amaro district. Accordingly, women of reproductive age who gave a live birth at home in the 12-month period preceding the survey were taken as the study population. The sample size was determined using the single population proportion formula, considering a desired level of confidence (95%), margin of error (5%), non-response rate (10%), design effect (1.5) and the estimated prevalence of community based essential newborn care practices (26.7%) (10).
The final sample size for this study was 495.
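As a check on that figure, the calculation can be reproduced as below; the exact result depends on how intermediate values are rounded, so this sketch lands within a couple of the reported 495.

```python
# Single population proportion formula, n = Z^2 * p * (1 - p) / d^2, then inflated
# by the design effect and the anticipated non-response rate.
import math

z = 1.96            # 95% level of confidence
p = 0.267           # estimated prevalence of CBNC practice
d = 0.05            # margin of error
design_effect = 1.5
non_response = 0.10

n0 = (z ** 2) * p * (1 - p) / (d ** 2)    # base sample size, ~300.7
n_deff = math.ceil(n0) * design_effect    # adjust for clustering, ~451.5
n_final = math.ceil(n_deff * (1 + non_response))
print(n0, n_deff, n_final)  # ~300.7, 451.5, 497 (the reported 495 reflects
                            # slightly different rounding at intermediate steps)
```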
To make the sample more representative, around 31% (11 of the 35 kebeles) of Amaro district was selected using the lottery method. Then, proportional allocation of the sample size to each kebele was made. Finally, computer-generated random numbers were employed to recruit mothers from the family folders in the community health information system. Mothers who delivered in a health institution, caregivers of babies whose mothers had died, mothers who were seriously ill during data collection and mothers who had recently delivered a dead fetus were excluded from the study.
Variables of the study and Operational definitions
Community based essential newborn care practice was the outcome variable, while the independent variables were: age of mother, education of mother, education of father, sex of neonate, occupation of mother, occupation of father, number of pregnancies, number of deliveries, number of children, number of ANC visits, attendant at birth, birth preparedness plan, mother's awareness about newborn danger signs, counseling from a health extension worker (HEW) or advice from a CHV (WDA leader), exposure to media and husband involvement.
Community Based Essential Newborn Care Practiced: In this study, those mothers who practiced all three essential newborn care practices (delayed bathing, safe cord cutting and early initiation of breast feeding within one hour of birth) were considered to have practiced CBNC (9).
Delayed bathing: The recommended practice of delaying bathing of a newborn for at least the first 24 hours after birth to reduce the risk of hypothermia (10).
Early initiation of breast feeding: The recommended practice of putting a newborn to the mother's breast within one hour of birth (10).
Safe cord cutting: The practice of cutting a newborn's cord with an instrument from a clean home delivery kit, a new blade or a boiled blade (15).
Data collection methods and tools
Structured questionnaires were designed in English and translated into Amharic and the local language. A pre-test was carried out on 10% of the sample size in Burji District, a neighboring district, and necessary corrections were made prior to the actual data collection. Data were collected through face-to-face interviews at selected households by four BSc-qualified health professionals. Training was given to the data collectors and supervisor on the objectives of the study, the content of the questionnaire, issues related to confidentiality of the responses and the rights of the respondents during data collection. They were also informed about proper data handling and systematic answering of respondents' questions. Supervisors checked the data for completeness daily after collection, and the principal investigator randomly cross-checked the data before entry. Finally, the overall data collection process was controlled by the principal investigator.
Discussion
This study aimed to assess community based essential newborn care practices and associated factors among women who gave birth at home in the last 12 months preceding the survey in Amaro woreda, southern Ethiopia. Of the total study participants included in the analysis, 142 (29%) practiced CBNC. However, 60.2% of neonates received clean cord care, 63.1% of mothers initiated breast-feeding within one hour and 59.5% of neonates received appropriate bathing, which is comparable with a study conducted in Southwest Ethiopia (18).
The prevalence of community based essential newborn care (CBNC) practice in this study is higher than that reported by other studies from Amhara (23.1%) and Aksum (26.7%) (9,10). This discrepancy may be due to the fact that some of the former studies included both home and facility deliveries.
In the current study, paternal education was found to have a statistically significant association with community based essential newborn care practices (9,17-19). This might be because education has a positive effect on healthy practices for mothers, children and families alike. However, in the current study, mothers able to read and write were 65% less likely to practice CBNC than illiterate mothers; this needs further investigation.
Mothers whose last delivery at home was assisted by relatives or friends were found to be 3.58 times more likely to practice CBNC than those assisted by a health worker at home. This may be due to poor quality of counseling by HEWs or health workers, resulting in strong peer pressure to change their views towards CBNC practice.
Having awareness about CBNC is one of the most important factors reported in this population (15,19). Beyond the factors reported above, the level of community based essential newborn care (CBNC) practice was 3.49 times higher among mothers who had awareness about CBNC than among those who were not informed. Neonatal danger signs have become a substantial problem in many developing countries like Ethiopia. In this regard, the health-seeking behavior of mothers for neonatal care relies heavily on their knowledge of neonatal danger signs, which has hardly been investigated (24).
In this community-based study, individuals who had awareness about newborn danger signs were 2.18 times more likely to practice community based essential newborn care (CBNC) compared with their counterparts. This finding is in line with community based studies conducted in Enugu state, South-East Nigeria, and in North West Ethiopia (24,25). It might be explained by the fact that mothers with a positive awareness of neonatal danger signs know better how to practice CBNC.
Birth preparedness and complication readiness is a comprehensive strategy and matrix that includes shared responsibility among the woman and her family, the community, healthcare providers, the facilities that serve them, and the policies that affect care for the woman and the newborn (26). Accordingly, mothers having a birth preparedness and complication readiness plan were 3.52 times more likely to practice community based essential newborn care (CBNC) than their counterparts. This finding is in line with a finding in Kofale District, South East Ethiopia (26), and may be explained by the fact that having a birth preparedness and complication readiness plan positively affects mothers' practice towards community based essential newborn care.
Unlike many other study findings, this study did not reveal any association of community based essential newborn care (CBNC) practice with the number of ANC visits during the last pregnancy, the number of fetuses at the last birth, early PNC for the last birth by a HEW or counseling about CBNC in the last 12 months. This might be explained by variations related to the study population and setting, and by socio-demographic, socio-economic and cultural differences.
One strength of this study is that its findings can be generalized to similar settings and populations in other parts of the country, since it is a community based study using primary data.
Conclusion
In conclusion, around 71% of mothers who gave birth at home and participated in this study were not practicing essential newborn care, which is far lower than in many other studies.
The study identified both positive and negative factors related to CBNC practice.
Accordingly, formal paternal education, delivery assisted by relatives or friends, awareness about CBNC, awareness about newborn danger signs and having a birth preparedness and complication readiness plan were positively associated with essential newborn care practice.
Availability of data and material
The datasets used for the current study are available from the corresponding author on reasonable request.

Figure 1. Community based newborn care practice status of study participants, Amaro district. | 2019-09-17T01:08:02.609Z | 2019-08-23T00:00:00.000 | {
"year": 2019,
"sha1": "646fb0c6024cc26f380fa68a13bedcf45156f892",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-4199/v1.pdf",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "8f85fd5b5ab9a5f29be0307c83dd105343bc6ff9",
"s2fieldsofstudy": [
"Medicine",
"Sociology"
],
"extfieldsofstudy": []
} |
29186833 | pes2o/s2orc | v3-fos-license | Locomotor performance of cane toads differs between native-range and invasive populations
Invasive species provide a robust opportunity to evaluate how animals deal with novel environmental challenges. Shifts in locomotor performance, and thus the ability to disperse (especially the degree to which it is constrained by thermal and hydric extremes), are of special importance, because they might affect the rate at which an invader can spread. We studied cane toads (Rhinella marina) across a broad geographical range: two populations within the species' native range in Brazil, two invasive populations on the island of Hawai'i and eight invasive populations encompassing the eastern, western and southern limits of the toad invasion in Australia. A toad's locomotor performance on a circular raceway was strongly affected by both its temperature and its hydration state, but the nature and magnitude of those constraints differed across populations. In their native range, cane toads exhibited relatively low performance (even under optimal test conditions) and a rapid decrease in performance at lower temperatures and hydration levels. At the other extreme, performance was high in toads from southern Australia, and virtually unaffected by desiccation. Hawai'ian toads broadly resembled their Brazilian conspecifics, plausibly reflecting similar climatic conditions. The invasion of Australia has been accompanied by a dramatic enhancement in the toads' locomotor abilities, and (in some populations) by an ability to maintain locomotor performance even when the animal is cold and/or dehydrated. The geographical divergences in performance among cane toad populations graphically attest to the adaptability of invasive species in the face of novel abiotic challenges.
Introduction
Environmental conditions drive much of the variation in organismal performance, at a variety of ecological and evolutionary timescales. At one extreme, abiotic factors directly constrain the performance of individuals [1-9]; and at the other extreme, thermal and hydric conditions can act as selective forces on performance capabilities of a population and, ultimately, a species [10-14]. At intermediate timescales, physiological acclimation can adjust the immediate impact of abiotic variation on performance [15-20]. While acclimation of individuals usually occurs only over the range of conditions likely to be encountered by a population, sustained exposure to novel environmental challenges can favour evolutionary changes that extend the range of conditions over which an organism can operate successfully [10-12,14].
Many environments worldwide are experiencing rapid changes in thermal and hydric conditions, and such changes look set to continue at an increasing pace [21]. Will existing adaptations to ancestral climates prevent organisms from functioning effectively in shifting abiotic conditions? Invasive species provide an ideal opportunity to answer that question for two reasons. The first reason is that introduced populations are subjected to environmental conditions very different from those experienced in the native range. That difference allows us to investigate the rate at which a novel selective pressure or direct environmental influence can modify ancestral performance traits. In performance traits that are simultaneously affected by multiple abiotic factors (e.g. temperature and moisture), comparisons between native-range and invasive populations also can clarify the degree to which correlated responses to multiple constraints can be teased apart by plasticity or evolution. The short and well-documented timeframes of many invasions, coupled with a capacity for behavioural and physiological flexibility and rapid evolutionary change in many introduced taxa [22-26], allow us to compare closely related organisms that are exposed to very different challenges [27,28]. Second, locomotor performance is a central determinant of dispersal rate, a trait that can evolve very rapidly within invasive populations [29,30]. How quickly an animal moves through its environment, and how well it can withstand climatic extremes while doing so, can directly influence an invasive species' dispersal rate and thus, its distribution. For an invader moving into a novel climate, a high level of performance under ancestral conditions is not enough; it must also be able to perform in the newly encountered conditions [31,32].
Locomotor ability of anurans is well suited to analyses of these questions, because an anuran's ability to disperse may depend upon both its temperature and its hydration state [6,7,31,33-37]. To understand how local environments constrain locomotor performance in such an animal, we thus need to take our measurements over a range of abiotic conditions [36]. To understand how adaptation and/or developmental plasticity have modified those norms of reaction, we can compare animals that have been subjected to different selective pressures and environmental experiences [36].
Cane toads (Rhinella marina; formerly Bufo marinus) are large bufonid anurans native to relatively hot, wet areas of Central and South America [38]. In the early part of the twentieth century, the species was introduced to more than 40 countries around the world, in a misguided attempt at biological control of insect pests [22-26,38]. Some of those recipient countries have climates similar to those within the toads' native range, but others pose novel thermal and hydric challenges. Notably, the toads' invasion across tropical Australia has exposed it to climates much drier and hotter than occur within its native range [27,28,39-42]. Given that anuran locomotor performance is constrained both by temperature and by moisture [27,28], how has the toads' invasion of novel climatic zones affected the sensitivity of its locomotor ability to thermal and hydric conditions? Specifically, has the cane toad's invasion of Australian areas with harsh climates been accompanied by an enhanced tolerance of hot and desiccating conditions?
Study species
Cane toads (R. marina) are large (exceptionally, to greater than 1 kg) 'true toads' (family Bufonidae) [26]. The species' native range extends from Mexico and southern Texas through an extensive area of Central and South America [22,26,38]. Adult cane toads are heavyset terrestrial anurans that feed on a diversity of invertebrate prey [26,38]. The toads' large body sizes and prodigious appetites encouraged commercial sugarcane growers to import toads to control insect pests in plantations in Puerto Rico; and from there, 150 toads were translocated to Hawai'i in 1932 and released in sugarcane fields [22,43]. Toads are still common on the major Hawai'ian islands, where the animals are relatively sedentary [44].
The 101 descendants of the Hawai'ian immigrants were collected in 1935 and shipped to northeastern Australia, where thousands of their progeny were released along the Queensland (QLD) coast [22]. The anurans have since spread widely through tropical and subtropical regions of Australia, inflicting major impacts on populations of anuran-eating predators [23-25,45]. In Australia, cane toads thrive not only in well-watered regions of coastal northeastern Australia, but also in severely arid regions [28,46]. In recent years, toads have also invaded cold montane regions of southeastern Australia [20,41]. Dispersal rates are dramatically higher for individuals at the (western) invasion front than in individuals from range-core areas of eastern Australia [29,47]. Analyses of climatic correlates of cane toad distributions led Tingley et al. [48] to conclude that cane toads occupy a wider range of climatic conditions in Australia than in their native range.
Sampling locations
We collected adult toads (both males and females, ranging from 50 to 300 g) from locations in their native range (Brazil), in Hawai'i and in Australia. All toads were collected by hand at night, placed in damp cloth bags and kept in a moist, cool environment to reduce stress. Following capture, toads were transported to local laboratory facilities for the experiments. All toads collected were tested, and sample numbers are detailed in table 1.
Toads from the native range were collected in Manaus, Amazonas (AM) and Alter do Chão, Pará (PA) in Brazil. Fieldwork occurred during January and February 2015, a warm and wet time of year.
During June and July 2015, we collected toads on the island of Hawai'i (HI, USA), from sites in the extreme east (Hilo) and extreme west (Kailua-Kona) of the island. The eastern (windward) side is humid and warm, whereas the western (leeward) side is much drier, due to a rainshadow effect coupled with highly porous volcanic soils [44]. Cane toads are broadly distributed through the landscape on the (wetter) eastern side of the island, but largely restricted to anthropogenically watered sites (golf courses) on the drier western side [44].
In Australia, we collected toads from eight sites. Two sites in Western Australia (WA) (Oombulgurri, Kununurra) were in the extreme west of the species' range, close to the invasion front (less than 2 years post-colonization). The climate is hot year-round and seasonally arid. Another two sites were in the Northern Territory (NT) (Katherine, Leaning Tree Lagoon) in an area exposed to a similar but less harsh climate than that encountered further to the west. Our two QLD sites (Townsville, Charters Towers) experience moister conditions year-round. Lastly, our two sites in New South Wales (NSW) (Brooms Head and Tabbimoble) are close to the current southeastern invasion front, and experience cooler (but generally moist) conditions (table 1 for details of site locations, invasion history, climatic conditions and sample sizes). Climate data were sourced from Climate-Data.org [49].
Husbandry and experimental methods
After capture, toads were allowed to acclimate in laboratory conditions for at least two weeks. During acclimation (and between trials), we fed the anurans crickets and mealworms, and provided ad libitum access to water and shelter. The room was set to 25°C, with a 12 L : 12 D cycle. Prior to their first locomotor trials, we maintained the toads at the test temperature for a minimum of 2 h, during which time we kept them in water to ensure full hydration. We emptied the toads' bladders by gently applying pressure to the abdomen, and then encouraged them to run along a circular wooden track inside a temperature-controlled room, and stimulated them to keep moving by gentle pokes to their urostyles if needed. The trial continued for 10 min. We recorded the number of laps, plus additional distance on the uncompleted final lap for each individual. Because all trials were the same duration, we used total distance travelled as our index of locomotor performance. To correct for size variation among toads (snout-vent lengths (SVL) ranged from 85.6 to 183.9 mm), we expressed distances in terms of body lengths travelled during a trial as the dependent variable in our statistical analyses-we will refer to this variable as 'performance'.
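A minimal sketch of how this performance index could be computed from the raw trial records; the track circumference is a hypothetical placeholder, since the raceway dimensions are not reported here.

```python
# Performance index: total distance travelled in the 10-min trial, expressed in
# body lengths. TRACK_CIRCUMFERENCE_CM is an assumed value for illustration only.
TRACK_CIRCUMFERENCE_CM = 400.0  # hypothetical circumference of the circular track

def performance_in_body_lengths(laps, extra_cm, svl_mm):
    """(complete laps + partial final lap), divided by snout-vent length."""
    distance_cm = laps * TRACK_CIRCUMFERENCE_CM + extra_cm
    svl_cm = svl_mm / 10.0  # SVL was measured in mm
    return distance_cm / svl_cm

# Example: 4 complete laps plus 150 cm on the final lap, toad SVL of 120.5 mm
print(f"{performance_in_body_lengths(4, 150.0, 120.5):.1f} body lengths")
```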
After the initial trials, we placed toads in desiccating conditions (exposed to a flow of dry air), and allowed them to dehydrate overnight until they lost 10% of their initial (fully hydrated, empty bladder) body mass. Toads were not placed in a chamber, but rather stayed overnight in mesh containers, inside the temperature-controlled room at 25°C and with airflow provided by the room's air-conditioning system and fans. This allowed for a slower, gentler, more natural water loss than in wind tunnels/chambers. All further desiccation processes occurred in this same manner, regardless of the temperature being tested during the locomotor trials. We repeated the locomotor test protocol over the following days to measure locomotor performance of toads at 90%, 80% and 70% of their body mass. After the final trials, we placed toads back into water (at room temperature) to allow full recovery. Each toad was tested at 15, 25 and 35°C (except for Hawai'ian toads, in which case only 25 and 35°C trials were performed because we were unable to maintain low temperatures in our test facilities in Hawai'i). We tested all animals at 25°C first, because it was the temperature of acclimation and close to the natural environmental temperature (i.e. it allowed us to establish a baseline performance). Following that, we tested each animal at 15°C, and lastly at 35°C.

Table 1. Details of toad sampling sites and the annual climatic conditions of each location. Climatic data sourced from Climate-Data.org [49].
Statistical analyses
Using the open-access software R STUDIO v. 0.99.893 [50], we used linear mixed models (the lmer function in the lme4 package) to evaluate the fixed effects of test conditions (temperature and hydration level) and location on the toads' locomotor performance. We included individual ID no. as a random factor in the analyses to accommodate multiple measures taken from individual toads. Collection site and country were combined as groups (per state for Australian toads, and per country for Hawai'ian and Brazilian toads). We treated test temperature and desiccation level as three- and four-level categorical variables, respectively. We ran three sets of analyses. First, we combined data from all toads to quantify overall patterns of the influence of the two categorical variables (test temperature and dehydration level) on locomotor performance. Next, we included country of origin (Brazil, USA, Australia) as an additional fixed factor, to compare locomotor responses with environmental conditions at this broad level. Lastly, we restricted the dataset to Australian toads only, and included state of origin (QLD, NSW, NT, WA) as a fixed effect, to look in more detail at the divergence in locomotor traits over the 80-year history of toad invasion in this continent. Residuals from all analyses conformed to assumptions of normality and homoscedastic variances. The data used in these analyses were deposited at Dryad Digital Repository [51].
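The model structure described above (fixed effects of test temperature, hydration level and origin, with individual ID as a random intercept) was fitted with lmer in R; a rough Python analogue using statsmodels is sketched below. The column names and input file are assumptions made purely to make the specification concrete.

```python
# Hedged Python analogue of the lme4::lmer model used in the study:
# performance ~ temperature * hydration * country + (1 | toad_id).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("toad_performance.csv")  # hypothetical long-format data file

model = smf.mixedlm(
    "performance ~ C(temp_c) * C(hydration_pct) * C(country)",  # categorical fixed effects
    data=df,
    groups=df["toad_id"],  # random intercept for each individual toad
)
result = model.fit()
print(result.summary())
```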
Overall effects of temperature and hydration on locomotor performance of toads
We combined data from all populations to explore overall impacts of temperature and hydration on locomotor performance. Toads exhibited wide variation in locomotor ability, as a function of test conditions as well as traits of toads (total distance travelled ranged from 30 to 18 560 cm over the 10-min test). ANOVA on the combined data showed that a toad's locomotor performance was affected by a significant interaction between its temperature and its hydration level (F 6,1955 = 30.39, p < 0.001; figure 1; see also electronic supplementary material, table S9). Thus, the effect of hydration on locomotion depended on the temperature. At all tested temperatures, there was a steep decrease in locomotion when toads were dehydrated below 80%. However, at the lowest test temperature (15°C), locomotor performance was significantly better at 80% hydration and remained stable at higher levels (90%, 100%), while at higher test temperatures (25°C, 35°C), performance was significantly lower at 100% hydration than at 80 and 90% (based on general patterns in figure 1; see also electronic supplementary material, table S1). Furthermore, toads travelled farther under warmer conditions, and the significant interaction term reflects a trend for locomotor performance to be low, regardless of hydration level, at the lowest test temperature (figure 1).
Comparison of toads from Brazil versus Hawai'i versus Australia
We explored the effects of temperature, hydration level and country of origin in a linear mixed-effects ANOVA model, including individual identity as a random effect. The model resulted in a significant three-way interaction term (F 8,1938 Australian toads. By contrast, decreasing hydration level from 90 to 80% dramatically reduced locomotor performance of Hawai'ian toads at both 25 and 35°C. This decrease was less marked in Brazilian toads, and minimal in Australian toads (electronic supplementary material, tables S2-S4). When tested at low temperature, Australian toads travelled almost twice as far as did either Brazilian or Hawai'ian toads. Brazilian and Hawai'ian toads were greatly affected by dehydration, especially at high temperatures. At 70% hydration and 35°C, Hawai'ian toads were rendered immobile, which necessitated excluding them from this treatment combination (figure 2).
Comparison among toads from different locations in Australia
Among Australian toads, a significant three-way interaction (F 18,1399 = 2.67, p < 0.001; see also electronic supplementary material, table S11) indicated that the manner in which temperature and hydration level affected locomotor performance depended on which population the toad was from. Comparing among Australian populations, toads from NSW had much higher locomotor performance than did toads from any of the other populations. The NSW animals not only had higher performance at 90 and 80% hydration level at all tested temperatures, but also were more resilient to the 70% hydration treatment (i.e. exhibited less decrease in performance: figure 3). Toads from all Australian regions performed best at 35°C, at all hydration levels (figure 3; electronic supplementary material, tables S5-S8).
Following the models analysed above, we performed pairwise comparisons of significant factors and interactions using the Tukey method, within each population, to elucidate patterns of effect. Both test temperature and hydration level significantly affected toad performance, but in different ways in different populations (electronic supplementary material, tables S5-S8).
Discussion
Test conditions (temperature and hydration level) strongly affected the locomotor performance of our cane toads, consistent with previous studies on other anuran species, including bufonids [17,28,31,34,36,52,53]. Although we encouraged them to continue moving, toads travelled shorter distances when they were cool and/or dehydrated. Importantly, performance of these animals was also affected by an interaction between these two factors, such that the effect of desiccation on distance travelled differed among test temperatures. Thermal impacts on performance were relatively straightforward in all tested populations (both native and introduced), with toads moving further at higher temperatures. However, hydric condition (level of desiccation) had more complex effects. At warmer temperatures, toads performed optimally at intermediate levels of hydration, and travelled shorter distances at hydration levels above and below this optimum (figure 1). At warmer temperatures, Australian and Hawai'ian toads unexpectedly exhibited a decrease in locomotor performance when hydration level increased from 90 to 100%. Full hydration may impose a physiological constraint, or alter behavioural factors that make toads less inclined to perform to the same level in experimental trials [54]. Only a few studies have looked into the combined effects of temperature and hydration levels in R. marina. Tingley et al. [28] compared the effect of different hydration levels in two populations of toads (a mesic and a semi-arid population). However, they only tested locomotion at approximately 25°C, with no 70% hydration treatment; also, their toads were dehydrated rapidly in a wind tunnel, whereas in our study toads were dehydrated overnight. Slower dehydration better mimics the natural situation, and may elicit more natural behavioural and physiological responses. Tingley et al. [28] suggested that locomotor performance of toads from a mesic area (corresponding to our QLD populations) declined more rapidly with dehydration than did that of toads from a semi-arid area (corresponding to our NT populations). Our study confirms this pattern, but including the WA and NSW populations reveals a greater complexity: locomotion of toads from the western invasion front (WA) is more sensitive to both temperature and dehydration than is that of toads from the southeastern invasion front (NSW). Although warmer temperatures were associated with better performance overall, the generally higher performance levels of toads from NSW suggest that lower temperatures (or greater thermal variance) pose more of a challenge to R. marina than does dehydration.
Tingley et al. [48] concluded that the southern limits of the cane toad's native range in Brazil are driven by competition with a closely related species (Rhinella schneideri) rather than abiotic constraints. However, our results show that cane toads perform poorly in cool dry environments, suggesting that abiotic factors may also be important. Australian toads performed better than native-range conspecifics under all tested conditions, suggesting that individuals in these invasive populations are behaviourally and/or physiologically more capable of tolerating extreme conditions. Consistent with the prediction that toads will exhibit enhanced locomotor performance as they invade harsher environments, our comparisons among toads from different parts of Australia also reveal finer-scale geographical divergences in the sensitivity of locomotor performance to test conditions (figure 3). For example, toads from the southeastern invasion front (NSW) continued to move along the raceway even when cool and highly dehydrated, whereas toads from tropical populations (QLD and NT) were more affected by dehydration and cold. The magnitude of these differences was striking: toads from NSW travelled an average of 39.08 ± 9.62 m at 15°C and 70% hydration, whereas toads from QLD travelled an average of 29.01 ± 12.13 m under the same conditions; Brazilian toads moved an average of 11.57 ± 5.68 m, and we were unable even to test Hawai'ian animals under those conditions because they were immobile and unresponsive.
The substantial divergences among populations might be due to phenotypic plasticity (responses to environmental conditions experienced during the animal's lifetime) and/or to adaptation that has occurred during the relatively brief period since the groups were separated. Toads were brought from the South American mainland to Puerto Rico in 1923, and perhaps earlier [43]. Puerto Rican toads were then translocated to Hawai'i in 1932, and from there to QLD in 1935 [43]. Given how short this period is for the evolution of such strong divergences in locomotor performance, the divergences may have been driven by phenotypic plasticity rather than by evolutionary adaptation. Because we worked with field-collected toads, their responses to temperature and desiccation may have been fashioned by each individual's own experiences prior to collection. Toads from different regions within Australia differ strongly in a suite of phenotypic traits related to dispersal rate (encompassing morphology, physiology and behaviour), and common-garden studies have shown that many of those geographical differences are heritable [29,55,56]. Nonetheless, we have no evidence that geographical shifts in the thermal and hydric sensitivity of performance also have a genetic underpinning.
What mechanisms may underlie the development of a phenotype capable of continuing to move around even in cool and dry conditions? The obvious explanation is geographical divergence in climates. Cane toads in their native range, and in Hawai'i, are exposed primarily to relatively stable warm moist conditions, through a combination of prevailing climates and active selection of moist habitats [44]. Resistance to cool dry environmental conditions is unlikely to have been an important determinant of fitness. By contrast, Australian populations of the cane toad have been exposed to novel (and often, harsh) abiotic conditions since their arrival in that continent 80 years ago [48]. An ability to continue dispersing even under such conditions may have conferred a significant fitness benefit in Australia, leading to the evolution (or phenotypically plastic manifestation) of a phenotype capable of maintaining locomotor performance even under abiotic extremes. In keeping with that interpretation, the Australian populations most resistant to cool conditions are those from NSW, the area where toads are most likely to encounter cool weather [20]. The sites where we collected toads also vary in features such as distances between potential spawning-ponds, and the array of local predators; such factors also may influence selection on locomotor ability. To know whether such ability is innate or phenotypically plastic (expressed only when such conditions are encountered during an animal's lifetime) would require studies on captive-raised animals.
The success of introduced species often depends on their ability to deal with challenges imposed by the abiotic environment, rather than local biotic resistance [57]. As a result, populations of invasive species can diverge in physiological responses as a result of being subject to different conditions in different areas [58,59]. Our data on cane toads accord well with those conclusions, and suggest that rapid adjustments to deal with novel abiotic challenges may allow cane toads to extend into larger areas of Australia than would be expected from their 'climatic envelope' within the native range, or within already-invaded parts of Australia. The implications for wildlife managers are clear: we may need effective means to control cane toads not only in the kinds of environments in which they currently occur, but also in other (cooler, drier) regions currently well outside the predicted final extent of cane toad distributions in Australia.
In summary, our data add to a growing picture of cane toads as extraordinarily flexible animals. The species evolved in relatively benign environments [48], favouring an ancestral condition of low resistance to thermal and hydric extremes. That sensitivity, if retained, would have precluded successful colonization by cane toads in many of the places to which it was translocated [26]. Instead, the cane toad changed rapidly after it encountered novel abiotic conditions in its introduced range. Despite the low numbers of founding individuals in successive bottlenecks in Hawai'i and QLD, and thus low genetic diversity [60], toads rapidly evolved distinctive modifications in a diverse suite of phenotypic traits [29,55,56], and managed to thrive and spread over large portions of the driest continent on Earth. This study suggests that the toads' success in Australia is also due at least partly to their ability to maintain effective locomotion even under hot and desiccating conditions. The toads' success in Australia is discouraging, because cane toads have had severe ecological impacts in that continent [45], and at the same time encouraging, because it testifies to the ability of organisms to deal with the kinds of novel environmental challenges that are increasingly being thrust upon them. And it reinforces a cautionary note from other studies: we cannot reliably infer the attributes of an invasive species from studies in the organism's native range, because the process of invasion may generate rapid divergence.

Authors' contributions. … and interpretation of the data; K.C. provided invaluable logistical support and appropriate equipment for experimentation, and helped draft and revise the article; R.S. conceived the study, participated in the design of the study, coordinated the study and helped draft the manuscript. All authors gave final approval for publication.
Competing interests. We declare we have no competing interests. Funding. Our work was sponsored by the Capes Foundation within the Ministry of Education, Brazil (grant no. BEX/13734-13-0) and the Australian Research Council (FL120100074). | 2017-09-24T17:51:28.913Z | 2017-07-01T00:00:00.000 | {
"year": 2017,
"sha1": "9cdcfbf4107296e9b7f44771dfed1ae7983334a3",
"oa_license": "CCBY",
"oa_url": "https://royalsocietypublishing.org/doi/pdf/10.1098/rsos.170517",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b8326a96c0f35ecde4d277d4f53b7ecb8049d8a2",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
17152018 | pes2o/s2orc | v3-fos-license | Staged interventional and surgical treatment of tetralogy of Fallot with critical stenosis of proximal aortic arch in premature hypotrophic newborn
Introduction
Tetralogy of Fallot (ToF) is the most common cyanotic congenital heart defect diagnosed in newborns. Depending on the degree of right ventricle outflow tract stenosis and clinical signs of cyanosis, the pathology necessitates intensive medical, interventional and surgical treatment in early infancy with anatomic correction before the first year of life. ToF is often accompanied by additional cardiovascular defects and congenital abnormalities apart from the circulatory system. Complex stenosis of the right ventricle outflow tract (RVOT) in ToF is rarely accompanied by any stenosis of the left ventricle outflow structures [1]. Regular treatment becomes more problematic in borderline low body weight patients suffering from prematurity and hypotrophy (birth weight < 2.5 kg), who usually do not meet regular criteria for surgery and cardiac interventions.
Case report
A 2-month-old premature hypotrophic newborn boy, 2.3 kg b.w., was referred to the emergency department with severe cyanosis in the course of postnatally diagnosed ToF. Peripheral saturations on air were less than 70%. In addition to dysmorphic features, the boy had undergone resection of a supernumerary thumb, and he had a history of intensive treatment of congenital pneumonia in a different institution. Despite the cyanosis, there was an evident upper-to-lower limb pressure gradient of 40 mm Hg, with diminished pulse in the femoral artery. Initial echo showed typical morphology of ToF with a right aortic arch, a malaligned 12 mm ventricular septal defect (VSD), a non-restrictive right-to-left shunt, 70% overriding aorta, a hypertrophied right ventricle (RV) and a severely stenotic, dysmorphic pulmonary valve (PV) in a hypoplastic pulmonary trunk (PA). The peripheral pulmonary arteries were of acceptable size, with a McGoon index > 1.5, although the intracardiac defects were accompanied by critical stenosis of the proximal aortic arch, 2 mm wide with a 40-50 mm Hg echocardiographic pressure gradient. Cardiac catheterization revealed a right aortic arch, right and left internal carotid arteries extending from the ascending aorta, and the right subclavian artery arising from the distal part of the aortic arch. The left subclavian artery filled via collateral circulation; it probably arose from the closed ductus arteriosus. The patient was qualified for cardiac intervention with balloon plasty of the aortic arch (Figure 1). During the procedure early restenosis was found, caused by high stiffness of the stenotic area, so we decided to implant a stent. Direct measurements of the stenotic aorta precluded the use of a regular pediatric aortic stent in the premature arch; therefore a coronary stent was implanted (Multi-Link 8, 4 mm/8 mm, Abbott Vascular, Illinois, USA), deployed to 4.5 mm (Figure 2). Control transthoracic echocardiography (TTE) showed unrestricted flow in the aortic arch. The boy was then referred for percutaneous RVOT balloon plasty 7 days later. After right ventriculography, which showed a severely stenosed PV and a 5 mm wide RVOT, a single-approach triple balloon plasty was performed. The direction of flow over the unrestricted VSD changed to left-to-right. Because of previously diagnosed right-sided iliac and femoral artery thrombosis, local fibrinolysis was performed (120 min alteplase infusion). During administration of intravenous alteplase (0.2 mg/kg/h) with heparin (15 U/kg/h), arterial flow was restored in the thrombosed femoral vessels. Oral aspirin at 3 mg/kg b.w. was administered. During a month of treatment in the Department of Pediatrics, symptoms of dysphagia with anxiety and cyanosis were observed, with bidirectional interventricular shunt features on control TTE. A regular modified Blalock-Taussig (BT) shunt was performed in the now 2.8 kg boy via right-sided thoracotomy with a 3.5 mm Gore-Tex tube. After the shunt, the baby was referred for additional RVOT balloon plasty. At the very beginning of anesthesia induction, despite adequate shunt flow, he fell into a severe cyanotic spell requiring intensive medical treatment and cardiac massage; the procedure was abandoned. Three days later the 2.9 kg b.w.
premature patient underwent a successful surgical correction of the ToF with extensive resection of the hypertrophied RV and an intraoperative hybrid pulmonary balloon plasty. The stent was removed with resection of the stenosed aortic segment, and a combined end-to-end anastomosis between the brachiocephalic trunk and the left carotid artery was performed by means of deep hypothermic circulatory arrest (DHCA). The patient was extubated on the day of the operation. Transient atrioventricular (AV) block requiring temporary epicardial pacing was observed in the early postoperative course; further treatment was uneventful. Because of oral feeding difficulties the boy was treated in the Department of Gastroenterology and then discharged home 3 weeks after surgery. He was regularly followed up in the outpatient clinic. Echocardiography showed a closed VSD, mild residual RVOT stenosis of up to 35 mm Hg and free flow in the aortic arch. The patient returned to his regular rehabilitation protocol, typical for resolving the physical and developmental problems of prematurity.
Discussion
Our clinical routine is early correction of ToF with staged balloon plasty of the RVOT, β-blocker treatment in patients with evident RVOT hypertrophy, and a preventive BT shunt at any sign of emergency due to severe hypoxia [2]. We prefer comprehensive therapy from birth, with prenatal diagnostics and a precise treatment plan. In our observation, the strategy of staged balloon RVOT plasties, with RVOT stenting in some cases, reduces muscular hypertrophy of the right ventricle and improves the early postoperative course after correction of ToF.
Stenosis of the proximal aortic arch is rarely observed in patients with ToF. The aortic stenosis causes additional systemic overload alongside the typical signs of cyanosis, and this can complicate the initial diagnostics; we therefore stress precise echocardiography in every problematic patient. Stenting the arch resolved the problem of left ventricular overload before anatomic correction of the ToF.
Our strategy is applied in every patient; we therefore believe that staged interventional treatment could also be beneficial for borderline patients with accompanying cardiovascular anomalies. We have previously shown that low body weight and prematurity are independent surgical risk factors for babies with complex cardiac problems [3].
In our experience, low body weight is no longer a contraindication for percutaneous interventional procedures, and we have successfully introduced invasive treatment even in premature babies. Nevertheless, we keep in mind that cardiac interventions in small premature babies carry additional problems, which may require additional treatment [4].
In the present case, an adequate shunt flow did not protect the baby from cyanotic spells, which could be related to additional comorbidities (prematurity, low gestational age, bronchopulmonary dysplasia, low body weight) [5].
Effective treatment of congenital cardiovascular problems in the early infancy of the present premature patient, despite additional clinical disadvantages, supported his intensive general rehabilitation [6].
Conclusions
Modern alternative hybrid and staged interventional strategies supported with intraoperative imaging combine the advantages of surgery and interventional cardiology to treat more complicated patients despite their suboptimal condition. | 2016-05-06T18:23:32.523Z | 2016-02-11T00:00:00.000 | {
"year": 2016,
"sha1": "c09dece41a9a24ee0cc3358175e67e8d75539036",
"oa_license": "CCBYNCSA",
"oa_url": "https://www.termedia.pl/Journal/-35/pdf-26606-10?filename=Staged%20interventional.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c09dece41a9a24ee0cc3358175e67e8d75539036",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
216177282 | pes2o/s2orc | v3-fos-license | Preparation of low cost catalysts for proton exchange membrane fuel cell
Nitrogen-doped reduced graphene oxide (NG) with a high nitrogen level was synthesized by facile pyrolysis. NG has been attracting attention because of its high catalytic activity toward the oxygen reduction reaction (ORR) and its reduced cost. For the synthesis of NG, graphene oxide (GO) and urea as the N-precursor were dispersed in ethanol, and the mixture was treated in an ultrasonic bath for 30 min. The resulting slurry was transferred to a tube furnace and pyrolyzed at 300°C and 800°C (NG300 and NG800) at a heating rate of 2.5 °C/min under N2 atmosphere for 30 min. The morphology and structure of the nitrogen-doped reduced graphene oxide were investigated by scanning electron microscopy (SEM) and X-ray photoelectron spectroscopy (XPS). The XPS spectra indicated that NG300 had the highest N 1s peak intensity; the nitrogen contents of NG300 and NG800 were about 15.5 wt% and 6.6 wt%, respectively. Furthermore, the high-resolution N 1s spectra were analyzed and deconvoluted into three N chemical states: pyridinic-N, pyrrolic-N, and graphitic-N. The electrochemical properties of NG were determined by cyclic voltammetry (CV) and linear sweep voltammetry (LSV). The results show that the NG800 catalyst yielded the highest electrochemical activity, particularly for the ORR, over GO and NG300. Thus, the N atoms doped into the graphene were responsible for the ORR catalytic activity, providing a higher density of active sites and better conductivity. Moreover, NG can be applied as a supporting material for non-precious-metal catalysts in fuel cells.
Introduction
The oxygen reduction reaction (ORR) is of central interest in polymer electrolyte membrane fuel cells (PEMFCs). Pt is still the best ORR catalyst, but it is an expensive and rare metal. This makes the price of catalysts less competitive, prompting much research into alternative electrocatalysts with good ORR activity [1]. Therefore, the synthesis of catalysts without platinum-group metals, called non-platinum-group-metal (Non-PGM) catalysts, has attracted attention and been developed for electrochemical reactions, especially in PEMFCs.
The structure of a Non-PGM catalyst consists of a platinum-free support into which nitrogen atoms are doped, for example, N-doped carbon. The structure of N-doped carbon is rather complex, with various N functionalities incorporated into the carbon framework, as in N-doped graphene and N-doped graphene oxide. Moreover, N-doped carbon is responsible for ORR catalytic activity [2]. Various N precursors can be used, such as ammonia, urea and melamine.
Good catalytic properties include small particle size, high surface area, good durability and high dispersion. Graphene oxide (GO) is often chosen for catalysts because it disperses well in water and in organic solvents such as methanol, ethanol, ethylene glycol and DMF. Furthermore, GO has a high surface area and favorable electrical properties [3]. Moreover, thermally reducing GO (to rGO) can produce an even higher surface area. On the other hand, a major disadvantage of this process is structural damage caused by the high temperature, which leads to mass loss and a significant decrease in the mechanical strength of the rGO [4].
To prepare N-doped carbon, Degang Li et al. [5] prepared N-doped graphene from graphene and urea by a pyrolysis method, with urea as the nitrogen source. The samples were reduced in flowing air with heating at 350 °C, and the nitrogen content reached as high as 18.7 wt%. Moreover, Ziyin Lin et al. [2] prepared N-doped graphene, a catalyst with good ORR activity, using a pyrolysis method with materials similar to ref [5] but a higher reduction temperature of 800 °C under flowing nitrogen gas. Their NG gave better electrical and chemical performance than a catalyst prepared without nitrogen flow.
In this study, graphene oxide (GO) and urea (as N-precursor) were used to synthesize N-doped carbon by pyrolysis at 300 °C and 800 °C in a nitrogen atmosphere. GO was thermally reduced during pyrolysis while N atoms were doped into the rGO, producing N-doped rGO, denoted here as NG.
NG was prepared by pyrolysis of GO with the N precursor. First, 0.5 g of GO and 0.6 g of urea were dispersed in 30 ml of ethanol and treated in an ultrasonic bath for 30 min [5]. The solvent was evaporated from the mixture on a hotplate at 60-100 °C under stirring until a brown slurry was obtained. The slurry was then transferred to a tube furnace and heated to 300 °C or 800 °C for 30 min under ultra-high-purity N2 gas at a heating rate of 2.5 °C/min. The products were black sheets, which were crushed with a mortar and denoted NG300 and NG800.
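As a quick check on the heating schedule, the ramp times implied by the stated 2.5 °C/min rate can be computed in a few lines (the 25 °C starting temperature is an assumption; the initial furnace temperature is not stated):

```python
# Back-of-envelope ramp times for the pyrolysis step (2.5 deg C/min).
RATE_C_PER_MIN = 2.5
START_C = 25.0  # assumed room temperature; not stated in the protocol

for target_c in (300.0, 800.0):
    ramp_min = (target_c - START_C) / RATE_C_PER_MIN
    print(f"Ramp to {target_c:.0f} C: {ramp_min:.0f} min "
          f"({ramp_min / 60:.1f} h), then a 30 min hold under N2")
```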
Physical characterization
Surface morphologies of the N-doped reduced graphene oxide (NG) were investigated by scanning electron microscopy (SEM). Surface elemental compositions were obtained by X-ray photoelectron spectroscopy (XPS) using a spectrometer equipped with a monochromatic Al anode.
Electrochemical measurements
The catalytic activities of the GO and NG catalysts were measured with a bi-potentiostat (Pine Instrument, USA) and a rotating-disk electrode (RDE) apparatus at room temperature. A stock solution of isopropanol (20%) and Nafion ionomer (0.02%) was prepared from 20 ml of isopropanol, 79.6 ml of deionized water and 0.4 ml of Nafion solution (5 wt%). To prepare the catalyst ink, 10 mg of catalyst was added to 5 ml of this stock isopropanol/Nafion solution [6]. A 10 µL aliquot of catalyst ink (i.e., GO, NG300 or NG800) was drop-cast to cover a glassy-carbon working electrode with a geometric area of 0.196 cm². A 3 M Ag/AgCl electrode was used as the reference electrode, and a Pt wire as the counter electrode.
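For orientation, the catalyst loading implied by this ink recipe (10 mg in 5 ml, with a 10 µL drop cast onto the 0.196 cm² disk) works out as follows:

```python
# Catalyst loading implied by the ink recipe above.
ink_mg, ink_ml = 10.0, 5.0       # 10 mg catalyst in 5 mL ink -> 2 mg/mL
drop_ul, area_cm2 = 10.0, 0.196  # 10 uL drop on the glassy-carbon disk

mass_ug = (ink_mg / ink_ml) * drop_ul  # mg/mL equals ug/uL, so ~20 ug
loading_ug_cm2 = mass_ug / area_cm2
print(f"{mass_ug:.0f} ug per electrode -> {loading_ug_cm2:.0f} ug/cm^2")
```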
Cyclic voltammograms (CVs) were recorded between 0.05 and 1.2 V vs SHE at a scan rate of 50 mV s-1 in 0.5 M H2SO4 under flowing N2 gas. Linear sweep voltammetry (LSV) for ORR analysis was performed between 0.4 and 1.0 V (SHE) on the RDE in O2-saturated 0.5 M H2SO4 at a scan rate of 2.5 mV s-1 and 1600 rpm. The CVs were recorded again after 2,000 LSV potential cycles, and the LSV data were recorded over the potential cycles to estimate the stability of the catalysts.
Physical characterization
Figure 1(a) shows high-resolution scanning electron microscope (JSM-IT300) images (5000× magnification) of single-layer graphene oxide (GO). The GO structure appears barely crystalline, with many interlayer spacings. In contrast, in figures 1(b) and 1(c), graphene sheets are observed, indicating that GO was completely converted to rGO at 300 °C and 800 °C and confirming the successful synthesis of NG [7].
The electronic states and chemical character of the catalysts were examined by XPS (figure 2). The XPS survey spectra of GO, NG300 and NG800 show the elements C, N and O, with C 1s, N 1s and O 1s peaks; the N 1s peak of GO is absent because no urea was used in its synthesis. Table 1 lists the elemental compositions of GO, NG300 and NG800 from the XPS analysis. NG800 had a very high carbon content of 88.3%, more than NG300 (66.5%) and GO (60.5%). On the other hand, the O content of NG800 was the lowest at 7.5%, versus 17.9% for NG300 and 39.5% for GO; the high temperature of the pyrolysis process reduces the oxygen content [8]. The N content of NG300 reached 15.5%, much higher than that of NG800 (6.6%) and of GO without N-doping. Figure 2. The XPS survey spectra of GO, NG300 and NG800.
The C 1s spectra of GO in figure 3(a) are deconvoluted into three Gaussian peaks at binding energies of 283.0, 285.3 and 286.5 eV, assigned to C-C, C-O-C and C=O groups, respectively [9]. The C 1s spectra of NG300 in figure 3(b), with binding energies of 285.0, 285.8 and 289.0 eV, correspond to C-C, C-O-C and O-C=O groups, respectively. Figure 3(c) shows the C 1s spectra of NG800, interpreted as C-C and C-O-C groups and a shake-up satellite (π-π*) peak, at binding energies of 285.0, 286.0 and 291.5 eV, respectively [10].
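A minimal sketch of the Gaussian deconvolution described here is given below, fitting three peaks to a C 1s region with scipy. The synthetic spectrum uses the GO peak positions quoted above; the data are assumed to be already background-subtracted (the Shirley-type background removal used in real XPS analysis is omitted), and the amplitudes and widths are invented for illustration.

```python
# Sketch: deconvolute an XPS C 1s region into three Gaussian components.
# Assumes background-subtracted data; peak centers follow the GO values
# in the text (283.0, 285.3, 286.5 eV); amplitudes/widths are invented.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, center, sigma):
    return amp * np.exp(-0.5 * ((x - center) / sigma) ** 2)

def three_gaussians(x, a1, c1, s1, a2, c2, s2, a3, c3, s3):
    return (gaussian(x, a1, c1, s1) + gaussian(x, a2, c2, s2)
            + gaussian(x, a3, c3, s3))

be = np.linspace(280.0, 292.0, 400)  # binding energy axis (eV)
true = three_gaussians(be, 100, 283.0, 0.7, 60, 285.3, 0.8, 40, 286.5, 0.9)
counts = true + np.random.default_rng(1).normal(0, 2, be.size)

p0 = [90, 283.2, 0.8, 50, 285.0, 0.9, 30, 286.8, 1.0]  # initial guesses
popt, _ = curve_fit(three_gaussians, be, counts, p0=p0)
for amp, center, sigma in popt.reshape(3, 3):
    print(f"peak at {center:.2f} eV (sigma {sigma:.2f} eV, amp {amp:.1f})")
```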
Moreover, the N 1s spectra of GO were fitted with three Gaussian peaks, shown in figure 3(d), with binding energies of 398.0, 399.7 and 400.4 eV, corresponding to pyridinic N, pyrrolic N and graphitic N, respectively. Figure 3(e) shows two peaks for NG300, pyridinic N and pyrrolic N, at binding energies of 398.7 and 400.5 eV, respectively. The N 1s spectra of NG800, shown in figure 3(f), have binding energies of 398.0, 399.0 and 401.2 eV, interpreted as pyridinic N, pyrrolic N and graphitic N, respectively. A graphitic N peak appeared in NG800, probably because of the high-temperature reduction step, whereas pyrrolic N disappeared, most of it being converted to pyridinic N and graphitic N [11]. Graphitic N enhances the active regions [12] by improving the conductivity of the material, allowing more electron transport in the electrode [8]. Figure 3. The spectra of C 1s at high resolution: (a) GO, (b) NG300 and (c) NG800, and N 1s peaks: (d) GO, (e) NG300 and (f) NG800.

Electrochemical characterization and ORR analysis

The CVs of the first cycle for all samples are shown in figure 4(a). NG300 shows a blunt hydrogen-desorption peak from 0.05 V to 0.30 V at positive current. After 2,000 electrochemical cycles (figure 4(b)), the hydrogen-desorption peak decreased significantly, indicating a loss of catalytic activity. On the other hand, there was no notable hydrogen-desorption peak for GO or NG800, implying that GO and NG800 were not active for the hydrogen oxidation reaction (HOR).
For the oxygen reduction reaction (ORR), the results are illustrated in figure 5. All samples were active for the ORR. As can be seen, the onset potential of NG800 is higher than the others, with an E_onset of -0.325 V vs SHE, indicating that the graphitic-N chemical state in NG800 is the most active for the ORR [13]-[15]. Furthermore, NG300 shows lower ORR activity than NG800 but higher than GO, owing to its pyrrolic-N and pyridinic-N states. Although NG300 contained more than twice as much nitrogen as NG800, its ORR activity was lower, mainly because of the influence of graphitic N in NG800.
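The criterion used to read E_onset from the LSV curves is not stated; one common convention, taking the potential where the cathodic current first exceeds a small fraction of the diffusion-limited current, can be sketched as follows (the 5% threshold and the synthetic sigmoidal wave are assumptions for illustration only):

```python
# Sketch of one common E_onset criterion for an ORR LSV curve.
import numpy as np

def onset_potential(potential_v, current_ma, frac=0.05):
    """Return the potential (scanning high -> low) where |i| first
    exceeds frac * |i_limiting|. The 5% threshold is an assumption."""
    order = np.argsort(potential_v)[::-1]       # high -> low potential
    e, i = potential_v[order], np.abs(current_ma[order])
    threshold = frac * i.max()                  # i.max() ~ limiting current
    return e[np.argmax(i > threshold)]          # first crossing

# Synthetic sigmoidal ORR wave (half-wave near -0.33 V) for illustration.
e = np.linspace(-0.6, 1.0, 500)
i = -1.0 / (1.0 + np.exp((e + 0.33) / 0.05))
print(f"E_onset ~ {onset_potential(e, i):.3f} V")
```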
Conclusions
In summary, the NG300 and NG800 catalysts were prepared by pyrolysis and applied for PEMFCs. In comparison, NG800 and NG300 showed higher ORR onset potentials than GO without N-doping. N-doped reduced graphene oxide prepared with urea showed enhanced ORR catalytic activity: urea and the doping conditions introduced N atoms, which enhance the active sites and improve the activity of the catalysts. This work opens up a new way to synthesize N-doped rGO, which is promising for PEMFCs and may be used alone or combined with other metals as a non-platinum-group-metal catalyst. | 2020-04-09T09:16:39.630Z | 2020-04-07T00:00:00.000 | {
"year": 2020,
"sha1": "e985facf360d0274b534fc2dc516d956fbcc18ea",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/463/1/012066",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "dd0b753afb2d86335a90879fd9e0d1092ad82264",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
272168397 | pes2o/s2orc | v3-fos-license | The Influence of Job Happiness and Innovative Work Behavior on Employee Performance
This study aims to analyze and determine the partial and simultaneous effects of job happiness and innovative work behavior on employee performance. The sample comprised 76 employees at the North Maluku Central Bureau of Statistics (BPS). The analysis was performed with SPSS version 27. The results indicate that: (1) job happiness has a positive and significant effect on employee performance; (2) innovative work behavior has a positive and significant effect on employee performance; and (3) job happiness and innovative work behavior together have a positive and significant effect on employee performance.
INTRODUCTION
One of the factors that influence the success of a company is the performance of its employees. Performance is the result of work achieved by a person or group of people in a company/agency in accordance with their respective responsibilities (Afandi, 2018: 83; Sumakud and Irvan, 2021: 430). Performance, as Bastian (2001: 392) puts it, is carried out to realize the goals, objectives, vision and mission of the organization as set out in its strategic plan. Skills, experience, sincerity and time efficiency determine the achievement of one's performance (Hasibuan, 2011); meanwhile, employee performance can also be influenced by job happiness. Job happiness, shown by a feeling of comfort, can improve an employee's performance (Astrama et al., 2019). The higher employees' happiness at work, the higher their performance will be. Edgar et al. (2018) said that employees' feelings of happiness or positive emotions at work can improve their performance.
According to Pryce-Jones (2010: 137), happiness at work is a person's positive feeling at any time, arising because they can know, manage and influence their world of work, which develops performance and provides satisfaction while working. Happiness at work is a positive emotion and positive activity consciously felt by individuals who give full attention to their work, so that they can increase their performance and potential optimally. However, the research evidence is contradictory: Pratama (2019) found that happiness at work has a negative and insignificant effect on employee performance, while Sumakud and Irvan (2021) found that job happiness has a significant effect on employee performance.
Innovative work behavior is also very much needed by organizations. According to Shalley et al. (2004), competition within a company can be created if there is innovative work behavior and a willingness from managers to support it. Every company must view it as important to motivate employees so that their performance increases through innovative and creative behavior (Demircioglu and Audretsch, 2017). De Jong & Hartog (2010) said that innovative work behavior is behavior aimed at the initiation and introduction of ideas, processes, procedures or new ways of working that are useful for the organization. This is supported by the research findings of Hadi et al. (2020) that innovative behavior has a significant positive effect on employee performance. Based on the explanation above, the purpose of this study was to analyze and determine the effect of job happiness and innovative work behavior, both partially and simultaneously, on employee performance.
LITERATURE REVIEW

1. Employee Performance
Performance is the level of achievement of the implementation of an activity to realize the goals, objectives, mission and vision of the organization as formulated in its strategic plan (Bastian, 2001; Ronaldo et al., 2019). The work results of an employee will determine the overall success of the organization, and several factors determine whether a person works as well as possible or not (Katili et al., 2021). According to Handoko (2001: 193), several factors affect employee performance: a) Motivation. An important driving factor that causes individuals to work is the desire that must be fulfilled; it is this desire that drives a person to work and get good results; b) Job satisfaction. Job satisfaction describes a person's feelings toward his work, seen in the positive attitude of employees toward their work and what they face in their work environment; c) Stress level. Stress is a condition that affects one's emotions and thought processes; stress levels that are too high can threaten a person's ability to deal with the environment and thereby interfere with work performance; d) Working conditions. Working conditions that can affect performance include the workplace, air circulation and lighting in the workspace; e) Compensation system. Compensation is the remuneration a person receives for what he has done for the company; f) Job design. Job design is the function of determining the work activities of an individual or group of employees organizationally.
Work Happiness
Happiness in life is a concept that refers to the positive emotions that a person feels as well as positive activities that the person likes (Seligman, 2005). Happiness in everyday life is characterized by more positive affect than negative affect. Happy people are healthier, more successful and easier to socialize with (Lyubomirsky and Diener, 2005), while Diener, Scollon and Lucas (2003) note that well-being is the scientific term used by researchers for happiness. Work is one of the life environments in which to find happiness; work is also one of the developmental tasks of adulthood that must be fulfilled (Putri, 2009). According to de Waal (2018), job happiness can make an organization more attractive. Job happiness, described as an employee's feeling of comfort, can improve performance (Astrama et al., 2019). Job happiness is a positive emotion, or feeling of comfort at work, that an employee has while working.
According to Wulandari (2014), five factors influence happiness at work: 1) Positive relationships with other people. The relationship between people is not just a passive connection but an activity that develops more productive, constructive and satisfying results; 2) Achievement. The results achieved from what is done or attempted; 3) Physical work environment. Everything around workers that can influence them in carrying out assigned tasks, including adequate work equipment such as lighting and air temperature; 4) Compensation. Everything workers receive as remuneration for the work done; and 5) Health. A state of physical, psychological and social well-being that enables a person to live productively, socially and economically. In addition, two broader factors influence employee job happiness: 1) factors from within a person, such as individual personality and consistency between job expectations and one's own abilities, and 2) factors from outside a person, such as an uncomfortable work environment and high workload.
Previous research has found that job happiness has a positive and significant effect on employee performance (Yasa et al., 2021; Mangowal et al., 2020; Ronaldo et al., 2019; Katili et al., 2021; Sumakud and Irvan, 2021). Meanwhile, Pratama (2019) reports a different result: that work happiness has a negative and insignificant effect on employee performance. Based on the above considerations, we propose the following hypothesis: H1. Job happiness has a positive effect on employee performance.
Innovative Work Behavior
Innovative work behavior is behavior aimed at achieving the initiation and introduction of new ideas, processes, procedures and methods that are useful for the organization (De Jong & Hartog, 2010). Innovative work behavior can also be seen in the amount of physical and psychological effort employees put into their work, both independently and in groups, to accomplish tasks aimed at developing innovation (Messmann, 2012). Every company must consider it important to motivate employees so that their performance increases (Demircioglu and Audretsch, 2017). The factors that influence innovative behavior are human factors, leadership factors and organizational-structure factors: the human factor functions as a support for innovation, while the leadership factor advances innovation among the individuals being led by appreciating their ideas (Ancok, 2012). Individual self-efficacy and capabilities can also influence innovative work behavior (Berliana and Arsanti, 2018). Individuals with high self-efficacy are more prepared to experiment through innovative work behavior and then implement it in their work environment, and with a strong capability orientation a person will keep trying to improve their innovative behavior to support their work and deliver the best results.
Previous research reported that innovative behavior has a positive effect on employee performance (Astuti et al., 2019; Muslim et al., 2021; Hadi et al., 2020; Vera and Tutuk, 2018; Mangowal et al., 2020). In contrast, the findings of Khodir and Makmur (2020) state that innovative work behavior has no significant effect on employee performance. Based on the above considerations, we propose the following hypothesis: H2. Innovative work behavior has a positive effect on employee performance.
H3. Job happiness and innovative work behavior simultaneously have a positive effect on employee performance.
Based on the background description, literature review and hypothesis development, the research model can be depicted as follows:
RESEARCH METHODS

Sample
This research was conducted at the North Maluku office of the Central Bureau of Statistics (BPS) with a total population of 76 employees. Sampling used a saturated sampling technique, in line with Sugiyono (2011), whereby all members of the population are sampled. The study used a research questionnaire administered directly at the workplace. Tabulation of the data from the 76 respondents showed that 40 were female and 36 were male. By age, 1% were under 25 years (1 respondent), 32% were 25-30 years (25 respondents), 36% were 31-40 years (28 respondents), 22% were 41-50 years (17 respondents), and 6% were over 50 years (5 respondents). By education level, 11 respondents (14%) had a high-school education, 29 (38%) a Diploma, 20 (26%) an undergraduate degree (S1), 14 (18%) a Master's degree, and 2 (2%) a Doctorate (S3). By years of service, 18 respondents (23%) had 1-3 years, 34 (44%) 4-6 years, 19 (25%) 7-10 years, and 5 (6%) more than 10 years.
Measurement
The variables were measured on a five-point Likert scale (strongly disagree = 1 to strongly agree = 5). Employee performance was measured with the indicators proposed by Dessler (2008): 1) Quality. The degree to which the result of a task approaches perfection, that is, matches the ideal way of performing an activity or meets its expected goals; 2) Quantity. The amount produced, expressed in terms of the number of units and the number of completed activity cycles; 3) Punctuality. The degree to which an activity is completed at the desired time, seen in the output and in the maximization of the time available for other activities; 4) Effectiveness. The degree to which organizational resources are used to maximize profit or reduce losses; 5) Independence. The degree to which an employee can carry out work functions without assistance or instructions from supervisors, or without requiring supervisor intervention to avoid outcomes detrimental to the organization; and 6) Work commitment. The degree to which employees are committed to and responsible toward the company or organization.
Job happiness was measured with the eight indicators of Hills and Argyle (2002): 1) Life is rewarding. Gratitude for the advantages and disadvantages given in life, which is very valuable, with much beauty to be found in it; 2) Mentally alert. A feeling of being mentally alert to challenges that arise; 3) Pleased with life. An emotional state that gives rise to joy and pleasure in what has been done; 4) Finding beauty in things. Being able to accept one's own situation and environment and adapt to the changes that occur in life, so as to feel beauty and well-being and achieve happiness; 5) Satisfied with life. A condition typical of people who have a passion for life and can adapt to changes in themselves and in their environment; 6) Well-organized time. With good time management, the 24 hours of the day are divided effectively according to a priority scale of activities; 7) Attractive appearance. Appearance is a way of showing our identity to others; people may even judge our true selves from it, for example from whether the clothes we wear are polite or provoke negative thoughts; and 8) Happy memories. Happy memories of the past that sometimes resurface for life in the present and the future.
Innovative work behavior was measured with the indicators of Janssen (2000) and Vandavasi et al. (2020): 1) Creating new ideas; 2) Searching for new work methods, techniques or instruments; 3) Generating solutions to problems; 4) Mobilizing support for innovative ideas within the organization; 5) Obtaining approval for innovative ideas within the organization; 6) Making organizational members enthusiastic about innovative ideas; 7) Turning innovative ideas into useful tools; 8) Introducing innovative ideas into systematic work practices; and 9) Evaluating the function of innovative ideas within the organization.
Data analysis method
Data were analyzed using both descriptive and inferential statistics. Multiple regression analysis was used to analyze the effect of the independent variables, job happiness (X1) and innovative work behavior (X2), on the dependent variable, employee performance (Y). This method was chosen because the researchers wanted to know how much influence job happiness and innovative work behavior have on employee performance, both partially and simultaneously.
Hypothesis test
Table 1 shows that the test of the coefficient of determination yielded an R-Square of 0.462; that is, 46.2% of the variance in employee performance can be explained by job happiness (X1) and innovative work behavior (X2), while the rest is explained by variables not included in this study. In the partial tests from the multiple regression output, the t-statistic is 4.784 for job happiness and 3.327 for innovative work behavior, while the critical t-value at the 0.05 significance level with df = n - k (76 - 3 = 73) is 1.666. Since both t-statistics exceed the critical value, H1 and H2 are accepted. Based on the data in Table 3, variables X jointly influence variable Y with a significance level of 0.001 < 0.05, so it can be concluded that H3 is accepted.
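The decision rule applied here can be reproduced in a few lines. The data frame below is a synthetic stand-in for the SPSS data set (column names and coefficients are hypothetical), and the critical value is one-tailed, matching the directional hypotheses H1-H2:

```python
# Reproduce the decision rule: compare each coefficient's t-statistic
# with the one-tailed critical t at alpha = 0.05, df = n - k = 76 - 3 = 73.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

n = 76
t_crit = stats.t.ppf(1 - 0.05, df=n - 3)
print(f"critical t (alpha = 0.05, df = 73): {t_crit:.3f}")  # ~1.666

# Synthetic stand-in data; variable names are hypothetical.
rng = np.random.default_rng(0)
df = pd.DataFrame({"happiness": rng.normal(4.0, 0.5, n),
                   "innovation": rng.normal(4.0, 0.5, n)})
df["performance"] = (0.5 * df["happiness"] + 0.3 * df["innovation"]
                     + rng.normal(0.0, 0.4, n))

X = sm.add_constant(df[["happiness", "innovation"]])
fit = sm.OLS(df["performance"], X).fit()
print(fit.tvalues)    # partial tests: compare with t_crit (H1, H2)
print(fit.rsquared)   # analogue of the reported R-Square
print(fit.f_pvalue)   # simultaneous F-test, analogue of H3
```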
Discussion
The statistical tests show that job happiness has a positive and significant effect on employee performance. This finding is supported by the item-level responses for job happiness: employees generally feel grateful for their current job, feel they can handle everything well, see many enjoyable things in their work environment, find interesting things, are satisfied with their work, feel they look attractive, and have many good memories of the past. This accords with the statement of Edgar et al. (2018) that employees' feelings of happiness or positive emotions at work can improve their performance. Positive feelings in the workplace, properly managed, provide satisfaction at work and drive improvements in employee performance. This result is consistent with the research of Yasa et al. (2021), Mangowal et al. (2020), Ronaldo et al. (2019), Sumakud and Irvan (2021), and Katili et al. (2021), who found that job happiness has a positive effect on employee performance.
This research also investigated innovative work behavior at work. The item-level findings indicate that employees generally think about how a work result can be improved, generate solutions to problems, look for new and better work methods, make their teammates enthusiastic about their new ideas, try to convince others to support their new ideas, explain their innovative ideas clearly to their environment, and contribute to the implementation of new ideas in the workplace. This is consistent with previous research finding that innovative work behavior has a positive effect on employee performance (Astuti et al., 2019; Muslim et al., 2021; Hadi et al., 2020; Vera and Tutuk, 2018; Mangowal et al., 2020).
Contribution of Theory and Practice
These findings have implications for the world of work: when interviewing prospective employees, it is advisable to recruit candidates with a positive history who are grateful for how they have lived, to support the non-physical work environment of each employee, and to encourage an attractive appearance at work, all of which should improve employee performance. For future researchers, the findings suggest examining how job happiness and innovative behavior vary with length of service, marital status, age, gender, education level and position, and exploring knowledge sharing in the workplace as a way to increase proactive work behavior.
CONCLUSIONS
Job happiness and innovative work behavior positively and significantly support the improvement of employee performance in the workplace.
| 2024-08-30T16:30:13.756Z | 2023-04-30T00:00:00.000 | {
"year": 2023,
"sha1": "d61a35b545baf049df640205b233628f0d616d82",
"oa_license": null,
"oa_url": "https://doi.org/10.33005/ebgc.v6i01.311",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "eb17ac4a8b11a01240af7a05db010e719a4dc314",
"s2fieldsofstudy": [
"Business",
"Psychology",
"Economics"
],
"extfieldsofstudy": []
} |
52946424 | pes2o/s2orc | v3-fos-license | Class-B CpG-ODN Formulated With a Nanostructure Induces Type I Interferons-Dependent and CD4+ T Cell-Independent CD8+ T-Cell Response Against Unconjugated Protein Antigen
There is a need for new vaccine adjuvant strategies that offer both vigorous antibody and T-cell mediated protection to combat difficult intracellular pathogens and cancer. To this aim, we formulated a class-B synthetic oligodeoxynucleotide containing unmethylated cytosine-guanine motifs (CpG-ODN) with a nanostructure (Coa-ASC16 or coagel) formed by self-assembly of 6-O-ascorbyl palmitate ester. Our previous results demonstrated that mice immunized with ovalbumin (OVA) and CpG-ODN formulated with Coa-ASC16 (OVA/CpG-ODN/Coa-ASC16) elicited strong antibodies (IgG1 and IgG2a) and Th1/Th17 cellular responses without toxic systemic effects. These responses were superior to those induced by a solution of OVA with CpG-ODN or by OVA/CpG-ODN formulated with aluminum salts. In this study, we investigated the capacity of this adjuvant strategy (CpG-ODN/Coa-ASC16) to elicit a CD8+ T-cell response and some of the underlying cellular and molecular mechanisms involved in the adaptive response. We also analyzed whether this adjuvant strategy allows a switch from a three-dose to a single-dose immunization scheme. Our results demonstrated that vaccination with OVA/CpG-ODN/Coa-ASC16 elicited an antigen-specific long-lasting humoral response and, importantly, high-quality CD8+ T-cell immunity with a single-dose immunization. Moreover, Coa-ASC16 promoted co-uptake of OVA and CpG-ODN by dendritic cells. The CD8+ T-cell response induced by OVA/CpG-ODN/Coa-ASC16 was dependent on type I interferons and independent of CD4+ T-cells, and showed polyfunctionality and efficiency against an intracellular pathogen. Furthermore, the cellular and humoral responses elicited by the nanostructured formulation were IL-6-independent. This system provides a simple and inexpensive adjuvant strategy with great potential for future rationally designed vaccines.
INTRODUCTION
Most current vaccines rely on antibody production for protection but fail to generate the robust T-cell immunity crucial for combating intracellular pathogens and cancer (1, 2). To overcome this challenge, new adjuvant strategies are being developed worldwide in experimental models or in human clinical trials (3-5). Among them, there is special interest in synthetic oligodeoxynucleotides containing unmethylated cytosine-guanine motifs (CpG-ODN), agonists of Toll-like receptor 9. The key features of CpG-ODN as a vaccine adjuvant, in contrast to currently licensed adjuvants, include the ability to elicit antibody responses, a Th1-like over a Th2-like CD4+ T-cell response and, under certain conditions, CD8+ T-cell immunity. Over the last decade many human clinical trials have been carried out with CpG-ODN, some of which have reached phase III in the vaccine area (6-9). Most recently, CpG-ODN has been used as the adjuvant in a vaccine (Heplisav-B) licensed by the FDA for the prevention of infection caused by Hepatitis B Virus in adults 18 years of age and older (10). However, the use of free CpG-ODN still presents some limitations, such as unfavorable pharmacokinetic/biodistribution patterns, high binding to plasma proteins, lack of specificity for target cells and poor cellular uptake, which restricts its bioavailability (9, 11-13). Hence, there is great interest in developing efficient strategies to overcome these difficulties and optimize the immunostimulatory activity of CpG-ODN. To this end, multiple strategies have been explored, such as nano/microparticles constructed in a variety of ways from different materials, and self-assembled DNA nanostructures. Although most of these formulations appeared promising, some also had problems, mainly related to manufacturing issues, such as the scaling-up of production, and to toxicity associated with cationic materials (11, 12, 14, 15).
In order to optimize the adjuvant activity of CpG-ODN, we formulated it with a nanostructure (Coa-ASC16 or coagel) formed by self-assembly of 6-O-ascorbyl palmitate ester (ASC16). Our previous results demonstrated that the nanostructured formulation of ovalbumin (OVA) and CpG-ODN with Coa-ASC16 (OVA/CpG-ODN/Coa-ASC16) remarkably enhanced humoral (IgG1, IgG2a) and cellular (Th1 and Th17) responses in comparison with the soluble counterpart (OVA/CpG-ODN) under a three-dose immunization scheme. When comparing the efficiency of CpG-ODN/Coa-ASC16 with CpG-ODN formulated in aluminum salts, we observed that immunization with OVA/CpG-ODN/Coa-ASC16 was significantly more efficient than with CpG-ODN/Al(OH)3 at inducing specific humoral (IgG1 and IgG2a), Th1 and Th17 cellular immune responses. In addition, our preclinical systemic toxicology studies performed at days 21 and 197 after the first immunization showed that CpG-ODN/Coa-ASC16 did not induce adverse biological effects (16).
ASC16 is an amphiphilic molecule composed of an ascorbic acid polar headgroup attached to a palmitic acid nonpolar hydrocarbon chain. When an aqueous dispersion of ASC16 is heated above the critical micelle temperature, at which the solubility reaches the critical micelle concentration, the aggregates form a gel phase; upon cooling below the critical micelle temperature, Coa-ASC16 is formed. Our previous studies showed that Coa-ASC16 is a hydrated crystalline phase whose lamellar structure produces at least one highly ordered dimension, so it exhibits sharp X-ray diffraction patterns and optical birefringence. The surfactant hydrocarbon chains have limited freedom of motion; with an interlayer distance of about 10 Å, water occupies the space between the surfactant lamellae (17). After addition of CpG-ODN and OVA (both hydrophilic components), the behavior of the H2O interlayers suggests that these components are situated in the aqueous interlamellar domain (18). A schematic picture of this system is shown in Figures 1A,B. Coa-ASC16 has many advantages that make it a very attractive platform for biomedical use: (i) it is formed by two biodegradable components (ascorbic acid and palmitic acid), (ii) ASC16 is listed as a Generally Recognized as Safe substance, and (iii) it is easy to prepare and inexpensive.
The purpose of the present study was to investigate the capacity of this new adjuvant strategy (CpG-ODN/Coa-ASC16), or of free CpG-ODN, to elicit a CD8+ T-cell response, together with some of the underlying cellular and molecular mechanisms involved in the adaptive response. We also analyzed whether this adjuvant strategy allows a switch from a three-dose to a single-dose immunization scheme. Our results demonstrated that vaccination with OVA/CpG-ODN/Coa-ASC16 induced a long-lasting humoral response and, importantly, high-quality CD8+ T-cell immunity with a single-dose immunization. The CD8+ T-cell response induced by OVA/CpG-ODN/Coa-ASC16 was dependent on type I interferons (IFN-I) and independent of CD4+ T cells, and showed polyfunctionality and efficiency against an intracellular pathogen. Furthermore, the cellular and humoral responses elicited by the nanostructured formulation were IL-6-independent. Therefore, the present work helps to elucidate the mechanism of action of this new adjuvant strategy and extends its potential use for future rationally designed vaccines.
Mice
Wild-type (WT) C57BL/6 mice were purchased from Fundación Facultad de Ciencias Veterinarias (Universidad Nacional de La Plata, La Plata, Argentina), and Il6 −/− and Cd8a −/− mice from Jackson Laboratory (Bar Harbor, ME, USA). Ifnar1 −/− mice were kindly provided by Dr. M. Albert (Institut Pasteur, Paris, France). All mice were bred in our animal facility in accordance with the standards of the Guide to the Care and Use of Experimental Animals published by the Canadian Council on Animal Care, with the assurance number A5802-01 delivered by the Office of Laboratory Animal Welfare (NIH). The experiments were conducted on 8-12-week-old female mice following protocols approved by the Institutional Animal Experimentation Committee, Facultad de Ciencias Químicas, Universidad Nacional de Córdoba, Argentina (# 907/2015) and by the Institutional Animal Care and Use Committee, USA (# AP007-SPS1-0116).
Preparation of Coa-ASC16-Based Formulations
OVA and/or CpG-ODN were added to a dispersion of 2% (w/v) ASC16 in 5% dextrose solution, heated to 72 °C for 15 min and then left to reach room temperature, as described previously (16).
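A quick way to sanity-check the amounts needed for this preparation is to convert the stated percentages into masses for a given batch volume. The sketch below is a minimal, hypothetical helper, not part of the original protocol; the 2% w/v ASC16 and 5% w/v dextrose values come from the text, while the batch volume is an arbitrary example.

```python
def formulation_masses(batch_volume_ml, asc16_pct_wv=2.0, dextrose_pct_wv=5.0):
    """Return the masses (mg) of ASC16 and dextrose for a given batch volume.

    Percent w/v means grams of solute per 100 mL of solution,
    i.e., 2% w/v corresponds to 20 mg/mL.
    """
    asc16_mg = asc16_pct_wv * 10.0 * batch_volume_ml        # 2% w/v -> 20 mg/mL
    dextrose_mg = dextrose_pct_wv * 10.0 * batch_volume_ml  # 5% w/v -> 50 mg/mL
    return asc16_mg, dextrose_mg

# Example: a 10 mL batch would require 200 mg ASC16 and 500 mg dextrose.
print(formulation_masses(10.0))  # (200.0, 500.0)
```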
Immunization
Mice were subcutaneously injected with OVA and CpG-ODN in solution (OVA/CpG-ODN), OVA and CpG-ODN formulated with Coa-ASC16 (OVA/CpG-ODN/Coa-ASC16), OVA formulated with Coa-ASC16 (OVA/Coa-ASC16) or CpG-ODN formulated with Coa-ASC16 (CpG-ODN/Coa-ASC16). Each mouse was immunized with a total volume of 250 µl equally distributed over 5 sites: the tail base, the back and neck region, and both hind limbs. CpG-ODN was administered at 75 µg/mouse/dose and OVA at 6 µg/mouse/dose in all experiments except for the one shown in Figure 3, where the dose of OVA was reduced to 2 µg/mouse/dose. Two different immunization schemes were used: (1) mice were immunized on days 0, 7, and 14, or (2) mice were immunized once at day 0. At different times post-immunization, blood was collected in heparinized capillary tubes to measure anti-OVA antibody titers in plasma.
An additional group was immunized with OVA formulated with CFA (OVA/CFA) on days 0, 15 and 30.
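To make the dosing scheme concrete, the following sketch computes the per-site injection volume and the final concentrations needed to deliver the stated per-mouse doses in a 250 µl total volume. It is an illustrative calculation only; the function name and structure are ours, and only the doses, total volume, and number of sites come from the text.

```python
def dose_plan(total_volume_ul=250.0, n_sites=5, cpg_ug=75.0, ova_ug=6.0):
    """Per-site volume and final concentrations (µg/µl) so that the full
    250 µl delivers 75 µg CpG-ODN and 6 µg OVA per mouse per dose."""
    per_site_ul = total_volume_ul / n_sites
    cpg_conc = cpg_ug / total_volume_ul   # µg per µl of injected formulation
    ova_conc = ova_ug / total_volume_ul
    return per_site_ul, cpg_conc, ova_conc

per_site, cpg_conc, ova_conc = dose_plan()
print(per_site)            # 50.0 µl injected at each of the 5 sites
print(round(cpg_conc, 3))  # 0.3 µg/µl CpG-ODN
print(round(ova_conc, 3))  # 0.024 µg/µl OVA
```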
Native-PAGE
Formulations were prepared using CpG-ODN and OVA coupled to a near-infrared dye at the final concentrations used for immunization. The CpG-ODN solution contained 5′ IRDye® 800CW-labeled CpG-ODN and unlabeled CpG-ODN at a 1/180 ratio. Before loading, half of the samples were pre-heated at 72 °C for 15 min and then cooled down to room temperature. Samples were loaded in 5% (v/v) glycerol and resolved in a 20% polyacrylamide gel containing no sodium dodecyl sulfate. The gel was scanned using an Odyssey Infrared Imaging System (LI-COR Biosciences).
In vivo Cytotoxicity Assay
Splenocytes from non-immunized syngeneic mice were prepared. Half of the cells were incubated with 10 µg/mL of SIINFEKL peptide at 37 °C for 30 min and then stained with 1.5 µM CFSE (Thermo Fisher Scientific). The remaining cells were stained with 0.15 µM CFSE. Immunized and non-immunized (control) mice were intravenously injected with a 1:1 mixture of these cells (10 × 10^6 of each/mouse). Splenocytes of recipient mice were collected 24 h after transfer, and CFSE+ cells were measured by flow cytometry. Cytotoxicity is expressed as the percentage of lysis, calculated as [1 − (r_control/r_immune)] × 100, where r is the ratio of %CFSE_low to %CFSE_high cells in non-immunized and immunized mice, respectively. This assay was performed in WT, Il6 −/−, Cd8a −/−, and Ifnar1 −/− recipient mice using target cells from each mouse strain, respectively.
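The lysis calculation can be made explicit with a small numerical example. The sketch below implements the ratio-based formula described above; the flow-cytometry percentages used in the example call are invented purely for illustration.

```python
def percent_specific_lysis(pct_low_control, pct_high_control,
                           pct_low_immune, pct_high_immune):
    """In vivo cytotoxicity: r = %CFSE_low / %CFSE_high for each mouse group;
    %lysis = [1 - (r_control / r_immune)] * 100.

    SIINFEKL-pulsed targets are CFSE-high, so their selective killing in
    immunized mice raises r_immune relative to r_control."""
    r_control = pct_low_control / pct_high_control
    r_immune = pct_low_immune / pct_high_immune
    return (1.0 - r_control / r_immune) * 100.0

# Hypothetical example: equal recovery in controls, 80% loss of targets in immunized mice.
print(percent_specific_lysis(50.0, 50.0, 50.0, 10.0))  # 80.0
```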
Splenocyte Preparation and Cytokine Quantification
To prepare splenocytes, single cell suspensions of spleen were treated with lysing buffer (Sigma-Aldrich).
Anti-OVA Antibody Titers
Antibody titers were measured by ELISA following a previously described protocol (16). HRP-conjugated anti-mouse IgG (Sigma-Aldrich), IgG1 (X56) and IgG2a/c (R19-15) (both from BD Bioscience) were used as detection antibodies. The titer was defined as the reciprocal of the last plasma dilution that yielded an optical density (OD) at 490 nm greater than twice the mean value of the reagent blank. Plasma from non-immunized mice was not reactive to OVA.
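As a worked illustration of the endpoint-titer rule (reciprocal of the last dilution whose OD exceeds twice the blank), the sketch below scans a dilution series. The dilution factors and OD values are hypothetical, and the helper is ours, not part of the original protocol.

```python
def endpoint_titer(reciprocal_dilutions, od_values, blank_od):
    """Return the highest reciprocal dilution whose OD at 490 nm exceeds
    twice the blank OD; return 0 if no dilution qualifies."""
    cutoff = 2.0 * blank_od
    positive = [d for d, od in zip(reciprocal_dilutions, od_values) if od > cutoff]
    return max(positive) if positive else 0

# Hypothetical plate: serial dilutions with a blank OD of 0.05 (cutoff 0.10).
reciprocal_dilutions = [100, 400, 1600, 6400, 25600, 102400]
od_values = [1.85, 1.40, 0.75, 0.28, 0.12, 0.06]
print(endpoint_titer(reciprocal_dilutions, od_values, 0.05))  # 25600
```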
Determination of Antibody Avidity
96-well half-area high-binding plates (Greiner Bio One) were coated with OVA (1 µg/well) in 0.1 M sodium carbonate-bicarbonate buffer (pH 9.6) and incubated overnight at 4 °C. Plates were washed with 0.05% Tween® 20-PBS and blocked with 0.5% gelatin-PBS. They were then washed and incubated for 1 h at 37 °C with plasma samples at a dilution that gave an OD value at 490 nm between 1.0 and 2.0 in the standard ELISA. Following another washing step, 50 µl of increasing concentrations (0, 0.5, 1.0, 1.5, 2.0, and 2.5 M) of potassium thiocyanate (KSCN) were added to each row of the plate for 15 min. Plates were washed and incubated with HRP-conjugated anti-mouse IgG antibody (Sigma-Aldrich). Plates were developed by adding substrate (o-phenylenediamine and H2O2), and the OD was determined at 490 nm. The OD values in the KSCN-treated wells were expressed as a percentage of the untreated reference well, as previously described (19).
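To illustrate how the KSCN displacement data are reduced, the sketch below expresses each treated well's OD as a percentage of the untreated (0 M KSCN) reference well, as described above. The OD values are hypothetical and the helper is only an illustration of the calculation, not the authors' analysis code.

```python
def kscn_retention(od_by_concentration):
    """Express the OD at each KSCN concentration as % of the untreated (0 M) well."""
    reference = od_by_concentration[0.0]
    return {c: 100.0 * od / reference for c, od in od_by_concentration.items()}

# Hypothetical ODs at increasing KSCN concentrations (M).
ods = {0.0: 1.60, 0.5: 1.36, 1.0: 1.04, 1.5: 0.72, 2.0: 0.48, 2.5: 0.28}
retention = kscn_retention(ods)
print({c: round(v, 1) for c, v in retention.items()})
# {0.0: 100.0, 0.5: 85.0, 1.0: 65.0, 1.5: 45.0, 2.0: 30.0, 2.5: 17.5}
```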
In vivo Uptake of OVA and CpG-ODN
Statistical Analysis
Data were analyzed using GraphPad Prism 5® (GraphPad Software, San Diego, CA, USA). In experiments with multiple groups of mice, statistical differences between treatment groups were assessed using ANOVA with the Bonferroni post-hoc test for multiple comparisons. For comparisons between two treatment groups, an unpaired Student's t-test was used. Differences were considered statistically significant when P values were <0.05.
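For readers who want to reproduce this style of analysis outside GraphPad, the sketch below shows a rough Python analogue using SciPy: a one-way ANOVA across several groups followed by Bonferroni-corrected pairwise t-tests. The group data are invented and the snippet is only an approximation of the Prism workflow, not the authors' actual analysis.

```python
from itertools import combinations
from scipy import stats

# Hypothetical measurements for three treatment groups.
groups = {
    "OVA/CpG-ODN":           [2.1, 2.4, 2.0, 2.3, 2.2],
    "OVA/Coa-ASC16":         [2.6, 2.8, 2.5, 2.7, 2.9],
    "OVA/CpG-ODN/Coa-ASC16": [3.9, 4.1, 3.8, 4.2, 4.0],
}

# One-way ANOVA across all groups.
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4g}")

# Bonferroni-corrected pairwise comparisons (unpaired t-tests).
pairs = list(combinations(groups, 2))
alpha_corrected = 0.05 / len(pairs)
for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    print(f"{a} vs {b}: p = {p:.4g}, "
          f"significant at corrected alpha {alpha_corrected:.3f}: {p < alpha_corrected}")
```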
RESULTS
The Formulation of OVA and CpG-ODN With the Nanostructure
The Coa-ASC16-based scaffolding containing OVA and CpG-ODN is obtained after a heating-cooling process applied to a mix of three well-defined components (OVA, CpG-ODN, and ASC16) (Figure 1B). To test whether the manufacturing process could promote interactions between OVA and CpG-ODN, solutions of OVA, CpG-ODN, or OVA/CpG-ODN were heated or left unheated and resolved by Native-PAGE after reaching room temperature. As shown in Figure 1C, no aggregation between OVA and CpG-ODN was found after the heating-cooling process.
Formulation of OVA/CpG-ODN With Coa-ASC16 Optimizes Humoral and CD8 + T-Cell Responses Independently of IL-6
We have previously shown that OVA/CpG-ODN/Coa-ASC16 elicits a Th1 cellular response (16), suggesting that it could also induce a CD8+ T-cell response. To test whether the nanostructured formulation was able to induce OVA-specific CD8+ T-cell responses, mice were immunized with a three-dose schedule (days 0, 7, and 14) with OVA/Coa-ASC16, OVA/CpG-ODN, or OVA/CpG-ODN/Coa-ASC16. On day 21, in vivo killing assays were performed. Notably, mice immunized with OVA/CpG-ODN/Coa-ASC16 showed superior cytotoxic activity compared with mice immunized with OVA/Coa-ASC16 or OVA/CpG-ODN (Figure 2A). Apart from direct cytolysis mechanisms, CD8+ T-cells can also orchestrate rapid host protection by secreting cytokines crucial for the activation of both the innate and adaptive immune systems (20,21). In this regard, splenocytes from mice immunized with OVA/CpG-ODN/Coa-ASC16 showed higher IFN-γ secretion compared with those from mice immunized with OVA/Coa-ASC16 or OVA/CpG-ODN (Figure 2B).
Among other cytokines, IL-6 has been widely described as a promoter of the development of cytotoxic CD8+ T-cell (22) and antibody immunity in different adjuvant strategies (23-30). Since Coa-ASC16 enhances the CpG-ODN-induced humoral response (16) and Coa-ASC16 alone (without antigen or CpG-ODN) is sensed by the innate immune system with a consequent local production of high amounts of IL-6 (31), we inquired whether this cytokine played a role in our model. To this end, we compared the antigen-specific immune response elicited by immunization with OVA/CpG-ODN/Coa-ASC16 in Il6 −/− vs. WT mice. The positive effects on CD8+ T-cell and humoral responses induced by Coa-ASC16 were not affected by the absence of IL-6 (Figures 2C-E).
Our previous studies have shown that the magnitude of the OVA-specific humoral (IgG1, IgG2a) immune response of mice immunized with OVA/CpG-ODN/Coa-ASC16 is dramatically superior to that of mice immunized with OVA/CpG-ODN in solution (16). Here, we tested whether the formulation of the CpG-ODN with Coa-ASC16 had any impact on the quality of the humoral response. To do so, we measured the avidity of the antibodies elicited by immunization with OVA/CpG-ODN or OVA/CpG-ODN/Coa-ASC16 and compared them with those generated by OVA/CFA as a model adjuvant system. Notably, the formulation of CpG-ODN with Coa-ASC16 improved antibody avidity up to a level comparable to that achieved by immunization with OVA/CFA (Figure 2F).
In addition, we showed that the nanostructured formulation allows antigen dose-sparing without significantly affecting the antibody or Th1 cellular responses induced by vaccination (Figure 3). This characteristic is particularly important in cases where antigens are difficult to obtain or require high-cost manufacturing processes.
Together, these data demonstrate that Coa-ASC16 improves the adjuvant effect of CpG-ODN with regard to antibody avidity and the CD8+ T-cell response. The humoral and cellular responses elicited by the nanostructured formulation do not require IL-6 signaling.
A Single-Dose of the Nanostructured Formulation Is Sufficient to Induce Robust Humoral and CD8+ T-Cell Immunity
Three-dose regimens for vaccines are expensive and difficult to complete. Therefore, we asked whether a single-dose immunization was sufficient to induce an antigen-specific immune response. To this end, mice were immunized only at day 0. A single dose of OVA/CpG-ODN/Coa-ASC16 elicited early seroconversion, while OVA/CpG-ODN failed to generate OVA-specific IgG. In addition, we measured OVA-specific IgG after intraperitoneal challenge with OVA at day 147 post-immunization. This secondary contact with the antigen showed no significant effect on the humoral response induced by OVA/CpG-ODN/Coa-ASC16. In contrast, mice immunized with OVA/CpG-ODN were able to seroconvert after the secondary challenge but never induced the IgG2c subtype (associated with a Th1-biased response) (Figures 4A,B). In addition, we analyzed the CD8+ T-cell response at day 7 after immunization. Mice immunized with OVA/CpG-ODN/Coa-ASC16 showed cytotoxic activity and IFN-γ production comparable to those obtained with the three-dose schedule, while mice immunized with OVA/CpG-ODN failed to elicit any CD8+ T-cell response (Figures 2A,B, and 5A,B). A supplementary in vivo killing assay was performed in Cd8a −/− mice to demonstrate that the cellular lysis observed in our experimental model was carried out exclusively by CD8+ T-cells (Supplementary Figure 2). Therefore, our results demonstrate that vaccination with the nanostructured formulation is able to induce long-lasting antibody and robust CD8+ T-cell responses with a single dose. We next focused our efforts on the study of the CD8+ T-cell response.
CD8 + T-Cell Response Is CD4 + T-Cell Independent
CD4+ T-cell help is generally required for the generation of a CD8+ T-cell response. However, it has been reported that CpG-ODN can bypass this need and generate a help-independent CD8+ T-cell response (32,33). To address whether our adjuvant strategy has the same ability, we studied the CD8+ T-cell response in OVA/CpG-ODN/Coa-ASC16-immunized mice depleted of CD4+ T-cells. Our findings indicate that this response is help-independent (Figures 5C,D).
IFN-I Signaling Is Essential for the Induction of CD8 + T-Cell Response
Since IFN-I are positive regulators of the CD8+ T-cell response through multiple direct and indirect mechanisms (34,35), we analyzed the CD8+ T-cell response elicited by immunization with OVA/CpG-ODN/Coa-ASC16 in Ifnar1 −/− vs. WT mice. The lack of IFN-I signaling resulted in complete abrogation of the CD8+ T-cell response (Figures 5E,F).
The Formulation of OVA/CpG-ODN With Coa-ASC16 Enhances in vivo Co-uptake of OVA and CpG-ODN by Dendritic Cells
Targeting antigen and adjuvant to the same antigen-presenting cell generally results in potent induction of effector T-cells and is therefore an attractive strategy for vaccine development (36). Considering the robust CD8+ T-cell response displayed by our adjuvant strategy, we speculated that the nanostructured formulation could promote the uptake of OVA and CpG-ODN by dendritic cells (DCs). To this end, draining LN were collected 72 h after immunization with the different formulations. Mice immunized with OVA/CpG-ODN/Coa-ASC16 showed the highest total number of CD11c+ cells (Figures 6A,B). These cells were indeed characterized by enhanced uptake of both OVA and CpG-ODN in comparison with cells from mice immunized with OVA/CpG-ODN (Figures 6C-F). In addition, despite the fact that our system does not chemically link OVA to CpG-ODN, mice immunized with OVA/CpG-ODN/Coa-ASC16 loaded both molecules simultaneously into the same CD11c+ cells more efficiently than mice immunized with OVA/CpG-ODN (Figures 6G,H).
The Formulation of OVA/CpG-ODN With Coa-ASC16 Enhances Expansion and Polyfunctionality of Effector CD8 + T-Cells
The in vivo cytotoxicity assay revealed a robust lysis rate against target cells loaded with SIINFEKL peptide in mice vaccinated with the nanostructured formulation. Here, we analyzed the expansion of SIINFEKL-specific CD8+ T-cells by staining with the SIINFEKL-Kb tetramer, as well as the number of CD8+ T-cells simultaneously producing several cytokines (polyfunctionality), in the different experimental groups. The antigen-specific CD8+ T-cell responses induced by the non-replicative vaccines (OVA/CpG-ODN and OVA/CpG-ODN/Coa-ASC16) were compared with that induced by the attenuated but replication-competent live vaccine, actA Lm-OVA. Mice immunized with OVA/CpG-ODN/Coa-ASC16 presented a higher expansion of SIINFEKL-Kb tetramer+ CD8+ T-cells than mice immunized with the other vaccine models (Figures 7A,B). To analyze the quality of the CD8+ T-cell response, we performed intracellular staining of IFN-γ, TNF-α, and IL-2. Immunization with OVA/CpG-ODN/Coa-ASC16 yielded a higher percentage and total number of (IFN-γ+ IL-2+ TNF-α+) triple, (IFN-γ+ IL-2+) double, and (IFN-γ+) single positive CD8+ T-cells than the other immunization groups (Figures 7C-E; see Supplementary Figure 3 for the gating strategy used). Therefore, our results show that vaccination with the nanostructured formulation causes the highest expansion of polyfunctional CD8+ T-cells, which would be expected to correlate with protection against intracellular infection.
CD8 + T-Cell Response Induced by OVA/CpG-ODN/Coa-ASC16 Protects Against Lm-OVA Infection
To test the protective capacity of the OVA-specific CD8+ T-cell response induced by the nanostructured formulation, we challenged immunized mice with Lm-OVA and analyzed the CD8+ T-cell response in the spleen and the remaining bacterial CFU burden in the liver. In line with the superior expansion of antigen-specific CD8+ T-cells (Figures 8A,B), mice immunized with OVA/CpG-ODN/Coa-ASC16 showed higher numbers of IFN-γ+ CD107a+ TNF-α+ CD8+ T-cells than mice immunized with OVA/CpG-ODN (Figures 8C-E; Supplementary Figure 4 shows the gating strategy used). In concordance with these results, mice vaccinated with the nanostructured formulation remarkably reduced the bacterial load in the liver. Two days after infection, high bacterial loads were recovered from the liver of mice vaccinated with OVA/CpG-ODN, while OVA/CpG-ODN/Coa-ASC16-immune mice had reduced this bacterial burden by ∼2 logs (Figure 8F).
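The "~2 logs" figure refers to a base-10 reduction in CFU, i.e., roughly a 100-fold drop. A minimal sketch of that arithmetic, with invented CFU counts, is shown below.

```python
import math

def log10_reduction(cfu_control, cfu_vaccinated):
    """Log10 reduction in bacterial burden; a value of 2.0 means a 100-fold decrease."""
    return math.log10(cfu_control / cfu_vaccinated)

# Hypothetical liver CFU counts two days after Lm-OVA challenge.
print(round(log10_reduction(5.0e6, 5.0e4), 1))  # 2.0 -> ~100-fold reduction
```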
DISCUSSION
We formulated CpG-ODN with a novel nanostructure with the aim of optimizing CpG-ODN adjuvant activity and used OVA as a soluble protein antigen model. Our previous work proved that this system is an efficient strategy to elicit strong antibody (IgG1 and IgG2a) and Th1/Th17 cellular responses in mice with a three-dose immunization regimen and without toxic systemic effects (16). However, the mechanisms whereby Coa-ASC16 improves the antigen-specific immune response are still not fully elucidated. Based on their mechanisms of action, vaccine adjuvants are classified as immunostimulatory agents, depot systems or vehicles (2,4). In this regard, we have previously shown that Coa-ASC16 promotes antigen retention at the injection site (depot effect) and is sensed by the innate immune system, promoting a transient local inflammatory environment involving the release of Damage-Associated Molecular Patterns and cytokines (IL-1β, IL-6, and IL-12) and the recruitment of innate cells (neutrophils and Ly6C high monocytes) (31). These facts indicate that Coa-ASC16 acts as an antigen reservoir and an immunostimulatory agent. In this study, we demonstrate the ability of this adjuvant strategy to elicit an additional effector CD8+ T-cell response while allowing a reduction of the antigen dose and of the number of immunizations without significantly compromising the antigen-specific immune response. The use of Coa-ASC16 to formulate OVA/CpG-ODN allows the induction of a long-lasting antibody response with a single dose, avoiding the need for boosts, a desirable goal in prophylactic vaccines (Figure 4). Additionally, the nanostructure improves antibody avidity (Figure 2F) up to a level comparable to that reached with the gold-standard CFA, which, however, is also among the most reactogenic known adjuvants and hence unsuitable for human use. Previous studies demonstrated that vaccines with the ability to improve the antibody response have a superior capacity to induce germinal center formation (37-39), a unique lymphoid microenvironment in which antigen-activated B cells undergo class switching, affinity maturation, and differentiation into memory B cells. These studies suggest that the nanostructured formulation could modify several mechanisms essential for germinal center formation, a potential linkage that will be important to investigate in subsequent studies.
CpG-ODN is able to induce CD8+ T-cell immunity only under certain conditions. CpG-ODN is not optimally effective at eliciting this type of response when used in a soluble format. However, CpG-ODN can induce CD8+ T-cell immunity when it is conjugated with the antigen (40) or formulated with different strategies. Here, we report that the formulation of antigen/CpG-ODN (both molecules without conjugation) with Coa-ASC16 improves its ability to induce CD8+ T-cell immunity to a level comparable to other strategies. However, it is difficult to compare the different formulations of CpG-ODN reported in the literature because the amount and type of CpG-ODN and antigen, as well as the mouse strain, differ among studies. For example, the cytotoxic activity of CD8+ T-cells induced by CpG-ODN/Coa-ASC16 with a single dose (Figure 5) is similar to that induced after a single immunization with CpG-ODN formulated in a nanoemulsion (41) or with nanoparticulate CpG-ODN (42), or after two immunizations with CpG-ODN conjugated with an albumin-binding lipid (43). The formulation with Coa-ASC16 has the advantage of being a simple and inexpensive platform.
Regarding the powerful CD8+ T-cell response observed after a single immunization with OVA/CpG-ODN/Coa-ASC16 (Figure 5), we attribute this phenomenon to two main factors. First, the nanostructured formulation clearly elicits a higher antigen-specific CD8+ T-cell expansion than the other formulations, including the live vaccine vector actA-Lm-OVA (Figures 7A,B). Second, the characterization of this response revealed a high degree of polyfunctionality, including IFN-γ+ TNF-α+ IL-2+ triple-positive CD8+ T-cells (Figures 7C-E), which often correlates with better protection against infection. For example, this phenomenon was previously demonstrated in mice vaccinated against T. cruzi using an adenovirus vector (44), and in mice vaccinated against malaria with adenoviral and modified vaccinia virus Ankara vectors (45). In addition, human polyfunctional CD8+ T-cells correlate with protection in HIV and Mycobacterium tuberculosis infection (46-49).
Consistent with these findings, antigen-specific effector CD8+ T-cells induced by OVA/CpG-ODN/Coa-ASC16 showed, after challenge with Lm-OVA, higher efficiency in combating this intracellular bacterial infection than those induced by the other immunization strategies (Figure 8F). Additionally, it has been described that polyfunctional IFN-γ+ TNF-α+ IL-2+ CD8+ T-cells have a higher capacity to survive and provide greater memory protection compared with those that produce only one cytokine (50,51). Therefore, the high number of polyfunctional CD8+ T-cells observed in mice immunized with OVA/CpG-ODN/Coa-ASC16 reveals the potential of this adjuvant strategy for developing memory CD8+ T-cell immunity, the ultimate goal of vaccines. This issue is currently under investigation by our group.
Seeking to explore in more detail the mechanisms by which Coa-ASC16 improves the antigen-specific immune response, we focused on IL-6. This cytokine has been described as a promoter of T-follicular helper cell differentiation and germinal center activation (24) and hence has been reported as a key player in enhancing the humoral response in many adjuvant strategies (25-30). Although we have previously reported high local levels of IL-6 released in response to Coa-ASC16 injection (31), the lack of this cytokine had no effect on the enhanced magnitude or on the antibody isotype switching of the humoral response elicited by the nanostructured formulation with a three-dose schedule (Figure 2E). Yet, its possible role in inducing earlier seroconversion or somatic hypermutation within the germinal center needs further elucidation. Moreover, the role of IL-6 in the improvement of the humoral response in vivo is still controversial. It has been shown that although IL-6 may play a significant role during the early stages of the immune response, its impact on late stages is low (52,53). Other reports even suggest that in certain models the absence of IL-6 can be compensated by other cytokines such as IL-21 and IL-27 (24). In relation to the CD8+ T-cell response, several reports showed that IL-6 plays a role in the promotion of the effector response in viral infections (54) or in vaccination using monophosphoryl lipid A/alum as adjuvant (22). In contrast, we found that IL-6 is not necessary for the induction of the CD8+ T-cell response by OVA/CpG-ODN/Coa-ASC16 (Figures 2C,D). Collectively, the considerable variation among the molecular mechanisms found to be involved in different adjuvant systems highlights the importance of studying each individual system separately.
Although the complete mechanism by which this nanostructured formulation works is still unknown, we speculate that the slow antigen release (31) is the main reason why we observed a similarly long-lasting humoral response and an effector CD8+ T-cell response after reducing the number of immunizations from three doses to one (Figures 4, 5A). A similar outcome was observed when the antigen dose was reduced to one third, demonstrating the dose-sparing ability of Coa-ASC16 (Figure 3).
Importantly, our vaccine model mimics the composition of a subunit vaccine (based on highly purified or recombinant antigens). In order for this kind of vaccine to elicit a CD8+ T-cell response, the nature of the antigen (particulate/non-particulate, conjugated/unconjugated with the adjuvant) and the inflammatory environment in which the DCs encounter the antigen are extremely important. A proper inflammatory environment at the injection site and draining LN provides the signals necessary for the activation/maturation of DCs, and the nature of the antigen impacts the efficiency of antigen uptake and presentation by DCs (55). In particular, for an exogenous antigen like OVA to be presented by DCs to naïve CD8+ T-cells, it must undergo a process called cross-presentation (56). This process, in addition to cytosolic delivery of the antigen into DCs, requires co-delivery of an appropriate adjuvant signal to the same DCs (36). In mice, resident CD8+ DCs and migratory CD103+ DCs have been described as the most efficient cross-presenting DCs. Moreover, pDCs have been shown to cross-present efficiently when stimulated by TLR ligands (57). Cross-presentation can be facilitated by signals deployed by intracellular Toll-like receptor agonists like CpG-ODN and/or cytokines like IFN-I, among others (34,35,58,59). To secure the co-localization of both molecules within the same DCs, several adjuvant strategies involving CpG-ODN have used chemical or physical conjugation between the antigen and the CpG-ODN (40,60). In this work, we found that the nanostructured formulation is able to induce a strong CD8+ T-cell response and that this response requires both CpG-ODN and IFN-I signaling. This is supported by our findings that immunization with OVA/Coa-ASC16 (lacking CpG-ODN) or abrogation of the IFN-I pathway completely impairs the CD8+ T-cell response (Figures 2A,B, 5E,F). The CpG-ODN dependency for exerting a CD8+ T-cell response suggests that the co-localization of OVA and CpG-ODN in the same DCs is important for their licensing and subsequent priming of naïve CD8+ T-cells. In this regard, we have shown that our adjuvant strategy promotes the co-uptake of OVA and CpG-ODN by DCs in draining LN (Figures 6G,H) despite their being delivered in an unconjugated manner (Figure 1C). These results, combined with the efficacy of the CD8+ T-cell response, invite us to elucidate which subset of DCs could be involved in this process and to understand how this nanostructure promotes the co-uptake of both molecules. The characterization of this phenomenon will allow us to confirm whether there is a strict link between uptake, cross-presentation and the optimization of CD8+ T-cell activation (cross-priming). Although these questions remain open, we hope to address them in the near future.
Focusing on IFN-I, it is still unclear whether its production is elicited by the CpG-ODN and/or the Coa-ASC16. Nevertheless, this fact is especially intriguing since we used a CpG-ODN belonging to the class-B family. As a consequence of their structure, class-B CpG-ODN are not characterized by inducing IFN-I, unlike other CpG-ODN families such as class-A CpG-ODN (13). A possible mechanism involved in the induction of these cytokines might be related to Coa-ASC16 itself. It has been reported that self or pathogen dsDNA can activate cytosolic DNA sensor proteins, inducing the production of IFN-I (61). Considering the ability of Coa-ASC16 to induce in vivo death of resident cells at the injection site, with the transient release of Damage-Associated Molecular Patterns such as dsDNA, as previously shown (31), this nanostructure might trigger the secretion of IFN-I through the activation of cytosolic DNA sensor proteins. Yet, the fact that self dsDNA could additionally activate the Toll-like receptor 9 pathway should not be dismissed. Here, we present an adjuvant strategy that efficiently induced an IFN-I-dependent CD8+ T-cell response using class-B CpG-ODN not conjugated with the antigen. This is a desired effect for two main reasons. First, free class-A CpG-ODN aggregates into uncontrolled higher-order structures of different sizes that are unpredictable and hence not suitable for clinical applications (13). Second, the conjugation of a protein with CpG-ODN can disrupt the structure of the antigen, affecting its properties (62).
In summary, these data indicate that the formulation system presented herein strongly optimizes CpG-ODN adjuvant activity, enhancing both humoral and CD8+ T-cell responses quantitatively and qualitatively with a single dose and thus providing a substantive vaccine platform for use with subunit antigens. In addition, this system offers most of the effects desired in a vaccine adjuvant, such as allowing a reduction in the number of immunizations and dose-sparing without compromising the resulting immune response, while offering a simple, biocompatible, and inexpensive system that could easily be scaled up to mass production.
AUTHOR CONTRIBUTIONS
AC and BM designed the experiments. AC performed most of the experiments, analyzed data, prepared figures, and collaborated in manuscript writing. MS, JD, MC, CM, SS, DA, SP, MP-P, and GM contributed to study design, analysis of results, and corrected the manuscript. BM conceived and supervised the study and wrote the manuscript.
FUNDING
This work was supported by grants from the Agencia Nacional de Promoción Científica y Técnica (PICT-MICINN 2011 # 2772 and PICT 2014 # 3497) and the Secretaría de Ciencia y Técnica de la Universidad Nacional de Córdoba (to BM).
"year": 2018,
"sha1": "5676e3606e4820158511ba55ec7508710438fb8c",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fimmu.2018.02319/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5676e3606e4820158511ba55ec7508710438fb8c",
"s2fieldsofstudy": [
"Medicine",
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
The ergonomic design study of artificial intelligence lower limb-assisted brace for the elderly is a new design standard of lower limb-assisted brace for the elderly with mobility problems. Based on human factors engineering, this study tested and analyzed the advantages and disadvantages of human lower limb motion mechanics, human gait motion law, and existing lower limb assisted brace design cases at home and abroad and concluded that the common external assisted method is less man-machine efficient than the internal assisted method. Therefore, a new brace joint rotation curvature, component parameters, and other key information were designed based on the structure of the medial assistance method. With the help of the engineering and scientific analysis methods in human factors engineering, the designed machines and systems are made more adaptable to the physiological and psychological characteristics of human beings. This study explores the interaction between humans and machines and the rationality of their mutual integration, which can effectively avoid repetitive strain injuries and other muscle diseases over time for users in the process of assistance and achieve efficiency, health, and safety. Subsequently, Rhino software was used for digital modeling, physical prototyping, experimental testing, and analysis of the design solution and continuous optimization of the design. At the same time, the perceptual engineering design method was utilized to meet the humanized aesthetic design requirements. The prototype of the design study was finally completed, which is more in line with the evaluation criteria of “human-machine-environment system” than the existing market design in terms of functional rationality, human-machine performance, and human experience. This demonstrates the validity of the design method and is an important reference for the design standard of the lower limb support for the elderly.
Introduction
Since 2014, China's population aged 60 years and above has been growing, the aging process in China has accelerated, and China has now become an aging society. At the end of 2021, China's population aged 60 and above was 267.36 million, accounting for 18.9% of the total, an increase of 3.29 million compared with 2020. Of these, the population aged 65 and above was 200.56 million, accounting for 14.2%, an increase of 9.92 million from 2020. The proportions of the population aged 60 and above and 65 and above in China were 18.7% and 13.5%, respectively, in 2020; in 2021 these proportions increased by 0.2 and 0.7 percentage points, respectively, compared with 2020, and the degree of aging further deepened [1]. As the aging of the elderly population deepens and the cost of rehabilitation treatment for the elderly increases, the operational load of social care institutions such as homes for the elderly, hospitals, and rehabilitation institutions, and the cost of care borne by families, will greatly increase, further adding pressure and resistance to social development.
Lower extremity exoskeletons have been proven efficient at providing highly repeatable and accurate rehabilitation exercise, but the gait trajectories of most existing exoskeletons do not vary with the user [2]. In order to cope with this global social phenomenon and problem, many countries around the world have already developed lower limb assistance braces for the elderly. Along with the continuous research and development of related products, lower limb assistance brace technology has been continuously improved. However, current products on the market tend to be homogeneous and expensive, their design ideas are relatively fixed, suitability design standards have not been effectively established, and the products are not effectively combined with today's new industrial technology capabilities, module integration capabilities, and humanized usage scenarios. This has led to low patient coverage and an inability to fully exploit the functional advantages of this type of product, so that the field cannot develop sustainably and its progress is delayed. At the same level of assistance, a lower limb assistance brace with a more ergonomic structure and better design aesthetics can improve the experience of the elderly; a more reasonable structural design that reduces product deficiencies can give the elderly a more suitable assistance product.
This study digs deeply into user pain points and industry needs and verifies the rationality of the product design ideas from multiple perspectives, including research on the state of the industry at home and abroad, human factors engineering methods, modeling and structural design, suitability design, and computer simulation testing, and proposes new solutions. The proposed design research direction can effectively supplement and enrich the research field of lower limb assistance braces at home and abroad, fill relevant industry gaps, promote the development of the artificial intelligence lower limb assistance brace industry, and become a new development trend and design standard for lower limb assistance braces in the future.
Status of Foreign Research on Artificial Intelligence Lower Limb Braces
Lokomat is a lower limb gait training brace system used to enhance the mobility of the elderly. It consists of a human gravity support, a gait corrector, and a treadmill; the parameters of Lokomat are adjusted to simulate the different physiological gait trajectories of different elderly people with mobility problems and to drive the elderly's lower limbs in walking training (Figure 1) [3]. The ALEXI lower limb assistance brace from the United States uses a nonlinear filter to assist the elderly to walk along a preset trajectory [4]. The HAL lower extremity assist brace from Cyberdyne, Japan, provides gait training and lower extremity walking assistance for the elderly (Figure 2) [5]. The structure of HAL is a lateral traction brace that is strapped to the leg of the elderly and uses biosensors to monitor the bioelectrical signals of the leg muscles to enhance the strength and stability of walking.
The ReWalk lower extremity assist brace is also a lateral traction assist brace to assist the elderly in walking (see Figure 3) [6].
Ekso Bionics is a lower limb assistance brace developed for military, medical, and other fields. Ekso has built-in high-precision sensors, anthropomorphic skeletal joints, micro-drive motors, a powerful central processor, and a drive software system. The inflatable lower limb support brace developed by Panasonic uses soft mechanical principles to support a variety of lower limb muscle movements; compared with traditional lower limb support braces it is also lighter and easier to wear, although it is still some distance from mass production (see Figure 4) [7].
Status of Domestic Research on Artificial Intelligence Lower Limb Power-Assisted Braces
Shanghai Jiaotong University fuses surface EMG signals and interaction forces into a lower limb-assisted brace [8]. Beijing DAI's Ailegs lower extremity assist brace is made of titanium alloy and adopts an assisted structure with external traction, strapped to the leg of the elderly; it can be adjusted in size to fit elderly people of different body types and can bear up to 100 kg of weight. The Shanghai Fourier X1 lower limb assistance brace integrates mechanics, human biosensors, integrated electromechanical drives, gait analysis, and other multidisciplinary science and technology to help the elderly achieve basic assisted functions such as sitting, standing, walking, and going up and down stairs (see Figure 5) [9]. Compared with traditional assist braces, the auto-LEE lower limb assist brace from the Shenzhen Institute of Advanced Technology of the Chinese Academy of Sciences can drive each component independently, with the components working together to assist the elderly to walk without the assistance of other devices [10].
Overview of Human Factors Engineering
Human factors engineering, also known as ergonomics, is an interdisciplinary field involving perceptual engineering, engineering, psychology, physiology, anthropometry, anatomy, environmental science, system science, management, safety science, labor science, and other disciplines. It has been widely applied in many fields.
Human factors engineering takes the human as the core factor and emphasizes prioritizing human needs in engineering and work management. By systematically studying basic human data, such as information on human abilities, behaviors, limitations and characteristics, and the parameters of various behaviors, and by systematically applying these data to the design and manufacture of products, operating procedures, and the environments in which they are used, it studies the interaction between humans and machines and addresses the efficiency of collaboration between them. With the help of robots and intelligent systems, artificial intelligence technology can mitigate physical and human constraints; for example, during human-robot collaboration, robots can use human biomechanical models and habitual motion trajectories to adjust their work patterns and consider a variety of human factors engineering indicators, including joint torque, body posture, mechanical stress, lower limb maneuverability, and muscle fatigue levels. The interaction within the "human-machine-environment system" is optimized so that its operation is compatible with the physiological makeup and psychological needs of human beings. Therefore, human factors engineering research is often accompanied by the application of perceptual engineering. By quantifying human perceptual factors in an engineering way, the relationship between each perceptual quantity and the product design is matched and finally transformed into the physical design elements of the product. Human factors engineering enables human beings to act efficiently, safely, conveniently, healthily, and comfortably under different conditions in activities such as living, working, and leisure by analyzing and adjusting the interrelationships between people, their behavior, and the products, equipment, facilities, procedures, systems, and associated environments they use.
Design Method of Lower Limb Assistance Brace for the Elderly
Gait Detection and Data Analysis of the Lower Limbs of the Elderly
Gait detection and analysis are crucial in the ergonomic study of the lower extremities of the elderly. The lower limb gait of the elderly includes walking, running, going up and down stairs, and other periodic movements, in which the legs alternate and move forward to drive the body forward; the most frequently used movement is walking, in which the legs and limbs need to be highly coordinated with each other. In order to design a fully applicable auxiliary brace for the lower limbs of the elderly, it is necessary to master the various design parameters for gait movements at different levels in different elderly people through in-depth research on the rules of elderly gait movement. The synchronized study of basic human data of the elderly, together with parameters such as the abilities, limitations, and characteristics of elderly gait movement, can provide engineering data to support the design of the lower limb assistance brace for the elderly within a reasonable and suitable range. The whole-body musculature is involved in the gait movement of the elderly, coordinating with hip, knee, and ankle joint rotation and flexion, tilt and rotation of the trunk, and the lateral, longitudinal, forward, and backward movements of the body; gait is thus a complex movement of the elderly body. The three main joints of the hip, knee, and ankle are the basic joints in the gait movement of the lower limbs of the elderly, while the other lower limb joints play a more or less regulatory role.
In the detection of gait motion of the elderly, at the macroscopic level the torso is the main body of the whole movement and the two lower limbs are subordinate; at the microscopic level the thighs are the main body and the lower legs are subordinate, in decreasing order, and they are connected by joints that provide mutual connection and support. The range of activity of each independent joint is therefore calibrated according to the needs of human factors engineering, and the body movement pattern of the elderly is obtained after sampling tests. In the walking process of the elderly, a gait motion cycle starts from the landing of one foot and ends at the next landing of the same foot. The process includes a number of movement phases such as foot landing, single-foot support, single-foot lift-off, and single-foot swing. There is basically no difference in the proportion of each phase among older adults of different genders, ages, and heights. The group first randomly selected a range of sample subjects among different types of elderly people, then continuously observed the samples, objectively recorded walking duration, movement direction, movement speed, displacement path, and other movement data, categorized and analyzed the data, and finally mapped the data into the corresponding three-dimensional spatial range, which reduced the instability caused by individual differences and improved the accuracy of the lower limb brace research work.
The gait study of the elderly uses foot sensors, with which the project team collects the movement trajectory and timing of elderly walking to quantitatively analyze and assess the kinematic and kinetic parameters of the gait cycle, mainly including stride length, step width, cadence, and joint operating angles (see Figure 6) [11]. For example, the normal value of stride length is 1500-1600 mm, the normal value of step width is 50-100 mm, and the rotation angle of the foot joint is 0-7°; these values are used to construct the gait motion model. The regularity, parameters, and periodicity of normal human gait motion are relatively stable, although there are very slight individual differences. However, as the joints of the lower extremities of the elderly age or develop disease, the gait motion characteristics change significantly. According to the principles of human factors engineering, the design process must therefore customize different lower limb brace assistance structures and modes according to the degree of aging of different elderly people's lower limbs, which in turn requires the lower limb assistance brace to allow independent adjustment of its joints.
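Because the brace is meant to be adapted to each user's gait, the normative ranges quoted above (stride length 1500-1600 mm, step width 50-100 mm, foot-joint rotation 0-7°) can be encoded as a simple screening step. The sketch below is a hypothetical helper illustrating this idea; the range values come from the text, while the function names and the example measurement are assumptions.

```python
# Normative ranges quoted in the text (units noted in each key name).
NORMAL_RANGES = {
    "stride_length_mm":  (1500.0, 1600.0),
    "step_width_mm":     (50.0, 100.0),
    "foot_rotation_deg": (0.0, 7.0),
}

def screen_gait(measurements):
    """Flag which measured gait parameters fall outside their normative range,
    e.g., to decide where the brace's joint settings need individual adjustment."""
    report = {}
    for name, value in measurements.items():
        low, high = NORMAL_RANGES[name]
        report[name] = "within normal range" if low <= value <= high else "needs adjustment"
    return report

# Hypothetical sensor readings from one elderly subject.
subject = {"stride_length_mm": 1350.0, "step_width_mm": 85.0, "foot_rotation_deg": 9.5}
print(screen_gait(subject))
# stride length and foot rotation are flagged; step width is within range
```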
Human Factors Engineering Research Method Intervention
The design and development of the artificial intelligence lower limb brace follow ergonomic design principles. Thanks to new industrial technology capabilities and module integration capabilities, the lower limb brace can be made more humanized; that is, the integration and humanization of the artificial intelligence lower limb brace body when used by people is improved.
Using the principles of ergonomics, this paper studies the changes in motion posture, the positions of the support points, and the changes in comfort of people of the same height, size, and weight in various scenarios, and finally establishes a parameter model of dynamic human support. The experimental test data and results show that the most reasonable relative positions between the human body and the integrated support structure components, anthropomorphic joints, hydraulic devices, power motors, and transmission systems can reduce power transmission losses, improve the ability to assist movements, and achieve the most comfortable assisted use.
Most existing artificial intelligence exoskeletons are based on lateral leg support, which leads to excessive restrictions on the range of activities and a poor human-computer interaction experience. Therefore, the artificial intelligence-assisted lower limb development project can carry out innovative research and design in terms of user experience, human-computer interaction, temperature, mechanical structure, control model, power module, new materials, and other aspects. The main technologies are shown in Figure 7.
Modeling and Structural Design.
The modeling design of the artificial intelligence lower limb assistance brace for the elderly needs to meet the functional needs, suitability needs, and potential humanized perceptual needs of consumers (see Figure 8). The research direction is to adopt the medial assistance method. Compared with the traditional external traction assistance method, the assistance can be changed into a form of support assistance. This kind of assistance, similar to an auxiliary unicycle, can effectively share the weight of the elderly when walking, standing, squatting, climbing stairs, and performing other actions, giving users a more relaxed and comfortable experience. It can also prevent the skin compression and impaired blood circulation that long-term use of the traditional structure causes where the brace body contacts the human body.
This structure should not only have advantages in external product function design but also have an excellent sensory visual effect in the product appearance. Compared with the traditional booster structure, the main components are hidden on the inner side of the legs, so the legs cover the main bracket structure, giving a better performance in terms of aesthetics.
In terms of suitability requirements, the structure is made lightweight through bionic design, drawing on animal bones and biological structures while fully considering the rationality of the structure. The weight is reduced by hollowing out some parts, reducing the three circumferences, and changing the combined structure, while the cost of materials can also be reduced, making the product less burdensome for the elderly to purchase and use while still satisfying the lower limb support function and strength requirements.
Moreover, in terms of humanized perceptual needs, according to the principles of perceptual engineering design, compared with the traditional traction assistance method, which uses a machine to control people and tug them forward, this design is more like a tool that can be ridden, which transforms the sense of a tool controlling the person into the psychological implication of the person controlling the tool. The research on artificial intelligence lower limb assistance braces for the elderly needs to make comprehensive reference to the advantages and disadvantages of existing international and domestic cases, as well as the real use experience and feedback of the elderly, to exclude the unreasonable and even hazardous parts of existing designs for the health of the elderly and to make a preliminary screening. In addition, we should fully consider the living habits of the elderly and their habits of using lower limb braces, and design structural and functional solutions according to the gait motion parameters of the elderly in order to meet their basic lower limb assistance needs as well as their other physiological and psychological needs. The main structure and joints are realized by 3D printing technology, and the main mechanical structure consists of the hip seat support, hip joint, knee joint, ankle joint, exoskeleton module, foot module, and other parts (see Figure 9).
Compared with the traditional traction-type and suspension-type wearable lower limb power-assisted braces, a new structure was adopted: using new technology and a highly integrated, humanized design method, a new brace joint rotation curvature, component parameters, and other key information were designed, with Rhino software used for three-dimensional modeling, physical prototyping, experimental testing and analysis, and continuous optimization of the design plan. The design of the brace aims to solve the problems of the user's movement being restricted by a complex and bulky body design, uncomfortable wearing due to the lack of ergonomic design, and the need for the user to overcome the psychological barrier to wearing the brace. The lower extremity brace for the elderly needs to be reasonably configured based on the degrees of freedom and joint drive ranges of the brace, taking into account the degree of matching with the human lower extremity, the joint drive, and system complexity. The driving power demand of the hip joint and the driving travel of the knee joint are large, while the ankle joint is the motion end of the lower limb brace, whose position changes frequently and randomly. Therefore, the hip and knee joint degrees of freedom in the sagittal plane are assigned to the powered drive, i.e., active drive is used for hip and knee flexion/extension; the ankle joint is configured with passive degrees of freedom, i.e., as a flexible passive joint, which maximizes the "human-in-the-loop" advantage, brings into play the human ability to maintain balance, and reduces the difficulty of mechanism design and control. The joint ranges of motion of the human lower limb in horizontal walking and the joint ranges of the booster are shown in Table 1.
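The degree-of-freedom allocation described above (actively driven hip and knee flexion/extension in the sagittal plane, a passive flexible ankle) can be captured in a small configuration structure such as the sketch below. The field names and structure are ours, and the range-of-motion numbers are placeholders only, since the values in Table 1 are not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class JointConfig:
    name: str
    actuation: str   # "active" (motor-driven) or "passive" (flexible)
    rom_deg: tuple   # (min, max) sagittal-plane range of motion; placeholder values

# Hypothetical configuration mirroring the allocation described in the text.
BRACE_JOINTS = [
    JointConfig("hip",   "active",  (-20.0, 110.0)),  # flexion/extension, powered
    JointConfig("knee",  "active",  (0.0, 120.0)),    # flexion/extension, powered
    JointConfig("ankle", "passive", (-15.0, 15.0)),   # flexible passive joint
]

def active_joints(joints):
    """Return the names of the joints that receive powered assistance."""
    return [j.name for j in joints if j.actuation == "active"]

print(active_joints(BRACE_JOINTS))  # ['hip', 'knee']
```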
Suitability Design
From the perspective of suitability design, the modular design of the artificial intelligence lower extremity brace for the elderly can reduce upgrade costs, since functional upgrades only require replacing the relevant modules. Products on the market are limited by earlier technical standards and design ideas, and an integrated structure is used to ensure the functional integrity and reliability of the whole. Modular design can effectively avoid a series of problems brought about by integrated design, such as larger volume and weight, a single mode of assistance, maintenance difficulties, the need to repurchase new equipment, and poor environmental and economic performance. Modular design gives the lower extremity assistance brace the potential for subsequent functional upgrades: richer and more efficient functions can be obtained by replacing or installing new functional modules on the original equipment without having to buy a new product. At the same time, with a modular design, if the equipment is damaged only the corresponding damaged modules or parts need to be replaced, without completely disassembling the body for maintenance; maintenance difficulty is low, the part-to-whole ratio is low, the success rate of repair is high, and the repair time is short, and the elderly can even carry out the replacement themselves according to the manufacturer's instructions.
Lightweight, Integrated.
At present, products on the market use a large number of mechanical structures, power systems, and power transmission systems to drive the support that assists the body. The first of two drawbacks is that the brace itself has a large number of structural components and uses metal or alloy materials to ensure structural strength, so the overall mass is huge. Its power system needs to devote a large share of its energy to supporting its own weight, leaving only the remaining power reserves to assist the body, resulting in low efficiency and poor endurance. Second, the structural components are complex and nested in layers. This limits the angle, amplitude, and distance of the support, and because the body is too large, the range of motion and angles of the human limbs are also limited.
There is a large amount of material redundancy in the middle part of the traditional brace structure, which can be hollowed out to reduce weight and optimize the structural design of the lower limb brace components. At the same time, on this basis, large amounts of aluminum alloy, titanium alloy, carbon fiber, graphene, and other lightweight composite materials are used in the key parts to further reduce the overall mass.
Combined with ergonomic principles, the combination of new materials is improved to simplify the structural components and achieve a highly integrated structural design. Compared with existing research directions, while ensuring the structural reliability of the exoskeleton, this can greatly reduce the self-weight of the lower limb power support, enhance the payload capacity, reduce the burden of wearing, and enhance the flexibility and comfort of wearing.
Modular 3D Fabric Printing Design.
At present, lower limb supports for the elderly at home and abroad mainly aim to meet basic functions while ignoring humanization and the user experience, and they lack ergonomic design, which means that the force-bearing areas and positions of the exoskeleton worn by different users may be unreasonable. The exoskeleton body can then cause wear and tear of the skin in contact with the human body. The body and foot moving parts are therefore designed with modular customization, and 3D printing technology is applied to realize customization to the wearer's body, which can achieve higher conformity of the body-assisting parts from a human factors engineering perspective; the product can be customized according to the user's bone size data, so that the man-machine model can cope with the different needs of the elderly of different genders, ages, body sizes, and degrees of mobility.
In addition, the 3D smart knitting machine can realize the digital molding of various colors, patterns, thicknesses, and other design factors to achieve personalized customization of the bracket component units, maximally satisfying the physiological and psychological needs of elderly users. From the perspective of humanized perceptual engineering design, the elderly will not be satisfied merely with the ordinary assistance mode of traditional products. Helping elderly users with different needs to enhance limb mobility and extend their range and duration of activity can expand intelligent usage scenarios and is equally significant for the development of artificial intelligence lower limb assistance braces and for human society.
Computer Simulation Test
The artificial intelligence lower limb assisted brace for the elderly uses an adaptive control method, similar to a human-machine learning observer, that allows the machine to learn the feedforward kinetic model the system should have. This control method allows the lower limb assisted brace to overcome the damping of the motion process and guide the patient to complete the required movements, gradually reducing the assistive force according to the patient's degree of completion until the brace applies no force at all and the desired movements are completed entirely through the patient's own muscle force, thereby achieving the "suitable walking state" function (see Figure 10). At the same time, this algorithm can also identify the robot's motion model and perform feedforward compensation during the motion. The control software is the core of the whole lower limb assistance brace system for the elderly; it provides three modes for users to choose from: walking mode, stair-climbing mode, and ProStep (automatically sensing the user's body movement to trigger each step). The elderly can choose according to their own situation and assistance progress. At the same time, while walking the product achieves real-time data monitoring through the intelligent brace, and counts and uploads relevant data to assist medical personnel in analysis and use.
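The adaptive "assist only as needed" behavior described above can be sketched as a simple control loop in which the assistive torque starts from a feedforward estimate and is scaled down as the user's own tracking performance improves. This is a deliberately simplified illustration of the idea, not the product's actual control law; all gains, names, and the trajectory are invented.

```python
def assist_torque(desired_angle, measured_angle, feedforward, assist_level, kp=2.0):
    """Feedforward plus proportional correction, scaled by the current assist level (0..1)."""
    error = desired_angle - measured_angle
    return assist_level * (feedforward + kp * error)

def update_assist_level(assist_level, tracking_error, tolerance_deg=3.0, step=0.05):
    """Reduce assistance when the user tracks the target well; raise it when they struggle."""
    if abs(tracking_error) < tolerance_deg:
        assist_level -= step   # user is completing the motion: fade assistance out
    else:
        assist_level += step   # user is falling behind: add assistance back
    return min(max(assist_level, 0.0), 1.0)

# Toy simulation: the controller fades out as hypothetical tracking improves.
assist = 1.0
for desired, measured, ff in [(30, 22, 5.0), (32, 28, 5.0), (34, 33, 5.0), (36, 35.5, 5.0)]:
    torque = assist_torque(desired, measured, ff, assist)
    assist = update_assist_level(assist, desired - measured)
    print(f"torque={torque:.1f} N*m, next assist level={assist:.2f}")
```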
With the help of computer 3D software and related simulation software, the product's shape and structure are adjusted together. Combined with the "suitable walking state" function, the software simulates the changes in movement posture and force state for people of different body types, and for people of the same body type with different strengths, under various movement scenarios. The position, contact area, and morphological changes of the support surface that testers find comfortable under different assistance strengths are tested and recorded, and a model of human dynamic support force parameters is built from these data. Exoskeleton structural components of various sizes and forms were prototyped with reference to this force parameter model and tested. From the experimental test data and results, the most versatile structural parts that meet the comfort and integration requirements of the human body and the exoskeleton can be derived, leading to the optimal design direction and design scheme. Computer simulation improves the efficiency and quality of the design demonstration, shortens the research period, and reduces the cost of research and development. The human-machine simulation experiments using computer software are shown in Figure 11.
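The fitting step mentioned above, building a model of human dynamic support force parameters from recorded comfort data, could look roughly like the sketch below, assuming the recorded data are (body mass, assistance level, comfortable contact force) triples. The linear model form and the sample values are illustrative assumptions only, not the authors' dataset or model.

```python
import numpy as np

# Hypothetical comfort-test records: (body_mass_kg, assistance_level_0_to_1, comfortable_force_N)
records = np.array([
    [55.0, 0.2, 110.0],
    [55.0, 0.6, 180.0],
    [70.0, 0.2, 135.0],
    [70.0, 0.6, 215.0],
    [85.0, 0.4, 210.0],
    [85.0, 0.8, 300.0],
])

# Fit comfortable_force ~ a*mass + b*assistance + c by linear least squares.
X = np.column_stack([records[:, 0], records[:, 1], np.ones(len(records))])
y = records[:, 2]
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
a, b, c = coeffs

def predicted_comfort_force(mass_kg, assistance_level):
    """Predicted comfortable support force for a new wearer (illustrative model)."""
    return a * mass_kg + b * assistance_level + c

# Size structural parts around the predicted value for a given wearer profile.
print(predicted_comfort_force(62.0, 0.5))
```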
Conclusion
Based on human factors engineering, this study analyzed the existing artificial intelligence lower limb braces at home and abroad and proposed a new design method for the brace structure together with ergonomic design standards for lower limb braces. This makes it possible to satisfy humanized design principles for the elderly while meeting the requirements of lower limb gait movement assistance. The comprehensive design methods of ergonomic design research, user analysis, engineering analysis, behavior study, scenario analysis, functional design, structural design, and modeling design of the artificial intelligence lower limb assisted brace for the elderly are demonstrated. Their rationality was verified by computer simulation and experimental prototype testing, providing an important theoretical basis and prototype modeling reference for a new design standard in the field of lower limb assistance braces for the elderly. Through this study, the importance and feasibility of human factors engineering in the field of lower limb assistance braces for the elderly are demonstrated. The future research direction of artificial intelligence-assisted braces for the elderly lies in cross-disciplinary collaboration, supported by the continuous accumulation and transformation of design thinking and high technology. It can be said with certainty that humanized, comfortable, applicable, and convenient designs of lower limb assistance braces based on the ergonomics of the elderly are an inevitable trend. The elderly will therefore have more ergonomic assistance products and be more willing to walk with the help of braces, which can greatly improve the quality of life, work efficiency, and happiness of the elderly group.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The author declares that there are no conflicts of interest regarding the publication of this paper. | 2022-08-07T15:01:51.122Z | 2022-08-05T00:00:00.000 | {
"year": 2022,
"sha1": "1a3638d8d7e75258d3199e0d66f5d4e15bac9a81",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/cin/2022/3304513.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e9a63f333619de991dd4d7a7a112b06de8f8737b",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
252350747 | pes2o/s2orc | v3-fos-license | The empire of the narrative: Plan making through the prism of classical and postclassical narratologies
This article theorizes the “narrative turn” in urban planning studies, using Gérard Genette’s work to differentiate first- and second-degree narratives. Genette defines the latter as paratexts that determine the public’s reception of the former. The article assesses how second-degree narratives work with different perceptual regimes to construct the reception of the political vision of territory. To that end, it resorts to the recent work of postclassical narratology. Indeed, the latter is particularly interested in the way in which the narrative, in various forms, affects its addressee. Postclassical narratology allows us to renew the theory of narrative in urban planning by focusing on what hypothetically happens in the consciousness of the receiver of the narrative when he or she becomes aware of it. Consequently, the paper sheds light on an emerging aspect of the design process: disambiguating signals embedded in urban planning documents intended for a wider public.
Introduction
This article draws on contributions that have looked at the narrative dimension of urban planning over the past 30 years (Mandelbaum, 1991; Throgmorton, 1992, 1996, 2003; Sandercock, 2003; Van Hulst, 2012; Ameel, 2016, 2021). It explores how the narration of planning documents has become essential for urban governance, which is increasingly sensitive to creating attention for its actions (Erwein and Matthey, 2019). We first redefine the urban planning narrative in a more restrictive way based on current debates in the field of narratology. Drawing on a distinction made by Gérard Genette (1982) between first- and second-degree narratives, we identify two narrative regimes in urban planning.
The first regime is conveyed by technical urban planning documents. These are bearers of a cultural, social, economic, and political vision of a given territory which emit signals primarily to a specific audience, such as technicians and professionals (Hopkins, 2001; Hopkins and Knaap, 2018, 2019). In our opinion, the term narrative is most often used here as a metaphor: while they undeniably convey an imaginary, these documents were not drawn up primarily to tell a story (see also below, page 5).
The second regime transforms the technical document into a more intelligible form for a greater number of people to ensure that it is read according to the plan's proposed vision. These second-degree narratives communicate the stakes of planning policies and control the reception of the signal embedded in the plan because it is received by a wider audience than planning professionals.
This article discusses second-degree narratives based on a case study of the Geneva Master Plan. It assesses how these accompanying narratives construct the reception of the plan as a signal. Using a conceptual framework from postclassical narratology (Baroni, 2007;, we show that the narrative in urban planning directs the reception of a political vision of the territory by working with different systems of perception and offering cognitive frames. Therefore, this article renews the Foucauldian approach that has often targeted planning discourses as a contribution to the theory of planning. The narration of plans, instead of the discourse, has become essential now. Unlike discourse, narrative has the advantage of rendering the speaker invisible (Benveniste, 1976: 238-241). It therefore takes the recipient into a relationship, inviting them to perceive a story via different points of view. The narrative has become a communication tool, necessary for urban planning documents because their recipients have become more heterogeneous due to the democratization of the plan's production. New communicational engineering is being instituted to attenuate the noise (in the sense accepted in communication theory) that may accompany the signals a plan emits. In other words, the narrative builds an understanding of the signal.
A blind task: The role of narrative in an economy of attention
From prospective fictions to urban planning storytelling
For many years, urban planning research has been interested in the narrative dimension of a practice (planning) that has long been considered from a strictly technical perspective.
Researchers have tried to show how urban planning's transformative power is largely due to its capacity to articulate not only political fictions (Secchi, 1984) but also prospective fictions (Throgmorton, 1996) around material reality. Research has also shown how the narration of the project does or does not favor its realization (Throgmorton, 1992), generating more democratic approaches (Mandelbaum, 1991; Forester, 1999; Eckstein and Throgmorton, 2003; Bulkens et al., 2015). Other researchers have explored the structure of plots (Walter, 2013; Keunen and Verraest, 2012) that urban planning narratives develop. They identified styles that can facilitate the understanding and sharing of territorial issues. Drawing on the avant-garde work of feminist geography (Hanson, 1997; Valentine, 2008; McDowell et al., 2014), researchers have endeavored to grasp stylistic processes that guarantee a diversity of voices (Sandercock, 2003, 2010; Eckstein, 2003). Using at times ethnographic methods, monographs have also explained how narratives enable the conception of space (Vitalis and Guéna, 2017; Grigorovschi, 2016). Thus, a number of works have dwelled on the narrative skills necessary to construct a sense of inhabited space (Childs, 2008; Dionne, 2018). Here, it is possible to identify a strong influence from Paul Ricoeur's writings on the thinking of urban narrative theorists (Uyttenhove et al., 2021): it is the narrative, inasmuch as it makes it possible to give changes meaning, that is considered here.
Monographs devoted to the reception of narratives (Healey, 1996) have also developed due to the growing interest in the narrative dimension of urban planning. Influenced by research in the field of cultural studies (Morley, 1980, 1992) and literary geography (Hones, 2008, 2010, 2011), researchers have examined the effects of context and place on the way that readers appropriate narratives, sometimes attributing to them a completely different meaning. Further, some studies have focused on the use of narrative as a tool of communication by new urban governance (Duranel, 2019; Ouvrard, 2016). This has renewed the line of research that originated particularly in political science, which is centered around discourse (Hastings, 1999; Jacobs, 2006) and its rhetoric (Fisher and Forester, 1993; Fischler, 2000). The narrative is part of what Caune (1997) has called an "aesthetic of communication" that persuades without resorting to argumentation or the agonistic dimension of debate (Hillier, 2003; Rannila and Loivaranta, 2015; Mattila, 2018). Since the early 2000s, researchers have developed themes while working on language and the contributions of Lacanian theory (Gunder, 2003; Hillier, 2003) to the field of "planning through debate" (Healey, 1992). Hillier (2003) showed, for example, the role of Lacanian theory in reconciling Habermasian approaches and intersubjective compromise with agonistic theories of political space in a dialectical transcendence (Hillier, 2003: 55).
Finally, drawing on work done on the aesthetics of communication, some contemporary monographs have focused explicitly on the use of storytelling in urban planning as a tool for good communication. The public authorities' interest in urban planning storytelling indicates the growing importance of the principles of communicative storytelling that have, hitherto, solely served politics (Matthey, 2014). These approaches consider the narrative as an instrument that facilitates the exercise of power, from the perspective of the neo-liberal turn of public policies (Purcell, 2009), urban regimes (Irazábal, 2009; Lambelet, 2019), or urban branding (Jensen, 2007).
Narrating plans to ensure their reception: Channeling attention to reduce the noise of the plan's emitted signal
The "narrative turn in urban planning" (Ameel, 2021) corresponds to some of the recent planning discourse studies, notably the one by Purcell (2012, 2009). Purcell has shown how neoliberal urban policies use the Habermasian communicative ideal to perpetuate the cultural hegemony of urban powers. However, the transformation of power relations requires radical counter-hegemonic mobilizations (Purcell, 2009) rather than a "communicative turn" (Healey, 1992) in urbanism. Therefore, in the field that is interested in narrative from the perspective of urban power, narrative is perceived differently. Narrative can signify a story that is told by development project owners to promote their acceptance (see Throgmorton's pioneering work). Further, the narrative approach also perceives narratives by plotting urban planning documents and explaining the narrative modalities that they implement, both graphically and from a more literary perspective. Additionally, research has also been conducted on the tropes used in urban planning documents and how they activate major cultural schemas by mobilizing specific literary genres, such as utopia, idyll, or pastoral (Keunen and Verraest, 2012; Uyttenhove et al., 2021).
Few monographs have developed a narratological approach to urban planning documents that allows us to understand how they mobilize different sensitive registers, and how they affect their recipients. This is probably because narrative approaches have paid more attention to the tropes and other stylistic devices employed in plans and other urban planning documents to tell a story, than to the way that story affects its recipient.
Postclassical narratology opens up a promising avenue here. The term "postclassical narratology" refers to the various approaches that have been developing in the field of narratology since the late 1990s. These approaches, which do not disown but renew classical narratology as it was developed in the 1960s-70s around structuralism (Baroni 2017;Patron 2017;Herman 1997), are more sensitive to the (cultural, historical, etc.) contexts in which narratives emerge. They also look at the diversity of media that are likely to convey narratives. Finally, they are particularly interested in the cognitive and emotional dimension of narratives.
It is in this general context that in the mid-1990s James Phelan began developing a rhetorical approach to narrative by drawing on the intentionality of the one he calls the "teller." According to Phelan, intentionality needs to be at the center of the analysis (2018: 47). Its primary aim is to understand how the "'teller' tries to shape narrative elements to specific ends" (2018: 48). It targets in equal measure the "affective, ethical, as well as esthetic effects of a story." This rhetorical approach has methodological consequences. Phelan thus seeks to retrospectively reconstruct the effects produced on a recipient by the various narration techniques deployed in a narrative (2018: 48). We will draw on this method further below.
David Herman (2018) has, for his part, done seminal work on "narrative worldmaking" (2018: 96). He is interested in the "storyworlds" that result from the interaction between the interpretations to which a narrative is subjected and the way in which the narrative itself is a producer of meaning. This interest in storyworlds led him to speak of "narrativization practices" (2018: 99). The narrative organizes sometimes heterogeneous events into more or less coherent worlds. In this sense, it is a cognitive resource (2018: 99-100).
Raphael Baroni goes further still in bringing together narratology and cognitive sciences. In his work, La tension narrative ["Narrative tension"], he focuses on the aesthetic and affective effect that narrative has on its recipient. To this end, he draws on studies in psychology, particularly cognitive and affective psychology. These make it possible to explain how we are "affected" by narrative structures, whose function is to create suspense or curiosity, for example. As we can see, narrative is no longer approached from the point of view of what classical narratology called the "closure of the text" (i.e., explaining the text by the text, within the text). The focus now is on the recipient of the narrative-and his or her experiences.
Finally, the postclassical current considerably broadens the terrains open to narratology. Thus, in 2004 Marie-Laure Ryan put forward the term "transmedial narratology." Transmedial narratology postulates that narrative (1) is not limited to literary texts and (2) is deployed through various media (text, image, sound, etc.) and in various ways (books, films, internet, etc.). Seeking to transcend the boundaries of a narratology deemed to stop at language acts alone, postclassical narratology endeavors to detect narrativity in other forms of expression.
This transmedial approach thus leads to a new definition of narrative that is both flexible and rigorous. In the debate between her and David Rudrum (2005, 2006), Marie-Laure Ryan (2006) has suggested classifying acts of language according to their relationship with a prototypical form of narrative (2006: 193), depending on whether they share one, two, three… or all of the nine traits identified (e.g., "Narrative must be about a world populated by individuated existents," "This world must be situated in history and undergo changes of state" 1 - Ryan, 2006: 193), which attest to a story's degree of narrativity. This open but conditional approach avoids the excesses of another that sees narrative as the result of a social convention, as suggested by David Rudrum. Whatever Rudrum says, narrative does not, Ryan contends, arise from a simple agreement between members of a community about the status of an act of language. Narrative always comes with its own semantics, starting with the inescapable semantics of "telling somebody that something happened" (Phelan quoted by Ryan 2006: 192).
Thus, the approaches promoted by postclassical narratology encourage us to consider the effects that a narrative has on its recipient, particularly from the point of view of its rhetoric. They also invite us to consider how storyworlds are made. They grasp narrative from the point of view of its transmedial manifestations. They are sensitive to different stories' degree of narrativity. For all these reasons, they are part of renewed approaches to narrative in urban planning which we believe make a contribution to planning theory, particularly from the point of view of the "logic of making plans" (to paraphrase a famous work by Hopkins, 2001). This is where this article's contribution to planning theory lies: on the one hand, it helps clarify the levels of narrativity of urban planning stories; on the other hand, it attempts to analyze a type of story that is intentionally produced to facilitate, by means of a simulated world, the adoption of a development plan by the general public.
"What is a narrative in urban planning": A never-ending story?
Commenting on the diverse notions of narrative used in the field of urban planning, Ameel (2021: 4) observed that early narratological approaches in the field of planning were not very rigorous, often using different terms (story, imagination, and narrative) as synonyms. The acknowledgement of narrative's omnipresence and polysemy is, of course, not novel. Planning discourse studies reached the same conclusion nearly twenty years ago (Hastings, 1999; Sharp and Richardson, 2001; Jacobs, 2006), insisting on the profusion and diversity of discourses in urban planning. Recently, Ameel (2016, 2021) has identified different uses of narrative in planning (and related areas) before producing a typology. He identified narratives "for, in and of planning." The first refer to the collective or individual "local narratives" that "precede" and feed into planning. These are distinct from the second type which refer to the narrative found in the "documents or activities" of planning (Ameel, 2021: 13), authored by planners. Finally, "narratives of planning" relate to narratives of planning that has already been conducted.
There is another way of considering the diverse uses of narrative in the field of planning, and more specifically in the context of plan making. It consists in looking at them from the point of view of their chronology ( Figure 1).
Generally speaking, plan making is understood as a process during which a territorial vision is translated into a plan, technical documents, regulatory apparatuses, and political discourses. Increasingly, these plans, documents, and political discourses are accompanied by explanatory comments aimed at the general public, which come in the form of intentional narrative. Narrative is deployed in different ways in this sequence. Ryan (2006: 190) insisted that, from a semantic point of view, a narrative is usually an assertion, that is, the most specific register of the narrative mode. Genette believed that, at the very least, there is a narrative when "a thing that has happened" is "recounted" (1983 [2007]: 302-303). However, narrative in urban planning in its ordinary form is most often prescriptive and proceeds from an imperative mode. In a way, it connotes what must or at least should be done. It is deployed in a document that states what should accompany objectives or measures, particularly regarding a plan. From a pragmatic viewpoint, this plan can be understood as a narrative produced by actors, who agree to give it this status (here, we agree with Rudrum). However, from a semantic viewpoint (here, we agree with Ryan), it is not endowed with the attributes of a narrative in the narrower sense.
Of course, while the initial vision is political and strategic in nature, it can nevertheless feed on a narrative imagination. The latter employs what Marie-Laure Ryan (2018) calls "metaphorical narratives." These are not entirely narrative in nature as they do not meet all the requirements of a story. In other words, their narrativity is weak. They belong more to the realm of imagination, of the "collective values of a culture" (2018: 149). Obviously, this vision can sometimes employ inhabitants' narratives, which have been collected as part of a participatory approach that feeds into and informs the planning process. Similarly, the plan that spatializes this vision can also make use of weak narrativity. Indeed, plan makers sometimes organize the space into a narrative sequence so as to persuade decision makers.
Rather than getting entangled in an endless epistemological conflict, one could posit that a master plan is a first-degree narrative, that is, narrative in a metaphorical sense. These narratives develop from the narration of a story through techniques-such as tables, graphs, maps, diagrams, action sheets, etc.-that are specific to a discipline (planning or urban planning). Quite regularly, this document uses components of other studies and quotes from interviews or political statements set in boxes or headings: the "objective presence of one text in another" (Genette, 1982 [1992]: 8), which is characteristic of intertextuality, "producing significance" (Riffaterre quoted in Genette, 1982 [1992]: 9) and contributing to the "mechanism[s] of influence" (Bloom quoted in Genette, 1982 [1992]: 9).
To satisfy the democratic debate, "second-degree" (1982 [1992]) narratives accompany this first-degree narrative, which they either comment on or complete. These narratives are reworkings of first-degree text, which are more comprehensible to the general public. They also promote the plan's performativity. These second-degree texts are intentionally narrative in nature. They seek to tell someone about something that has happened, is happening, or will happen (to take up the quote from Phelan already mentioned). They are peopled with individuals to whom things happen. They have a "mental life and react emotionally to the states of the world" (Ryan, 2006: 194). Some of "the events… objectively happen in the story world." In short, they combine more levels of Ryan's narrativity scale. In fact, they employ narrative techniques in order to (1) tell the story in a language other than the one used by the experts and (2) bring about a specific effect on the recipient. They allow recipients to appropriate the plan more intuitively, particularly by means of a simulated world. At the same time, they refine the ways in which the plan is incorporated, helping to share a political vision with as many inhabitants as possible.
In fact, their function is often to ensure a reading that conforms to the technical and political vision conveyed by the plan; to emulate a world that conforms to it, employing different perceptive and affectual regimes. From this perspective, these accompanying narratives are part of what many researchers now refer to as an economy of attention (Arpin et al., 2015;Peltola and Tuomisaari, 2015), in the sense that the challenge now is to capture the attention "of a public submerged by proposals… one more attractive than the next" (Citton, 2014). Narrative's immersive experience constructs the plan's reception by projecting its recipient into a future world or by making them experience the problems the plan is seeking to solve. Immersive narrative experience reinforces attention (Ash 2012), which is understood as the alignment of certain dispositions (an openness to a certain reality; Ash, 2012).
It is in this respect that postclassical narratological approaches lead us to reconsider the way in which narrative is deployed in planning theory, particularly in relation to plan making. Contemporary (postclassical) narratology increasingly views narratives from the perspective of its effect on the recipient's sensory apparatus. The narrative creates a "tension" (Baroni, 2007) within its recipient (a "desire to know what happens next"; Bottcher, 2013) and establishes a horizon of expectation. The recipient is immersed in a story. He or she is involved in its understanding. He or she may subscribe to particular perspectives that define the cognitive frame of a given issue. By playing with the narrator's sensibility, the narrative potentially contributes to these apparatuses currently engaged in the communication of urban projects to reinforce their acceptance (Bailleul, 2008). This strategy is currently implemented in conveying urban planning documents, such as plans. It reduces the noise surrounding a plan's signal as soon as its recipients are diversified.
This is precisely what we discuss in this study. We postulate that narrative in urban planning is mainly about the narrative of plans. This narration aims to make plans intelligible for a wider audience. It must reduce the "noise" around the emitted signal. This proposal defines a corpus (the narratives that explain the plans) and legitimizes an interest in taking charge of the signal through the narrative.
Below, we will draw on elements from a long-term investigation, which combines old fields devoted to the communicative turn in Geneva's urban planning (Erwein and Matthey, 2019), and a more recent field centered around the narrative techniques used in Geneva's contemporary urban planning documents.
Making narratives speak through second-degree planning: Corpus and method
Our analysis focuses on the current transformations of the Geneva 2030 Cantonal Master Plan. In Switzerland, a cantonal master plan is a planning document required by the Federal Law on Spatial Planning (LAT). The aim of this document is to coordinate activities with spatial implications (ARE, 2018). It includes different themes, applicable for 20-25 years. It determines the requirements that must be respected during the planning process: spatial conditions, timetable, and organization (ARE, 2018). The whole plan takes the form of a technical document that lists objectives, associates them with measures, and establishes the scope of action (Figure 2). This constitutes the first-level narrative, which is primarily intended for professionals.
The federal government must approve each of the 26 cantonal master plans after lengthy procedures. The first version of the plan is presented to the civil society (especially non-governmental and professional associations) and municipalities for public consultation. This results in a number of comments that are eventually addressed by the technical services of the public administration. The revised document is then submitted to the cantonal parliament, which accepts or rejects it via a vote. Further, the plan is submitted to the Federal Office for Spatial Development, which verifies its conformity with current laws, particularly those concerned with planning objectives, and accepts it either completely or partially with requested revisions. Finally, the plan is forwarded to the federal executive, where the document is ratified. If it is not ratified, then the canton's spatial development is blocked.
Increasingly, this document, which determines a community's future for the following 20-25 years, has triggered strong debates among both citizens and politicians. Let us take, for example, the Geneva 2030 Cantonal Master Plan, which was submitted for public inquiry in the context of a severe housing crisis (the vacancy rate was 0.15%, far from the 1.5% synonymous with a fluid market in Switzerland) caused by very strong economic and demographic growth. Approximately 544 comments from the public inquiry conducted in 2011 were referred to the technical services. The municipalities also had reservations because they were reluctant to welcome new urbanization projects in their region. Therefore, there was a strong possibility that the parliament would reject the document. The cantonal administration noted the need for significant "educational work" (framework, cantonal administration), which had to be done in relation to both elected municipal representatives and all the inhabitants of the canton. The aim was to make people understand the challenges of a cross-border region (Greater Geneva), where the central unit (the canton of Geneva) is an employment hub, while the peripheral areas (France and the canton of Vaud) specialize in housing. At the same time, one had to bear in mind the Franco-Swiss territorial pact signed by the partners of Greater Geneva. Geneva produced housing while measures were being promoted to rebalance the region and stimulate the creation of activities in France.
Figure 2 caption: In its technical form, the master plan mobilizes a synthesis map (top), action sheets (left), analysis and objectives (center), legal framework, and reference studies (right).
Consequently, the need for a pedagogical approach led to the creation of a "general public section," the Genève, Envie ["Geneva, Desire"] brochure, in the Geneva 2030 Cantonal Master Plan's final version. It "presents the issues and objectives of the master plan": This document has been designed to allow a non-specialist audience to understand the underlying objectives of the master plan, which is very technical in its writing and form. This section has not been updated. The brochure also includes a simplified map of the developments proposed by the cantonal master plan and a brief history of urban planning in Geneva.
This brochure (and its by-products) is the subject of our narratological interpretation that assesses the techniques used to direct the audience's attention. The brochure is an example of the "echo" narrative from Genette's typology. 2 It is a narrative that mobilizes, sometimes integrates, and above all comments on the "first-degree narrative" of the master plan. This narrative is called paratextuality; its function is to anticipate the public's reception of a technical document (a master plan, a neighborhood plan, etc.). At the same time, the second-degree narrative is a political option that addresses urban problems (Figure 3). It should be noted that this paratext can take on various forms. It can be directly linked to the text that it comments on, as a preface, annotations, or a text box. It can also be deployed remotely via other media or communication supports (Genette, 1982 [1992]: 11): for example, a promotional film, an exhibition, or an elected official's speech (Figures 4 and 5).
These often implicit commentary relationships between different texts provide a capacity for dissemination that makes the narrative an interesting tool for communicating urban policies. First-degree texts are covered with a thick layer of glosses ( Figure 5), which are at the center of our narratological investigation. The analysis we offer mobilizes elements from a didactic film and accompanying texts signed by the elected officials who are in charge of Genevan territorial planning. All these documents, except the Geneva 2030 Cantonal Master Plan (in its strict sense), are second-degree narratives. Their aim is to explain how an interested but uninformed public should understand the signals emitted by the plan in a language other than that used in the technical documents.
For the purposes of our analysis, we incorporated elements from postclassical narratology (Herman, 1997; Baroni, 2007, 2017; Baetens, 2017) that shed light on the cognitive framing which operates through the story's narration while producing the urban planning narrative. We will reconstruct our material taking our inspiration from the principles of hermeneutic description (Laplantine, 1996), setting out our interpretations at the same time as we describe our corpus (Revel, 1995).
Story/narration/narrative: Creating tension among the addressees
Second-degree narratives are produced by organizing a story using narrative techniques. The tension identified by Genette (1972: 11-20) between the story (what is being told), the narrative (the way the story is being told), and narration (the exposition choices that structure the story) can be found here.
Each of these constituents of what Genette called the "narrative fact" 3 (1983 [2007]: 297) is open to a specific analysis and refers to a particular reception of the text produced. Choosing the story to be told, its plotting, and the resulting narrative are three distinct but complementary moments.
Story
If we try to rephrase the purpose of technical documents (first-degree narrative, in this case the 2030 Master Plan of the Republic and Canton of Geneva) using layperson's language, the following story emerges: a public authority is required to produce 50,000 housing units by 2030 to accommodate the strong economic and demographic growth predicted by various models. The region in question is one of the most dynamic and attractive regions in Europe (UBS, 2021).
In fact, creating 50,000 housing units must avoid an exodus of Genevans to the canton of Vaud (the district of Nyon, in particular) along the French border, where housing is more abundant and less expensive. Importantly, the story that this type of technical document develops is not always the same (Joye and Kaufmann, 1998; Léveillé, 2011). Plans from 1945 to 1949 expressed the necessity to modernize an area that had become international (Canosa et al., 2003). Plans during the 1960s told the story of an area that had conquered its periphery but was concerned about the quality of life (Canosa et al., 2003). History mentions a plot in relation to the region at a given moment in its political, economic, social, and environmental future. This plot goes through a certain number of stages and, from an initial but now-broken equilibrium, moves toward a new stable equilibrium.
Plotting
The narrative becomes that of a personified territory (giving Geneva a protean character). It has known from time immemorial how to "provide intelligent and continuous proximity between built and unbuilt areas," developing like a "starfish" along "historical axes of communication" (République et canton de Genève, 2013: 4). However, this balance is now under threat: "like a starfish, it would not be good if some of its arms were to atrophy, while others were to blister" (République et canton de Genève, 2013: 4). Will Geneva manage to preserve this balance? Various adventures unfold, with just as many chapters and corresponding desires. The brochure Genève, Envie includes eight chapters 4 and offers a narrative and pictorial summary of the challenges presented in the technical document.
Figure caption: Screenshot of a clip about the issues at stake in the Geneva region. Commentary: "Today, thousands of Genevans cannot find housing. To avoid their leaving the canton and increased commuting throughout the area, we need to develop places for living, working, and recreation."
Sometimes, the plot's resolution is deferred, at other times, the quest is fulfilled. Opponents and helpers mobilize to maintain the addressee's attention, but also educate him or her as they determine the rights and wrongs of planning. The story's addressee is progressively intrigued by the future of Geneva and the starfish, being less interested in reading the technical document's objectives and measurement sheets.
Narrating
Narration refers to the engineering (the "mechanisms of the text," its "mechanical" aspect- Genette 1983: 294) that produces a narrative. Narration is that aspect of narratology that Genette refers to as "formal" or "modal" (Genette 1983: 300). It analyzes the narrative as a mode of "representing stories" (Genette 1983: 300) because it focuses on "figures," that is, narrative procedures. Analysis of these processes facilitates the understanding of the "organization of the narrative" (Guillemette and Lévesque, 2016) based on questions of voice, mode, and time. The narrative unfolds from a speaker (voice), adopts a perspective (mode), and plays a role in exposure times (time). Subsequently, it maintains strong links with the story's construction, meaning it is likely to "have a direct influence on the dynamics of the plot" (Baroni, 2017: 82). Narrative devices are, indeed, "textual means deployed by the author to … tie up or untie a plot, arouse suspense, inspire curiosity or surprise the reader" (Baroni, 2017: 83). Narration arouses "curiosity" or "suspense" as it delays information or reveals too much information. Slowly, it draws the recipient into an immersive universe, offering various thematic frames and frames of experience.
In the brochure Genève, Envie, the narrative begins in the middle of things (in media res, as narratologists would say), immediately plunging the addressee into the context of an action. The plot revolves around a disequilibrium between jobs, inhabitants, and housing. The following is a comment by an external narrator on this action.
One hundred thousand Genevans are now under the age of 20. They are the ones that the authorities of this precious region must think about when painting, in broad strokes, the future of our spaces. To avoid their being forced into exile tomorrow, we must offer them places where they can live, work, and enjoy themselves (République et canton de Genève, 2013: 2).
A threat persists that is accentuated in the next paragraph: "Where and how will our children live?" As the story progresses, the narrator describes the scene of the action (an idyllic place: "Geneva has a particularly rich green belt around its urban areas"), introduces its characters (Geneva, the protean territory, the Genevans, the commuters, the tourists, the delinquents, the young, the families, the elderly, and the taxpayers), and plays with the idea of time (everything that Geneva has inherited and all that it has to accomplish). The perspective changes because the territory becomes a character in the story until the narrator provides an overview. Sometimes the narration dwells on details, such as numerical descriptions, that are meant to inform and educate the reader: "parks and green areas … cover 2.5% [of the region]. Housing is concentrated in 23% of the cantonal area, almost half of which is in the villa zone" (République et canton de Genève, 2013: 4). The reader may prefer to skip these passages, but they need this content to discover how the story ends; it seems reading secondary narratives is not as dry as technical documents.
Narrative
Finally, the narrative results from the story's narration (its "product"; Genette 1983: 298). It is a story which is established through a plot and related using various techniques. It requires engineering. For those wishing to understand the renewal of technologies, which facilitates the circulation of power over the city, the process makes the analysis interesting. Analyzing the narrative, taking an interest in the "workings of the plot" (Baroni, 2017), the way in which the narrative establishes "narrative tension" (Baroni, 2017), allows access to the intentions or auctorial stance of the author(s). Thanks to singular techniques, the addressee is affected, the reader may or may not identify with a narrator, the addressee may adopt the narrator's perspective, and the addressee may begin to think according to a particular schema. The explicitness of the narrative's message is one lever among others in the analysis of urban powers. The following section discusses the level of narration more systematically.
Voice, view, rhythm, value: Engineering through attention
Understanding urban-planning narrative as a technology of power requires an examination of the engineering implemented to create the effect desired. The recent update of the instrument of classical narratology to postclassical narratology invites interesting avenues for urban studies. Postclassical narratology has renewed narrative analysis as it focuses not only on the mechanisms of the narrative, but also on the influence it has on the recipient. Therefore, there is an opportunity to identify a series of methodological principles in narration that explain how it affects the addressee.
Giving a voice and framing a perspective
The first principle states that a voice always communicates the narrative and mobilizes one or more perspectives. The question of voice and perspectives has given rise to numerous works in narratology. First, it produced an interest in identifying different types of narrators according to their "relationship to the story" or the narrative "level" 5 (Jouve, 1999 [2017]). Such research identified various functions of the narrator, depending on whether they narrated, organized the text, or communicated additional elements necessary for understanding the plot. Later, it was established that voice cannot be reduced to perspective. François Jost (quoted in Baroni, 2017: 99) proposed distinguishing between focalization and "ocularization/auricularization," that is, between the information stated by the narrator and information that the audience gets through the characters' sensations. Subsequently, others have identified the need to differentiate between the "voice that narrates … [and] consciousness that perceives" (Jouve, 1999 [2017]). Therefore, a narrative mobilizes a "perceptual or cognitive focus." The "management of information that makes it possible to understand a narrative situation" (Baroni, 2017: 100) underpins it. Sometimes, a narrative comes from an external and objective perspective that states facts, lists problems, and describes solutions (when the master plan is considered a technical document). Other times, it comes from a narrator who is also the author (e.g., the testimony of an inhabitant). It may also originate from an omniscient narrator who orchestrates different perspectives (the narrative of the great project grasped by the communicators). Finally, the author sometimes plays on the confusion between author and narrator or becomes a character of their narrative (e.g., the narrative of a project developed by a designer during an urban design competition).
Each of these foci causes a specific reception because it plays on the recipient's space. This causes the addressee to frame the plot in an expected way. Therefore, identifying instances of narrative fiction and analyzing the relationships that may exist between the instances of fiction and real people become a possible lever of critical hermeneutics relevant to urban planning. For example, a close analysis of the relationships that are woven between the narrator and recipient allows the researcher to sketch the recipient's contours, understand what this reader is supposed to know, what the reader is likely to be sensitive toward, and what touches and moves the reader. The recipient of the Genève, Envie brochure is a middle-class urbanite. They are attached to Geneva, knowing its implicit cultural and economic history. They have difficulty finding a home that matches their residential aspirations. They do not question the region's economic attractiveness. Life in suburban areas appears to be the worst, unless a polycentric model is developed.
Pacing
The second principle signifies that the narrative always carries out work on time. This necessitates an analysis of the "relations between the time of the story (measurable in centuries, years, days, hours, etc.) and the time of the narrative (measurable in numbers of lines or pages)," that is, the "time told" and the "time taken to tell" (Jouve, 1999 [2017]: 43). Different questions have allowed narratologists to capture various uses of time in a narrative. Generally, these questions seek to understand "when" and "at what pace" a story is told, and "how often" and in "what order" the events are narrated.
In urban planning, the narrative focuses on narrating future history (a world organized around the principle of a "compact, multipolar, and green Geneva" [République et canton de Genève, 2013: 8]), a near or distant past (a Geneva that has preserved nearly 50% of its area as an agricultural zone), or a contemporary event (a Geneva without housing, a situation that is driving out its children). The narrative can linger on certain details that the narrator, but not necessarily the reader, finds important (providing figures and recounting all the steps in an embedded narrative that sets out a master plan). It can also use ellipses, speeding up certain sequences that the reader would perhaps like to see developed to understand better (it is satisfied with announcing the scope of major projects rather than explaining how each scope was determined). The narrative can recount the same event one or more times from different perspectives (from the perspective of an omniscient narrator and then from that of a character with limited rationality), be told by different narrators (a resident's perspective may differ from that of an elected official or an investor), or use different modes of narration (an event can be recounted in the text or in the form of boxes, headings, or images). Finally, the order in which the constituent events of a story are told is a powerful storytelling agent. The story can begin with the end ("Today, thousands of Genevans cannot find housing") or go back to the causes (lack of "places for living, working, and recreation"; République et canton de Genève, 2018). It can follow an external and objective chronology that suggests a long and inevitable decline.
The spirit of the laws originally intended to protect citizens from the effects of excessive rent variations has been perverted. Low rents do not necessarily benefit people on low incomes. … young households, working families, are the first to suffer from this evolution of things. (République et canton de Genève, 2013: 18-19) This can only be salvaged by the solutions promoted by a development plan: "to house the 100,000 children currently residing in Geneva, we need to build sufficient housing over the next 20 years" (République et canton de Genève, 2013: 22).
In short, the time taken to narrate a plot's sequences signifies attentional pace that a narrative encourages. This information makes it possible to understand what the narrator considers useful for the addressee. However, the narrative does more than direct attention; rather, it organizes a hierarchy of territorial values.
The "differential valence" 6 of perspectives The third principle signifies that the narrative organizes values. The early narrative semiotics put forward different models (Proppian model, actancial model, etc.) offering a pathway to the "vision," "values," and "intention" (Jouve, 1999(Jouve, [2017: 64) embedded in a narrative. In a narrative, some "actants" hinder the plot's resolution (the opponents) and thwart the quest, while others promote resolution (the helpers). These actants are often associated with positive or negative values. Focusing on their characterization facilitates the understanding of the system of values and their meaning conveyed by the narrative. According to narratology, it is up to "thematic analysis" to understand the "preferential axes" (Jouve, 1999(Jouve, [2017: 64) mobilized by a narrative and the fields it prioritizes. Similarly, interest in the "narrative grammar" or "narrative program" that fuels a narrative allows dealing with its "value effect," that is, the way it "conveys an ideology and transmits it" (Jouve, 1999(Jouve, [2017: 64).
The principle of a "compact, multipolar, and green urban area" in the Geneva Constitution becomes, in the Genève, Envie brochure, the sign of a "desire for the city" animating the local population (République et canton de Genève, 2013: 8). Compactness, a helper which should facilitate the arduous task of creating 50,000 housing units by 2030 while preserving the cramped region's countryside, is then associated with "what is best in humans": "civility, urbanity, city, citizenship" (République et canton de Genève, 2013: 8). Multipolarity, another helper that balances the Genevan territory, prevents its center from being "deserted at night and on weekends, except for a few tourists and illicit activities." It preserves inhabitants from an "absurd daily migration on asphyxiated transport routes between center and suburbs-these banished places, these non-places between town and country" (République et canton de Genève, 2013: 8). A twilight world, which is the old model of an active center and a periphery designed only for sleeping, emerges in the background. It characterizes the values associated with the reversal of the proposals of the Master Plan of the Republic and Canton of Geneva. The narrative opponents are individual housing, mobility through cars, and residing on the periphery of the Genevan territorial system. In another narrative, these narrative opponents could have been described as rural idyll locations for residences outside the city.
In fact, analyzing the differential valence of perspectives is essential because the narrative, which aims to create narrative tension in order to develop a plot and arouse curiosity or build suspense, seeks to be immersive. The addressee is immersed in the plot; the addressee's attention follows the proposed cognitive schema; the addressee gradually moves toward a possible world rather than in a different direction as a result of the different choices made by the author or auctorial authority. Therefore, second-degree narratives in urban planning can be seen as mechanisms that aim to draw attention and open new fields for planning theory, more specifically, the critical analysis of immersive urban communication devices.
Making second-degree urban planning narratives talk
This contribution focuses on second-degree narratives of planning documents and how they are part of an economy of attention aimed at identifying a specific narrative regime. These narratives are particularly revealing as they widen the circle of addressees and frame the reception of technical documents. We have thus observed the processes of defining a story, setting up a plot, and narrating the story, which are based on intention. These narrative techniques used during narrations provide access to cognitive framing strategies, create tension among the addressees, and construct a horizon of expectation that, to our knowledge, can facilitate analysis in the field of planning. Therefore, the approach proposed in this article is likely to shed new light on the numerous studies that have been interested in planning discourse as a modality for both the exercise of power and neo-liberal marketing. More significantly, this approach allows for a better understanding of certain contemporary aspects that participate in producing plans. The appearance of second-degree narratives is a way of reducing the noise of signals emitted by first-degree documents.
Second-degree narratives are more than just narratively stated speech. They are also more than communication narratives. Discourse lies within the space of rhetoric. It aims to persuade by using codified figures of speech, in a space that remains inter-subjective. Such narratives tend to forget the political space of debate by immersing the addressee or narrator into the story. Symmetrically, the communication narrative invites the addressee to experience content through a story with a meaningful moral. A reading pact, stated at the narrative's beginning (I will start by telling you a story), generally identifies the simulation that is offered to the addressee. As we have tried to show, second-degree narratives often proceed by creating an ellipsis of the reading pact, which is often rendered implicit. The planning document narrative does not say that it is a narrative. It tells a lived reality from a given point of view, usually that of a narrator. It diminishes the possibility of a debate.
Narratives in urban planning can be understood as a tool for those who wish to extend the project of the archaeology of power (Foucault, 1976). Second-degree narratives shape the reception of information and direct the addressee's attention toward one spatiotemporal dimension as opposed to another. At the same time, they provide a cognitive framework for addressees. Consequently, the use of narrative in urban planning becomes a new technology of power, this deduction being in line with some of the conclusions that propose the deployment of a Foucauldian apparatus to analyze planning discourses (Fischler, 2000).
The main challenge, however, is to use this apparatus to analyze the narratives that city builders produce in order to support a pedagogy surrounding a project. Second-degree narratives are a tool used to shape the reception of urban projects as desired by local authorities (Lambelet, 2019).
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. | 2022-09-18T15:07:26.341Z | 2022-09-16T00:00:00.000 | {
"year": 2022,
"sha1": "204afac853fbb20e64894474aba36669dc902129",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1177/14730952221125174",
"oa_status": "HYBRID",
"pdf_src": "Sage",
"pdf_hash": "38b5a4c674341472dfa9c31133ffe0fccda1eca3",
"s2fieldsofstudy": [
"Art"
],
"extfieldsofstudy": [
"Medicine"
]
} |
245591057 | pes2o/s2orc | v3-fos-license | Melt Spinning of Flexible and Conductive Immiscible Thermoplastic/Elastomer Monofilament for Water Detection
In many textile fields, such as industrial structures or clothes, one way to detect a specific liquid leak is the electrical conductivity variation of a yarn. This yarn can be developed by melt spinning of Conductive Polymer Composites (CPCs), which blend an insulating polymer with electrically conductive fillers. This study examines the influence of the proportions of an immiscible thermoplastic/elastomer blend on its processability and its water detection. The thermoplastic polymer used for the detection property is polyamide 6.6 (PA6.6) filled with enough carbon nanotubes (CNT) to exceed the percolation threshold. However, the addition of fillers decreases the polymer fluidity, making the CPC difficult to process. Blending with an immiscible elastomer, a propylene-based elastomer (PBE), makes it possible to increase this fluidity and to create a flexible conductive monofilament. After characterizations (morphology, rheological, and mechanical) of this blend (PA6.6CNT/PBE) in different proportions, two principles of water detection are established and demonstrated with the monofilaments: the principle of absorption and the short circuit. It is found that the morphology of the immiscible polymer blend plays a significant role in the water detection.
Introduction
In the last decade, the development of new detectors in the smart textile field has been increasing. They are used to monitor physical parameters such as temperature variation [1], stress and deformation [2] in sports or healthcare, or the presence of liquids and gases [3][4][5][6][7] in industry and environmental protection [8]. This smart textile technology is based on the variation of the electrical conductivity of the material in interaction with its environment to detect and transmit data. In the field of monitoring, more and more construction industries use composites to reinforce their concrete structures, for instance retention tanks. Concrete is a brittle material, and its cracking can cause fluid leakages that are sometimes dangerous for the environment (pollutants). Smart textiles can be incorporated inside the composite resin or concrete to monitor the stress and deformation of the structure and detect a fluid leakage in case of damage. Consequently, more and more research develops intelligent composite membranes [9][10][11][12]. The intelligent composite membranes are composed of two linked parts: the matrix, which is the resin or the concrete, and the reinforcing textile structure, which includes smart filaments that detect problems.
To detect fluids with smart textiles, different technologies that are based on the electrical conductivity variation of the textile in contact with the fluid can be employed. The most commonly studied in the literature are the intrinsically conducting polymers (ICPs) [13][14][15] or the conductive polymer composites (CPCs) [16][17][18]. They have the particularity of modifying their electrical conductivity according to the affinity of the polymer with the fluid [18][19][20]. They can be presented in the form of films [21], monofilaments [22], multifilaments [6,20], or coating [23]. Regarding the ICPs, the detection of fluid results in the
Conductive Yarns
The monofilaments are composed of a blend of a thermoplastic polymer with fillers and a propylene-based elastomer (PBE).
The first blend is the thermoplastic polymer with 3 wt.% of fillers. The thermoplastic polymer is the polyamide 6.6 (PA6.6) TORZEN U4803 NC01 produced by Invista (Wichita, KS, USA), which has a melting point of 263 °C. The fillers are multiwalled carbon nanotubes NC 7000 (MWCNT) supplied by Nanocyl (Sambreville, Belgium). These MWCNTs have an average length of approximately 1.5 µm, a diameter of 9.5 nm, and a specific surface area of 250-300 m²/g. The polyamide 6.6 has a moisture regain of about 4% [46]. Thanks to this affinity of PA6.6 for water, this blend provides the detection function through its electrical conductivity variation.
In the second blend, the PBE is added to the PA6.6 3CNT to increase the fluidity of the blend and, thus, to facilitate the compound preparation [47]. The PBE also makes the detection yarn flexible so that it does not break when the resin cracks. The elastomer employed is the VISTAMAXX 3000 supplied by ExxonMobil Chemical (Houston, TX, USA).
Compounds Preparations
Two successive extrusions are carried out to obtain the detector yarns. First, 3 wt.% of MWCNT are incorporated and dispersed in the PA6.6 (PA6.6 3CNT) with a co-rotating intermeshing twin-screw extruder from Thermo-Haake PTW 16/25p (barrel length = 25:1 L/D). The second step adds different percentages of PBE (from 0 to 50 wt.%) to the blend (Table 1). This second extrusion uses the Process 11 Parallel Twin-Screw Extruder from Thermofischer (Waltham, MA, USA) with a barrel length of 40:1 L/D. The processing conditions are based on the study of Javadi Toghchi et al. [43], who had already worked on the extrusion of the PA6.6 3CNT. The rotating speed of these extruders is 100 RPM, and the temperature profiles are reported in Table 2. Before each extrusion, the polymer pellets are dried at 80 °C for 16 h.
Table 2. Extrusion temperature profiles (°C):
PA6.6 3CNT: T1 260, T2 270, T3 275, T4 275, T5 280 (T6-T8 not used)
PA6.6 3CNT/PBE: T1 215, T2 275, T3 285, T4 285, T5 278, T6 275, T7 270, T8 270
The monofilaments have a diameter of approximately 1.5 mm ± 0.07 mm. For the Transmission Electron Microscopy (TEM) images, the samples are prepared by ultramicrotomy to obtain clean and flat surfaces. The different polymer phases are distinguishable thanks to the CNT present in the PA6.6.
Rheological Properties Characterization
The Melt Flow Index (MFI) characterizes the flow ability of a polymer; more precisely, it is the weight of polymer flowing in 10 min at a given temperature. In this study, the test is executed on the melt flow tester from Thermo-Haake at a temperature of 270 °C and under a load of 2.16 kg, according to the ISO 1133 standard. Before the test, the polymer pellets are dried at 80 °C for 16 h.
Mechanical Property Characterization
The elongation at break of the monofilaments is measured on an MTS Criterion tensile bench from MTS (Minnesota, USA). The tests are carried out with an initial length of 150 mm, a speed of 500 mm/min and a pre-load of 5 N. They are executed in a controlled and conditioned atmosphere of 65% relative humidity and a temperature of 20 °C. Five measurements are performed for each blend to validate the results; the standard deviation is about 20%.
Water Detection Methods
To validate the results and the repeatability of the protocols, all the water detection tests are carried out ten times for each monofilament, conditioned at a room temperature of 20 °C and a relative humidity of 35%. Since the detection mechanisms depend directly on the ionic conductivity of the demineralized water, this parameter is controlled before each test with a conductimeter (Tacussel electronic type cdrv 62) and a conductivity standard solution of 1413 µS/cm at a temperature of 25 °C (KCl, Fischer Scientific, Waltham, MA, USA). The surface tension of the water is also monitored, measured with the tensiometer "3S Scales" from GBX Instruments. The demineralized water has an ionic conductivity of about 4.7 ± 0.9 µS/cm and a surface tension of about 71.8 ± 3.2 mN/m. For all the electrical conductivity tests, the monofilaments are connected to a Keithley 2461 SourceMeter (Beaverton, OR, USA). This device measures the current intensity while applying a voltage ranging from −0.5 V to 15 V with an increment of 0.1 V. Using the current measured as a function of the applied voltage, the electrical conductivity over a length of 10 cm is determined by Equation (1); the inverse of the resistance is the slope of the linear trendline fitted to the current-voltage data between 0 and 12 V.
σ = L / (R × S)    (1)
where σ is the electrical conductivity of the system (S/m), L is the monofilament's length (L = 0.1 m), S is the monofilament's cross-sectional area (m²) and R is the measured resistance (Ω).
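As a purely illustrative sketch of this measurement chain (not code from the study; all numeric values and names are placeholders), the following Python fits the linear region of a hypothetical current-voltage sweep and applies Equation (1) to an assumed monofilament geometry:

import numpy as np

# Hypothetical current-voltage sweep (placeholder values, not measured data).
voltage = np.linspace(0.0, 12.0, 121)          # V, linear region used for the fit
current = voltage / 2.0e6                       # A, idealized ~2 MOhm monofilament

# Slope of the linear trendline I = (1/R) * V gives the inverse of the resistance.
slope, intercept = np.polyfit(voltage, current, 1)
resistance = 1.0 / slope                        # Ohm

# Equation (1): sigma = L / (R * S), assuming a circular cross-section.
length = 0.10                                   # m, gauge length of 10 cm
diameter = 1.5e-3                               # m, approximate monofilament diameter
area = np.pi * (diameter / 2.0) ** 2            # m^2
conductivity = length / (resistance * area)     # S/m
print(f"R = {resistance:.2e} Ohm, sigma = {conductivity:.2e} S/m")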
The short circuit principle is based on the creation of a conductive path between two parallel monofilaments through the drop of water (Figure 1a). The electrical signal is detected when the water bridges the two electrically conductive filaments. To compare the different proportions of PBE in the blend, the conductance is calculated from the measured resistance (Equation (2)). A 3 × 3 cm square of absorbent paper is placed on the parallel monofilaments to deposit the drop of water and make the test repeatable. With the absorbent paper, the spreading of the drop is controlled, eliminating the influence of the drop shape and of absorption problems that could otherwise vary the conductance of the circuit:
G = 1 / R    (2)
where G is the conductance of the circuit (S) and R is the resistance of the circuit (Ω).
The principle of detection by absorption (Figure 1b) is based on the modification of the resistance of the yarn when it is soaked in water. The electrical conductivities of the dry and wetted monofilament are calculated with Equation (1). To observe the influence of the PBE percentage on the detection, the detector sensitivity (Sw) is calculated as the relative change in electrical conductivity between the dry and the wetted monofilament (Equation (3)):
Sw = (σw − σd) / σd × 100    (3)
where Sw is the detector sensitivity (%), σd is the dry electrical conductivity (S/m), and σw is the wetted electrical conductivity (S/m). To find the optimal formulation for the CPC detector filament, a figure of merit is defined from the mechanical and detection properties (Equation (4)), where F is the figure of merit (%), e is the elongation at break (%) for the mechanical property, and Sw is the filament sensitivity to water (%).
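For illustration only (placeholder numbers, not results from the study), the lines below apply Equation (2) to a hypothetical circuit resistance and compute the water sensitivity of Equation (3) under the relative-change form assumed above:

# Short-circuit principle, Equation (2): conductance of the circuit bridged by the drop.
resistance_circuit = 7.0e6                  # Ohm, hypothetical measured circuit resistance
conductance = 1.0 / resistance_circuit      # S

# Absorption principle, Equation (3), assuming Sw = (sigma_wet - sigma_dry) / sigma_dry * 100.
sigma_dry = 2.6e-3                          # S/m, hypothetical dry conductivity
sigma_wet = 3.5e-3                          # S/m, hypothetical wetted conductivity
sensitivity = (sigma_wet - sigma_dry) / sigma_dry * 100.0
print(f"G = {conductance:.2e} S, Sw = {sensitivity:+.1f} %")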
Morphology of the Blends
The examination of the SEM and TEM images is important to validate the different assumptions about the blends' behavior. By hypothesis, the continuity of the PA6.6 3CNT phase is influenced by the blend proportions, and the localization of the CNT in the PA6.6 is favored by splitting the compounding process into two extrusions.
The TEM image of the PA6.6 3CNT confirms the good dispersion of the CNT (black in Figure 2a) in the PA6.6 polymer (lighter background in Figure 2a).
Regarding the PA6.6 3CNT 90/PBE10 blend, the SEM image (Figure 2b) reveals a nodular morphology of the PBE in the PA6.6 3CNT. Moreover, no migration or aggregation of the CNT into the PBE or at the interface between the two polymers is detected in the TEM image (Figure 2c). This kind of morphology and this CNT localization are confirmed in other studies [46,48,49].
Regarding the PA6.6 3CNT 60/PBE40 blend, the SEM image (Figure 2d) indicates a fibrillar morphology of the PA6.6 3CNT in the PBE. The addition of 40 wt.% of PBE in the blend therefore leads to a phase inversion (Figure 3): PBE becomes the majority phase and the CPC matrix, which modifies the monofilament properties. Moreover, the TEM image (Figure 2e) shows that the CNT (black) have not migrated into the PBE (lighter color) or aggregated at the interface, as for the PA6.6 3CNT 90/PBE10 blend.
Rheological Properties
The influence of the proportions of CNT and PBE on the rheological properties is investigated through the Melt Flow Index (MFI) analysis (Figure 4). The CNT decreases the blend fluidity, whereas the PBE increases it.
The addition of PBE does not improve the fluidity of the blend without fillers: 73 g/10 min for the PA6.6 and 66 g/10 min for PA6.6 blended with 50 wt.% of PBE. However, it compensates for the decrease of fluidity caused by the CNT: from 9.5 g/10 min for the PA6.6 3CNT to 26 g/10 min for the PA6.6 3CNT 50/PBE50, so the fluidity of the filled blend increases with the proportion of elastomer. It is therefore found that the CNT increases the viscosity of the blend, contrary to the PBE, which increases the fluidity of the polymer blend. The addition of CNT creates more links between the fillers and, thus, reduces the mobility of the macromolecular chains of the PA6.6 [43,48]. The PBE is thus added to overcome this high viscosity and to improve the processing of the blend by melt spinning [27].

Mechanical Property

The elongation at break is also studied to observe the influence of the fillers and the elastomer on the blend (Figure 5). The addition of CNT in the blend only slightly decreases the mechanical property [49], contrary to the PBE addition. The PBE increases the elongation at break of the filled blends up to the phase inversion.

Regarding the unfilled blends, the addition of 10 wt.% of PBE increases the monofilament elongation by 23%. Beyond 10 wt.%, the weak interfacial cohesion between PA6.6 and PBE, whose influence grows with the PBE proportion, leads to a decrease in elongation.

Regarding the filled blends, the elongation at break of the monofilaments with less than 30 wt.% of PBE varies only slightly, from 16 to 20%, while above 30 wt.% of PBE the mechanical property abruptly decreases, to 4%. This variation can be correlated with the morphology: below 30 wt.%, the nodules of PBE allow a larger elongation of the monofilament before the break. The interfacial area between the two polymers has a low cohesion; therefore, with a high percentage of PBE, the interface between the PA6.6 3CNT fibrils and the PBE increases and causes the premature break of the monofilament. This result is confirmed by the study of Qiu et al. [50] on a polyamide 6/polyolefin elastomer blend: the tensile strength decreases with the addition of elastomer in the blend, which they explain by the morphology of the blend and the lower tensile strength of the elastomer compared with polyamide 6. These hypotheses are also verified in other studies on the influence of an elastomer on polycarbonate/CNT blends [45,51].
Electrical Properties

The initial electrical conductivity depends on the morphology of the blend: the localization and the dispersion of the CNT and the morphology of the polymer phases.

The electrical conductivity decreases as PBE is added to the blend (Figure 6): from 1.2 × 10⁻² S/m to about 2.6 × 10⁻³ S/m. With a nodular morphology, the dry conductivities remain of the same order: 2.6 × 10⁻³ S/m and 1.5 × 10⁻³ S/m for 10 wt.% and 30 wt.% of PBE, respectively. However, the dry conductivity drops abruptly to 2 × 10⁻⁷ S/m when the percentage of PBE exceeds 30 wt.%. The phase inversion between 30 and 40 wt.% increases the interparticular distance and, thus, decreases the conductivity.

Principle of Short Circuit

The short circuit's signal depends on several parameters: the distance between the two parallel monofilaments and the water properties, which are both fixed, as well as the monofilaments' properties. In this study, the conductance of the short circuit depends on the dry conductivity of the monofilament and, thus, on the proportion of PBE in the blend (Figure 7). The detection signal is better when the dry conductivity of the monofilament is high and, therefore, when the proportion of PBE in the blend is small. The signal goes from about 1.4 × 10⁻⁷ S without PBE to 8.3 × 10⁻⁹ S with 30 wt.% of PBE in the blend, and drops to 1.7 × 10⁻¹² S with 50 wt.% of PBE.

Principle of Absorption

The water detector sensitivity (Sw) of the principle of absorption is based on the variation of the yarn's conductivity. It is the change between the electrical conductivity of the dry monofilament (dry conductivity) and that of the wetted monofilament (wetted conductivity). The sensitivity to water depends on the blend formulation and therefore follows the same trend as the initial electrical conductivity (Figure 8). The blends with a high dry conductivity have a positive sensitivity: from 43 ± 13% to 28 ± 20% for 0 wt.% to 30 wt.% of PBE. The blends with a low dry conductivity have a negative sensitivity: −26 ± 22% and −51 ± 30% for 40 and 50 wt.% of PBE, respectively.

By comparing the Sw of the different blends, it is possible to form hypotheses about the influence of the PBE percentage on Sw (Figure 9). Regarding the positive Sw, the absorbed water increases the monofilament's electrical conductivity by increasing the number of conductive paths between the CNT (Figure 9a). The negative Sw is attributed to the inversion of the blend morphology combined with the presence of water: by hypothesis, the water swells the PA6.6 3CNT fibrils, which increases the distance between the CNT and reduces the conductivity of the monofilament (Figure 9b).

To develop the detector filament, a trade-off between good rheological and mechanical properties and a good sensitivity is needed. The figure of merit (F), which combines the mechanical and detection properties, decreases slightly before dropping sharply for blends with more than 30 wt.% of PBE (Figure 10). As for all the other properties, this fall corresponds to the phase inversion of the morphology between nodular and fibrillar. The figure of merit highlights the decrease of the elongation at break and of the sensitivity to water with the addition of PBE in the blend. Moreover, to produce the detector filament by melt spinning, the melt flow index (MFI) has to be around 20 to 25 g/10 min. The two formulations that meet these criteria are PA6.6 3CNT 80/PBE20 and PA6.6 3CNT 70/PBE30. The figure of merit therefore designates PA6.6 3CNT 70/PBE30 as the optimum CPC filament to optimize further.

Conclusions

This study is focused on the development and the characterization of an immiscible thermoplastic/elastomer blend for water detection. The monofilament is created by two successive extrusions: by filling the PA6.6 with CNT first, and then by adding the PBE in different blend proportions.
Regarding the rheological properties, the CNT increase the blend viscosity whereas the fluidity increases with the proportion of PBE. The observations have correlated the mechanical and water detection properties with the blends' morphology. Two morphologies are revealed by the SEM and TEM images: the nodular and fibrillar morphology.
Below 30 wt.% of PBE in the blend, the PBE is in the form of nodules dispersed in the PA6.6 3CNT . The elongation at break increases with the proportion of PBE, while the electrical conductivity slightly decreases. The sensitivity of the absorption principle and the conductance of the short circuit follow the same trend as the dry conductivity of the monofilament. The sensitivity has a positive change, which means that the conductivity of the detector monofilament increases with the water contact. It creates new conductive paths between fillers and water.
Above 30 wt.% of PBE, the morphology changes to a fibrillar form of the PA6.6 3CNT in the PBE. This loss of phase percolation corresponds to the decrease of the mechanical, electrical and detection properties. Regarding the mechanical property, the weak interface between PA6.6 3CNT and PBE grows with the PBE proportion, which causes a premature break of the monofilament. Regarding the absorption principle, the sensitivity becomes negative: the electrical conductivity of the monofilament decreases on contact with water due to the modification of the interparticular distance. To find a trade-off between a good rheological property and good mechanical and water detection properties, the figure of merit shows that the best candidate to optimize is the PA6.6 3CNT 70/PBE30 monofilament. The objective is to increase and refine the sensitivity of the detection filament by processing this formulation by melt spinning. Future work aims to develop this multifilament to reduce the standard deviation of its water sensitivity and to verify its morphology and its electrical properties. | 2021-12-31T16:12:06.458Z | 2021-12-29T00:00:00.000 | {
"year": 2021,
"sha1": "085945d87022fe33fdf1e54570e37210265ba97c",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "acd0502f7640d6d034e2703c837bcbc36fd98ca8",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
252274298 | pes2o/s2orc | v3-fos-license | Self-supervised language learning from raw audio: Lessons from the Zero Resource Speech Challenge
Recent progress in self-supervised or unsupervised machine learning has opened the possibility of building a full speech processing system from raw audio without using any textual representations or expert labels such as phonemes, dictionaries or parse trees. The contribution of the Zero Resource Speech Challenge series since 2015 has been to break down this long-term objective into four well-defined tasks -- Acoustic Unit Discovery, Spoken Term Discovery, Discrete Resynthesis, and Spoken Language Modeling -- and introduce associated metrics and benchmarks enabling model comparison and cumulative progress. We present an overview of the six editions of this challenge series since 2015, discuss the lessons learned, and outline the areas which need more work or give puzzling results.
are concerned, language has primarily been used in written form. When it comes to dealing with spoken language, however, this has given rise to a division of labor between, on the one hand, speech components which aim at converting speech to text or text to speech (ASR, automatic speech recognition, and TTS, text-to-speech synthesis), and, on the other hand, components that perform a variety of language tasks based on text (language understanding, dialogue, language generation). As a result, even speech-first applications like speech-to-speech translation or speech assistants like Alexa or Siri are cobbled together in a Frankensteinian fashion, with some components trained on text and others trained on speech (see Figure 1a), and with all the speech components trained using large amounts of supervision (textual transcription) so that they can communicate with the text-based components. But is this a necessity? Could we build spoken-language based applications directly from the audio stream without using any text?
Preschoolers around the world demonstrate clearly that it is possible to learn to naturally interact in language without knowing how to read or write [1], [2]. Written language is, in a way, a tool for archiving spoken or signed language. Many languages have no writing system at all, and many other language communities do not use the written form of their language much (reportedly, more than half the world's languages do not have a stable or widely used writing system).
Reverse engineering the feat of learning a spoken language from speech input only is a fascinating scientific question. The Zero Resource Speech Challenge (ZRC) focuses on spoken languages. (It is an important and pressing question how zero-resource technology could be applied to working with signed languages.) For spoken language technology, advancing this question would unlock a number of novel applications. For one thing, it would allow for applications that address the needs of languages that are entirely or mostly unwritten. Even in languages with large amounts of textual resources, learning language representations from raw audio would help capture the dimensions of language that are typically poorly represented in text (prosody, emotional and non-verbal vocalizations, and so on). Beyond helping to model these aspects of language, capturing unwritten oral expression could improve the ability of machine learning systems to deal with spontaneous speech, thereby unlocking the rich syntax and vocabulary of oral registers, which very often differ greatly from the written form. This would foster more naturalistic human/machine interactions. While some research has focused on how self-supervised modelling can improve existing supervised speech tasks (for example, the SUPERB benchmark: [3]), the Zero Resource Challenge series assesses progress toward spoken language systems that function without any textual supervision at any point. Building a text-free dialogue system using only raw speech is a difficult machine learning challenge. It requires us to rethink the spoken language processing stack from the ground up. The ZRC series has been designed to address two interlocking research problems: the task problem and the evaluation problem.
[Note to Table I: d is a dissimilarity measure between word or frame embeddings, d_h is a human dissimilarity judgment, ED the edit distance over the phonetic transcriptions of segments (stretches of speech between two time-stamps), D(U) the total duration of dataset U (in sec), p the probability of a given discrete unit u_i in U, p̂ a pseudo-probability computed by the model over an input sequence (word or sentence); * indicates ungrammaticality.]
The task problem is to break down the ill-defined objective of "learning to process spoken language without text" into a series of well-defined sub-problems. The ZRC series follows the general architecture of a spoken digital assistant to define the learning problem implied by the training of each component: the acoustic model, a lexicon, language models, waveform generation, and so on. But instead of using phonemes, characters or words as an intermediate representation, the components are allowed to develop their own latent representations. For instance, instead of outputting phonemes or characters, the acoustic model is assumed to output acoustic units which may or may not be discrete. Such an architecture (see Figure 1b) naturally gives rise to the following four tasks: (Task 1) Acoustic Unit Discovery; (Task 2) Spoken Term Discovery; (Task 3) Unsupervised Discrete Resynthesis; (Task 4) Spoken Language Modeling. These are the textless counterparts of well-known tasks: (Task 1) phone recognition; (Task 2) word recognition (i.e., ASR); (Task 3) TTS; (Task 4) Language Modeling. We will review these tasks in turn in the following sections.
The evaluation problem is to define metrics that enable model comparison and cumulative progress for tasks that are defined only through a self-supervised objective. For instance, ASR systems can readily be measured through phone or word error rates. But their self-supervised counterparts, Acoustic Unit Discovery systems, do not aim at recovering phonemes, but a latent representation. How can we evaluate these systems? Interest in some of the above-mentioned tasks predates the ZRC series (see for instance [4]-[9] for Task 1), but since each of the published papers used its own metrics, it was difficult (and still is) to compare systems and measure progress. The general strategy of the ZRC series is to develop zero-shot probe tasks that are inspired by human psycholinguistics, and which require no model retraining. The reasoning is that, since the aim is to probe for latent representations at various levels of a self-supervised system, it is best not to train any classifier on top of it. Otherwise, it would be unclear whether the performance obtained reflects the system under observation or simply the classifier. For each task, zero-shot metrics were developed that probe for the different levels of linguistic knowledge that the self-supervised system is supposed to have learned. They only require the extraction of information readily available in the system (for example, embeddings, pseudo-probabilities), and are computed by a separate fixed module which is identical across systems. The evaluation metrics that go with the tasks are listed in Table I and will be presented in more detail in the following sections.
TABLE II. Summary of tasks and benchmarks in the Zero Resource Challenge series.
Challenge | Tasks | Benchmark
2015 [10] | T1, T2 | ABX-15, TDE-15
2017 [11] | T1, T2 | ABX-17, TDE-17
2019 [12] | T3 | TTS0-19
2020 [13] | T1, T2, T3 | reboot of 2017 & 2019
2021a [14] | T1, T4 | ABX-LS, sLM-21 (sWUGGY, sBLIMP, sSIMI)
2021b | T1, T4 | ABX-LS, sLM-21 (sWUGGY, sBLIMP, sSIMI)
In this paper, we provide a comprehensive overview of the results obtained across the different tasks and metrics of the ZRC series since 2015, and we discuss the lessons learned and outline the areas that need more work or give puzzling results.
II. PAST AND PRESENT
Six editions of the Zero Resource Challenge have been proposed over the years as events in different venues (Interspeech, ASRU, NeurIPS) and are summarized in Table II. Each edition has explored a different combination of tasks and introduced different datasets. Overall, the six editions have received a total of 115 submissions from 29 teams. In addition, several papers have been published using some of the Zero Resource benchmark metrics, which we also include in our review. Table III gives the complete list of submitted systems to all four tasks, and the abbreviations we use for them, including citations for published systems and model types as explained in the upcoming sections.
A. Task 1: Acoustic Unit Discovery
The goal of acoustic unit discovery is to learn representations (embeddings) of speech sounds that retain linguistically relevant information and discard acoustic information which is irrelevant or secondary to recovering the linguistic content, like speaker voice type or recording conditions (additive noise, reverberation, etc). In text-based systems, such representations are typically phonemes (as defined by a pronunciation dictionary) or characters. Here, we let the representations be latent, which means that they may take any form (dense vectors for each frame, probabilistic codes, discrete codes, etc). This poses an evaluation problem. The ZRC series takes the view that, while discovered units may not necessarily take the shape of straightforwardly linguistically interpretable entities like phonemes or phonetic features, they should at least maintain the same key linguistic function: linguistic contrast. 1) Evaluation metrics: Phoneme categories are typically defined as the smallest element of speech that can induce a difference in meaning between words. In English, for instance, the phonemic contrast between /r/ and /l/ is demonstrated by the fact that "fly" and "fry" are distinct words. Two instances of "fly" would remain instances of the word "fly" even in spite of variations in speaker or recording condition, and an instance of "fry" is (for speakers with a standard pronunciation) never an instance of "fly." The same goes for possible, as opposed to actual, words: "pla" and "pra" would not be the same word, if they were words. The minimal pair ABX task [72], [73] is inspired by match-to-sample tasks used in human psychophysics and measures discriminability between two sound categories. We define ∆, the ABX-discriminability of category A from category B, as the probability that tokens a and x from A are closer to one another than tokens b from B and x are, according to a dissimilarity function d (Equation 1):
∆(A, B) = 1/(|A|(|A| − 1)|B|) Σ_{a,x∈A, x≠a} Σ_{b∈B} 1[d(a, x) < d(b, x)]   (1)
where 1[·] is the indicator function and |A| is the number of tokens in category A. The discriminability score is symmetrized by averaging ∆(A, B) and ∆(B, A).
Tokens are representations of speech sequences output by the model under evaluation. In general, they will be sequences of frame embeddings, and dynamic time warping is used to realign them. Frame-level dissimilarities are averaged along the realignment path to obtain d. Most submissions used angular dissimilarity (arccos of the normalized dot product of the frame embeddings), while others used the KL divergence when the frame representations were posterior probabilities.
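As an illustrative sketch of this computation (not the official evaluation code), the following Python computes the symmetrized ABX error for two toy categories of fixed-length token embeddings with an angular dissimilarity; the DTW realignment of frame sequences used in the actual benchmark is omitted, and all values are placeholders.

import numpy as np

def angular_dissimilarity(u, v):
    # arccos of the normalized dot product, as used by most submissions.
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def abx_discriminability(A, B):
    # Fraction of (a, b, x) triples, with a and x drawn from A and b from B,
    # for which x is closer to a than to b.
    correct, total = 0, 0
    for i, a in enumerate(A):
        for j, x in enumerate(A):
            if i == j:
                continue
            for b in B:
                correct += angular_dissimilarity(a, x) < angular_dissimilarity(b, x)
                total += 1
    return correct / total

# Toy embeddings standing in for two triphone categories (placeholders).
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 16)) + 2.0
B = rng.normal(size=(5, 16)) - 2.0

score = 0.5 * (abx_discriminability(A, B) + abx_discriminability(B, A))  # symmetrized
print(f"ABX error rate: {1.0 - score:.3f}")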
For the categories A and B we use triphone sequences that only vary in the middle phoneme (like "fly" and "fry"), as extracted from longer utterances in our test set. Thus, participants apply their trained models to these audio files, and output sequences of representations for the entire file. These representations must be time-stamped, so that the ABX evaluation software can identify the beginning and end of each three-phone token in the output. This was done in all three ABX benchmarks; only the ABX evaluation that was contained in the TTS0-19 evaluation package did otherwise, using small audio files containing only the three-phone sequence (see Section II-C).
For the within-speaker variant of this task, all of the phone triplets belong to the same speaker (e.g., a = fly_T1, b = fry_T1, x = fly_T1). The error rates for a given minimal pair are first averaged across all of the speakers for which this minimal pair exists. The resulting scores are then averaged over all found contexts for a given pair of central phones (e.g., for the pair /l/-/r/, average the scores for all attested contexts such as /f_y/, /f_i/, /a_o/, etc.). Finally, the scores for every pair of central phones are averaged and subtracted from 1 to yield the reported within-talker ABX error rate. For the across-speaker variant, a and b belong to the same speaker, and x to a different one (e.g., a = fly_T1, b = fry_T1, x = fly_T2). The scores for a given minimal pair are first averaged across all of the pairs of speakers for which this contrast can be made. As above, the resulting scores are averaged over all contexts and over all pairs of central phones and converted to an error rate.
2) Datasets: As seen in Table IV, the first two benchmarks (ABX-15 and ABX-17) used relatively modest dataset sizes for training (from 2.5h to 45h) over 6 languages. ABX-15 used the training set as test set, while the ABX-17 introduced a separate test set with new speakers, to test for the ability of the learned representations to generalize to new speakers. The more recent benchmark (ABX-LS) uses the full LibriSpeech (960h) as training set, and introduced a separate dev and test set in order to avoid overfitting the model's hyperparameters to the test set. As more recent models tend to use more and more data, the benchmarks are open to submissions of systems trained on datasets other than the default ones, so long as they contain no labels and are described in detail.
3) Baselines: For Task 1, our low end reference score ("baseline") is calculated by computing the distances on spectral representations (MFCC). Good representations that highlight linguistically relevant differences, and thus neutralize speaker or channel differences, should at least do better than MFCCs. On the high end, using the gold annotations to generate a frame-by-frame one-hot phonemic representation mechanically leads to a perfect ABX score. To give a fairer high end comparison, we have also often included scores for the output of an off-the-shelf supervised GMM-HMM ASR. We included such a "topline" score in the ABX-15 and ABX- 17 results, as well as in the unit evaluation component of the TTS0-19 evaluation, which also contained an ABX test (see below). In the first two cases, the representations we used were posteriorgrams (that is, rather than a one-hot vector at each frame representing the decoding, we calculate the model's posterior decoding probability for each frame). This reference score was beaten not long after the 2017 challenge (by Cho19), and remains quite poor compared to modern systems (see below). In the TTS0-19 evaluation, we instead used the hard decoding, rather than probabilities, and observed that this reference score was in fact very easy to beat (for one of the two languages, the MFCC scores were already better), because of the numerous errors in decoding made by the offthe-shelf ASR system. More recently, submitted systems have become so good at the ABX task (see Figure 2 and below) that such low-fidelity topline systems have become less relevant. 4) Results: Since 2015, several approaches have been taken to Task 1. Most start from the principle that a central characteristic of text (or phonemic transcription) is that it provides a highly compressed latent representation for speech. For reference, a 16 kHz 16-bit waveform is coded using 256 Kb/sec, which generic audio codecs like Opus or MP3 can compress down to between 32 and 16 Kb/sec (a factor of 8 to 16). In contrast, a phonemic transcription is about 70 b/sec. This represents a compression of more than 200x compared to general audio codecs! Many objective functions proposed for Task 1 have as their primary goal to reduce the amount of information coded.
A simple and remarkably successful version of this idea, inspired by the universal background models used in speaker encodings, is to model acoustic frames using a mixture of full-covariance Gaussians (GMM). The posteriorgram of the mixture is taken as a new, sparse code for the speech input. In other words, each acoustic frame in the input file is assigned a sparse vector of probabilities, which correspond to the a posteriori probability of each of the discovered Gaussian distributions as the source of the given frame. Since individual frames are very short, and they are clustered as independent observations, this gives rise to a code which classifies speech in terms of short-term acoustic events, typically with several hundred Gaussian clusters discovered. This strategy, supplemented with additional speaker normalization or teacher-student tricks, was able to obtain top scores in the 2015 and 2017 editions (Che15a-d, Hec17a,b: see Figure 2).
Another type of approach seeking to find a compressed latent representation uses autoencoders (AutoEnc), which aim at reconstructing the signal through an information bottleneck, sometimes achieved by using a discrete codebook: Bad15a-c, Cho19. The codebook + WaveNet autoencoder of Cho19 obtained better results than previous, mixture model-based systems.
Since 2020, a new generation of predictive models (Predict) began to appear which have given rise to never-before-seen performance: contrastive predictive coding, or CPC [74], wav2vec 2.0 [75], HuBERT [76]; see [77] for a review. Two salient differences with this new wave of models are how they integrate context, and how they scale by working from the raw audio. The compression-based models tended to have a frame-based view of the speech signal, modeling the probability distribution of individual acoustic frames through a compressed latent representation. In contrast, the predictive models aim to reconstruct large missing or masked parts of the signal conditional on visible parts of the sequence. For instance, CPC predicts future frames within a 10 to 120 ms window based on past frames, and obtained excellent ABX scores (Kha20, Ngu21, BAS4-sm,lg: around 4.5% across speaker). See a more thorough discussion of CPC in the section about Task 4 below. Wav2vec 2.0 and HuBERT try to reconstruct a masked part of the signal (of the order of 100 ms), based on left and right context.
Independently of their predictive objectives, these models are also more sophisticated than previous ones in their encoding of context. Instead of processing signals within small receptive fields (Bad15a-c,Thi15,Che15a-d), the new systems use recurrences, transformers, or both, allowing them to model temporal correlation at longer distances. At the phonological level, language can be viewed as an autocorrective code due to redundancies introduced by phonotactics and lexical regularities. Previous work showed that top-down lexical context [22], [78] or phonotactics [79], [80] can indeed help with the discovery of phonetic units. This may explain why predictive/masked objectives together with large receptive fields help learning the acoustic properties of units jointly with their functional role in the language, yielding more relevant units.
The new models are also large, and, accordingly, are typically trained on large amounts of data (thousands of hours), which is orders of magnitude more than the training sets used in the initial ZRC series. In addition, some of the new models work directly from waveform instead of relying on engineered features like MFCC or mel filterbanks. Allowing models to be large, working from the raw audio, and training them on large amounts of data might push them to mimic the evolution of the human auditory system and its adaptations to speech. Indeed, [81], [82] find that wav2vec 2.0, HuBERT, and, particularly, CPC, are good predictors of low-level (sub-phonemic) auditory and speech processing in humans. In addition, it is well known that human perception relies on temporal fine structure not captured by magnitude spectrograms [83], particularly in difficult listening conditions. Models working from waveform might have an advantage. 2 To sum up, predictive models seem to have an edge on compressive models, and enjoy increasing popularity for a variety of downstream applications (see models presented in the SUPERB benchmarks [3], [85]). In the context of the ZRC series, a fair comparison equating architecture and dataset size would be necessary before claiming a definitive win. In addition, new models combining the two ideas like Masked AutoEncoders are emerging and need to be tested [86], [87].
Other interesting ideas have been explored in the ZRC series. Although they have not made it to the top of the leaderboard, they may still have much to contribute, perhaps in combination with other approaches. For example, some systems have attempted to use possible synergies with Task 2, and have used Spoken Term Discovery to obtain pairs of segments that have potentially the same phonemes, and use them in a Siamese architecture. Such pairs have also been used as a form of data augmentation with some AutoEnc models as well. In addition, most systems have not attempted to model the duration of linguistic units like phonemes or syllables, instead yielding representations of much shorter-duration acoustic events (10 ms or so). Yet duration is a principal concern of the HMM-based unit discovery system of Baseline3 as well as the segmental CPC approach of Bha21a. The approach of Pan19a-b, Kum20a-b implicitly considers duration by dividing the signal into syllables, which are then further divided into subsyllabic units (see also Task 4 in Section II-D). None of these other approaches has reached state-of-the-art performance yet, but, once again, duration is quite clearly critical to speech perception, and so it seems likely that research will need to examine these ideas further.
B. Task 2: Spoken Term Discovery
Just as the infant learns the words of their language by listening, Spoken Term Discovery seeks to find recurring patterns in the input, proposing a set of boundaries delimiting the start and end of proposed word tokens discovered in the speech, and category labels indicating proposed word types. 3 This problem was explored by several papers prior to the ZRC [88]- [92], and served as inspiration for the challenge itself. Although the task of "finding words" seems intuitively simple, it is made up of at least three subproblems which we evaluate separately.
• The matching subproblem is to find all pairs of speech fragments that are instances of the same sequence of phonemes. This can be evaluated based on how phonemically similar the discovered fragments are according to the gold transcription (normalized edit distance: NED) and how much of the corpus they cover (coverage).
• The lexicon discovery subproblem is to group these fragments into clusters (as opposed to simple pairwise matching). The goal is to find a lexicon of types. A proposed cluster can be evaluated based on how well the members match on the sequence of phonemes (Grouping) and how well the sets match the gold-standard lexicon of word types (Type F-score).
• The word segmentation subproblem attempts to find onsets and offsets of fragments that are aligned with the word boundaries as defined in the gold-standard text.
1) Evaluation metrics: To maximize comparability with text-based word discovery approaches, all of these evaluations are done by forced aligning the test set with its phoneme transcription. Any discovered speech fragment is converted into its transcription (which means taking decisions about phonemes on the left or right edge which may be partially covered: we include a phoneme if the fragment contains more than 30 ms of that phoneme or more than 50% of its duration).
The evaluation of spoken term discovery systems as matching systems consists of two scores, NED (normalized edit distance) and coverage. NED is the average, over all matched pairs, of the Levenshtein distance between their phonemic transcriptions, divided by the max of their phonemic length (ED(a, b)/max(|a|, |b|), where a and b are the two elements of a proposed match). The coverage is the fraction of the discoverable part of the corpus that is covered by the set of all discovered fragments. The discoverable part of the corpus is found by computing the union of all of the intervals corresponding to all of the pairs of n-grams (with n between 3 and 20). This is almost all of the corpus, except for unigram and bigram hapaxes.
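As a rough, illustrative sketch of these two scores (not the official TDE toolkit, which also handles forced alignment and the restriction to the discoverable part of the corpus), assuming discovered fragments have already been mapped to phoneme transcriptions and time intervals:

def edit_distance(a, b):
    # Standard Levenshtein distance over phoneme sequences.
    prev = list(range(len(b) + 1))
    for i, pa in enumerate(a, 1):
        cur = [i]
        for j, pb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (pa != pb)))
        prev = cur
    return prev[-1]

def ned(pairs):
    # pairs: list of (transcription_a, transcription_b), each a list of phonemes.
    return sum(edit_distance(a, b) / max(len(a), len(b)) for a, b in pairs) / len(pairs)

def coverage(fragments, discoverable):
    # fragments, discoverable: lists of (start, end) times in seconds.
    def merged_duration(intervals):
        total, last_end = 0.0, None
        for s, e in sorted(intervals):
            if last_end is None or s > last_end:
                total += e - s
                last_end = e
            elif e > last_end:
                total += e - last_end
                last_end = e
        return total
    return merged_duration(fragments) / merged_duration(discoverable)

# Toy example (placeholders).
print(ned([(["f", "l", "ay"], ["f", "r", "ay"]), (["k", "ae", "t"], ["k", "ae", "t"])]))
print(coverage([(0.0, 0.4), (0.3, 0.9)], [(0.0, 2.0)]))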
Six scores are used to evaluate the performance of a spoken term discovery system in terms of lexicon discovery. The first three are grouping precision, grouping recall, and grouping F-score. These are defined in terms of P_clus, the set of all pairs of fragments that are grouped in the same cluster, and P_goldcl, the set of all non-overlapping pairs of fragments which are both discovered by the system (not necessarily in the same cluster) and have exactly the same gold transcription.
Prec = Σ_{t∈types(P_clus)} w(t, P_clus) · |occ(t, P_clus ∩ P_goldcl)| / |occ(t, P_clus)|
Rec = Σ_{t∈types(P_goldcl)} w(t, P_goldcl) · |occ(t, P_clus ∩ P_goldcl)| / |occ(t, P_goldcl)|
where t ranges over the types of fragments (defined by the transcription) in a cluster, occ(t, C) is the number of occurrences of that type in a set of pairs C, and w(t, C) is that number of occurrences divided by the size of C. In other words, Prec is a weighted measure of cluster purity and Rec, of the inverse of the cluster's fragmentation. The other three scores are type precision, type recall, and type F-score. Type precision is the probability that discovered types belong to the gold set of types (real words), whereas type recall is the probability that gold types are discovered. We restrict both sets to words between three and twenty phonemes long.
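For concreteness, here is a minimal sketch of the type precision, recall and F-score, assuming discovered clusters have already been mapped to phoneme-sequence types and both sets have been restricted to items of three to twenty phonemes; it is illustrative only.

def type_scores(discovered_types, gold_types):
    # discovered_types, gold_types: sets of phoneme-sequence tuples.
    hits = discovered_types & gold_types
    precision = len(hits) / len(discovered_types) if discovered_types else 0.0
    recall = len(hits) / len(gold_types) if gold_types else 0.0
    f_score = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f_score

# Toy example (placeholders).
discovered = {("f", "l", "ay"), ("k", "ae", "t"), ("b", "a", "l")}
gold = {("f", "l", "ay"), ("d", "ao", "g"), ("k", "ae", "t")}
print(type_scores(discovered, gold))   # (0.667, 0.667, 0.667)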
To evaluate systems on the word segmentation subproblem, we use the token and boundary F-scores with respect to the gold text, as is usual in text-based word segmentation. The token F-score evaluates whether the set of discovered tokens matches the gold set of tokens, and the boundary F-score evaluates whether the set of boundaries (the delimitations between tokens, in terms of which phones in the gold transcription are separated by a boundary and which are not) corresponds to the gold set of boundaries.
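The following small sketch (illustrative only, with placeholder segmentations) shows how boundary and token precision, recall and F-score can be computed from sets of boundary positions and of (start, end) token spans over the gold phone sequence.

def precision_recall_f(predicted, gold):
    # predicted, gold: sets of hypothesized items (boundary indices or token spans).
    hits = predicted & gold
    p = len(hits) / len(predicted) if predicted else 0.0
    r = len(hits) / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if (p + r) else 0.0
    return p, r, f

# Toy utterance of 8 phones; boundaries are positions between phones.
gold_boundaries = {3, 5, 8}          # gold words cover phones [0:3], [3:5], [5:8]
found_boundaries = {3, 6, 8}         # system segmentation
print(precision_recall_f(found_boundaries, gold_boundaries))

gold_tokens = {(0, 3), (3, 5), (5, 8)}
found_tokens = {(0, 3), (3, 6), (6, 8)}
print(precision_recall_f(found_tokens, gold_tokens))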
The baseline system we provide is that of [32], which matches acoustic pairs using locality-sensitive hashing and then groups the pairs together using graph clustering. The topline was based on applying the adaptor grammar segmentation algorithm of [93] to the gold textual transcriptions.
2) Datasets: The datasets for the 2015 and 2017 Task 2 benchmark (TDE-15 and TDE-17, see Table IV) are coextensive with those of the corresponding Task 1 benchmarks, with one exception: the test set is always the same as the training set. This may seem unorthodox from a machine learning point of view, but is quite common in the text segmentation literature, as the models are evaluated on their ability to extract words and boundaries from the training set.
3) Results: The Spoken Term Discovery task is still very challenging and has not received the same attention as Acoustic Unit Discovery. One major finding across the three ZRC editions that featured this task is the existence of a tradeoff between attempting to find a lot of words and ensuring that the discovered words are accurate. The quality of the set of words that are treated as matches/repetitions by the system, as measured by the normalized edit distance (NED), will necessarily be better if systems do not commit to extracting more dubious word candidates in the first place; however, the more candidates are ignored, the less of the corpus will receive an analysis (lower coverage) and the fewer of the gold word boundaries will be found (leading also to lower boundary F-scores). The tradeoff between term quality and coverage is shown in Figure 3.
Systems that take a "matching first" approach, like Ras15a-c, Ras20a-b, seek primarily to find recurrent phonetic patterns. Boundaries here are merely designations of the edges of these discovered segments. The system that currently does best at balancing NED with coverage and segmentation quality, Ras20a, takes a matching-based approach, based only on MFCC inputs. This system begins by doing a low-resolution search for candidate matches by dividing the input utterances into fixed-length down-sampled speech segments. Then, it filters the candidate matches using a higher-resolution matching algorithm based on dynamic time warping.
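The re-scoring step can be illustrated with a basic dynamic time warping alignment cost between two sequences of feature vectors (a minimal Python sketch only; the actual Ras20a implementation is more elaborate, and the frame representation and length normalization below are illustrative assumptions):

import math

def dtw_cost(x, y, dist=math.dist):
    # x, y: lists of feature vectors (e.g., MFCC frames).
    INF = float("inf")
    m, n = len(x), len(y)
    d = [[INF] * (n + 1) for _ in range(m + 1)]
    d[0][0] = 0.0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            c = dist(x[i - 1], y[j - 1])
            d[i][j] = c + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[m][n] / (m + n)  # length-normalized alignment cost

print(round(dtw_cost([[0.0, 0.0], [1.0, 1.0]], [[0.0, 0.1], [0.9, 1.0], [1.0, 1.0]]), 3))  # small cost for similar sequences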
It might seem surprising that an algorithm that uses MFCC inputs, rather than features learned by acoustic unit discovery, would yield good performance. Nevertheless, [94] demonstrated that ABX error rate may not be the best indicator of downstream lexicon discovery, and our own informal experiments have shown that naively feeding improved acoustic units into a generic matching system (for example, those learned by Thi15) can actually make matching quality worse.
Other Task 2 systems take segmentation-oriented approaches, putting a priority on discovering boundaries. Building on earlier text-based systems using non-parametric Bayesian models like [93], [95], [96], systems like Kam15, Kam17 jointly optimize an exhaustive segmentation and a dictionary of clustered word embeddings. In a different, bottom-up approach, Bha20a,b match learned segmental acoustic units to construct a full word segmentation. As would be expected, since these systems strive to optimize segmentation measures, they fare rather poorly on matching measures. Figure 4 focuses on segmentation by itself and displays the Token F-score for each of the submitted systems, compared to a topline unigram adaptor grammar segmentation system trained on the corresponding text (phonemized text without the blank spaces between words). All of the high-coverage segmentation-oriented models are on the left and all of the low NED, matching-first models on the right. The segmentation-oriented models are more likely to do well on this metric, which assesses how many of the true word tokens were correctly segmented. Included here are two new models, Kam22 and Alg22, which do not even attempt to build a lexicon of types. The first one is in the same vein as [97], which posits word boundaries at peaks in surprisal across sequences of learned segmental units, while the second uses a nonparametric Bayesian approach directly on tokens. As can be seen, however, the gap between the best speech-based models and the text-based ones is still large.
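The surprisal-peak idea behind Kam22 and [97] can be sketched in a few lines (illustrative Python only; the surprisal values are assumed to come from some unit language model, which is not shown here):

def boundaries_from_surprisal(surprisals):
    # surprisals[i] is assumed to be -log p(unit_i | preceding units).
    # A boundary is posited before unit i when its surprisal is a local peak.
    cuts = []
    for i in range(1, len(surprisals) - 1):
        if surprisals[i] > surprisals[i - 1] and surprisals[i] >= surprisals[i + 1]:
            cuts.append(i)
    return cuts

print(boundaries_from_surprisal([1.0, 3.2, 0.8, 0.9, 2.7, 0.5]))  # [1, 4]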
Fig. 4 (plot not reproduced here; systems are grouped into segment-first / high coverage and match-first / low coverage). Task 2 (term discovery): Token F-scores, measuring how many words are correctly segmented, averaged across 5 languages (ZR17 and ZR20 plus two new papers). The topline is a unigram word segmentation adaptor grammar trained on the same amount of text. The dotted line is a baseline consisting in random segmentations every 120 ms. Starred (*) models compute the segmentation without even building any lexicon of discrete types.
The reason is likely multi-fold. First, speech is variable, which means that the same word will surface with a variety of acoustic shapes. Even if good invariant quantized acoustic representations are used, the potential (and actual) variability in the different "transcriptions" in terms of these quantized units for the "same" word grows exponentially with word length. This makes it difficult to build a reliable lexicon of word types. Second, speech rate and phone durations are variable in time, with the result that both phoneme duration and word duration can change substantially from occurrence to occurrence, a problem that does not exist in text. Finally, speech is typically coded into frames, which gives it a finer granularity than text (e.g., 10 ms frames, whereas phonemes last on average around 70 ms): a potential word boundary can therefore occur in more places in speech than in text. This increases the number of potential segmentation errors that can be made. This last point is one of the motivations for systems such as Kam22, Bha21a, Cue22, all of which, jointly or sequentially, infer word boundaries hierarchically on the basis of learned acoustic unit boundaries. Any future work will need to address all three of these challenges to achieve better performance on this task.
C. Task 3: Discrete Resynthesis (TTS without T)
Here, we investigate a task which is similar to what infants may do when they repeat a word or a sentence: they encode the signal into some representation, and then reproduce the same content in their own voice. Defined like this, the task is already known as voice cloning or voice transfer, and it can be performed at a rather low level by introducing a target speaker embedding in the decoder part of a simple encoder-decoder architecture. Here, however, we add the constraint that there be a discrete bottleneck between the encoder and decoder, and we measure the bitrate of the encoding. In other words, we ask participants to use discovered acoustic units instead of phonemes, and we push these units to approach the bitrate of a phonemic transcription. Prior to the ZRC, [98] demonstrated the feasibility of unsupervised discrete resynthesis. Furthermore, some of the models in Task 1 (Bad15a-c, Cho19) already used a similar discrete bottleneck autoencoder architecture, although they did not evaluate the quality of the reconstruction nor the bitrate of the representation. Participants on this task are provided with a unit dataset from multiple speakers used to discover discrete units, and a voice dataset to train a synthesizer for the target voice taking the units as input. The test dataset consists of novel utterances by unseen speakers, which must be resynthesized in the target voice. Participants submit both the acoustic unit representation and the resynthesis for evaluation.
1) Evaluation metrics: As for the acoustic units, they are evaluated in terms of their bitrate, where each unique embedding value is counted as a single symbol type. For the bitrate computation, each vector is processed as a character string. A dictionary of the possible values is constructed over the embedding file for the submitted test set. We thus assume that the entire test set corresponds to a sequence of vectors U of length n: U = [u_1, ..., u_n]. The bitrate for U is then B(U) = n·H(U)/D(U), where H(U) is the entropy of the symbol distribution estimated over this dictionary. The numerator is n times the entropy of the symbols, which gives the optimal number of bits needed to transmit the sequence of symbols s_{1:n}. To obtain a bitrate, we divide by D(U), the total duration of U in seconds (footnote 4). (Not reported here, Task 3 also included a version of ABX in which the minimal triphones ("fly" versus "fry") were extracted and presented as small audio clips, in order not to penalize the evaluation of sequence-to-sequence systems that would lack the alignment between units and audio signals. As it happened, no such system has been submitted as yet.)
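A minimal sketch of this bitrate computation (plain Python, with the entropy estimated from the empirical symbol frequencies; the symbol sequence and duration below are toy values):

import math
from collections import Counter

def bitrate(symbols, duration_seconds):
    n = len(symbols)
    counts = Counter(symbols)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return n * entropy / duration_seconds

print(bitrate(["a", "a", "b", "c"], 2.0))  # 3.0 bits/sec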
As for the generated waveforms, native speakers of the test languages were recruited online to evaluate the quality of the synthesis in terms of intelligibility, naturalness, and similarity. Intelligibility was measured by asking participants to orthographically transcribe the synthesized sentence. Each transcription was compared with the gold transcription using the Levenshtein distance, yielding a Character Error Rate (CER). The overall naturalness of the synthesis was assessed on a 1 to 5 scale, yielding a Mean Opinion Score (MOS) (footnote 5). Speaker similarity was assessed using a 1 to 5 scale. Sentences were presented in pairs (target voice, system voice) (footnote 6). Each sentence token was evaluated at least once with each system, as well as the original (gold) recordings.
2) Datasets: The 2019 Benchmark for Task 3 (TTS0-19) provides training and test data for two languages: English (the dev language) and Indonesian (the test language). For each language, one "Unit" dataset is provided to train unit discovery (around 15h, between 100 and 120 speakers), and one "Voice" dataset is provided to train speech synthesis in the target voice (see Table IV for detailed numbers). None of these datasets are provided with labels except an anonymous speaker ID.
Footnote 4: A fixed frame rate transcription may have a higher bitrate than a "textual" representation due to the repetition of symbols across frames. For instance, the bitrate of a 5 ms framewise gold phonetic transcription is around 450 bits/sec and that of a "textual" transcription around 60 bits/sec.
Footnote 5: The question posed was: Rate how natural the audio is, between 1 and 5 (1 = very unnatural, 3 = neutral, 5 = very natural).
Footnote 6: The question posed was: Rate the similarity between the reference voice and the system voice, between 1 and 5 (1 = very different voices, 3 = neither similar nor different voices, 5 = very similar voices). Ten additional trials were included, for each participant, in which the reference voice was not the target voice but the source voice.
3) Baselines: The baseline system consists of an existing acoustic unit discovery system which discovers GMM-HMM models and clusters them using an unsupervised Bayesian approach (see [43]). We then use decoding from this system (i.e., sequences of unit labels) instead of phonemes, and train an out-of-the-box speech synthesizer (Merlin, with the Ossian front end [99]). For the topline system, we replace the unit discovery with an off-the-shelf GMM-HMM ASR system.
4) Results: The performance was overall quite good, with several systems achieving better resynthesis than the text-based topline. As shown in Figure 5, there is a general tradeoff between synthesis quality and bitrate, which held both in the dev language (English) and in the heldout surprise test language (Indonesian). As shown by the black point in the figure (the decoded output of a simple phone recognizer), phonemic transcription is a highly-compressed representation of speech which is excellent for this task (the middling MOS scores are, as for Task 1, attributable to the fact that the out-of-the-box ASR and TTS were not optimized to the task).
Many of the systems that have a low bitrate (under 100b/sec) learn a discrete autoencoder on acoustic features (Kam19a-b,Yus19,Gök19,Liu19a-b,Gün20), generally taking further steps such as filtering or downsampling to reduce the temporal resolution. Taking a slightly different approach, our baseline model, as well as the related Yus20a-c, learn latent HMMs as acoustic units, in order to explicitly model duration. On the other hand, Pan19a-b,Kum20a,b put temporal reduction in an initial step of acoustic segmentation based on syllable-like units. Among these models, Kum20b, which presegments and then learns HMM acoustic units, stands out as reaching performance comparable to higher bitrate models (it admittedly has a somewhat higher bitrate than the other models listed here). Syllable-like presegmentation, as noted above, has also been used productively in Task 2 by Ras17, Alg22. It is fair to say that syllables have been underutilized in zero resource speech processing, given their promise.
Most of the remaining systems have a high bitrate between 100 and 600b/sec. Supervised posteriorgrams are on the upper end of this, and MFCC representations have a bitrate around 1500. Most of the submitted systems in this range are compression approaches using discrete autoencoders, including the system of Che20b, which gives excellent performance. The system of Nie20a,b stands out among the others as yielding high quality results. This is the only submitted system which uses a predictive loss based on CPC, although, unlike typical CPC models, it works from the spectrogram and is trained on the small (15h) dataset provided for the 2019 edition.
The results of [100] also support the claim that CPC and related approaches are well-adapted to discrete resynthesis. In addition, [100] demonstrated that an automatic evaluation using ASR is strongly predictive of human evaluators' ratings, and that the discrete representations can be used to support learning a language model.
D. Task 4: spoken LM
Spoken language modeling is the task of learning a language model directly from audio. Such a model could be end-to-end, learning directly from speech, or it could take as input discrete or continuous representations from Task 1 or word level representations from Task 2, so long as these input representations are learned without supervision from text or other labels. The task can be understood as the modeling of the probability distribution of spoken utterances in an unknown language.
1) Evaluation metrics: For Task 4, the evaluation problem is severe. Language models trained from text are typically evaluated by the perplexity over a test corpus, or by finetuning on downstream tasks. As discussed above, the ZRC series focuses evaluation on zero-shot tasks that require no training. This excludes a fine-tuning evaluation. As for perplexity, in text-based systems, it is derived from the conditional probability distribution of the next token given a past sequence of tokens. In speech-based systems that use discrete pseudotext units, the number of such units is a latent variable, making the perplexities difficult to compare across models. The problem becomes worse for systems that do not use discrete representations at all, where the estimation of the conditional probabilities themselves becomes model dependent. The two editions of the ZRC 2021 used a battery of 4 metrics, each one measuring performance at a different linguistic level: acoustic, lexical, syntactic and semantic.
At the level of acoustics, we use the ABX-LS benchmark as defined in Section II-A, built on top of LibriSpeech dev and test sets (see [101], [102]).
At the lexical and syntactic levels, instead of computing an average perplexity across a corpus, the ZRC uses a contrastive approach, where a "pseudo-probability" p̂ is computed for minimal pairs of utterances, one grammatically legal, the other illegal. The pseudo-probability can be obtained from a language model by decomposing the probability of an utterance U into a product of conditional probabilities of each of its constituent units u_i: p̂(U) = p(u_1) p(u_2|u_1) ... p(u_N|u_1 ... u_{N-1}), or by computing an average perplexity or loss score over the utterance U. An accuracy Acc is computed by counting how often p̂ is higher for the legal than for the illegal utterance: Acc = (1/|T|) Σ_{(a,b) ∈ T} 1[p̂(a) > p̂(b)], where T is a test set containing pairs of audio, one legal (a), one illegal (b); the chance level is 0.5. To probe the lexical level, pairs of well matched words versus nonwords (e.g. "brick" versus "blick") are constructed using the Wuggy nonword generator [103]. The syntactic level is probed by using pairs of grammatical and ungrammatical sentences derived from the BLIMP dataset [104]. All stimuli are synthesized using the Google TTS API, resulting in the sWUGGY and sBLIMP test sets, respectively (see below for details).
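A minimal sketch of this contrastive accuracy (illustrative Python; the scoring function stands in for whatever pseudo-probability a submitted model provides, and the toy scores below are made up):

def minimal_pair_accuracy(pairs, pseudo_logprob):
    # pairs: list of (legal_item, illegal_item); pseudo_logprob scores one item.
    correct = sum(1 for a, b in pairs if pseudo_logprob(a) > pseudo_logprob(b))
    return correct / len(pairs)

toy_scores = {"brick": -7.1, "blick": -9.4}  # hypothetical model scores
print(minimal_pair_accuracy([("brick", "blick")], toy_scores.get))  # 1.0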
Finally, ZRC evaluated the semantic level by using a similarity probe task used to investigate word embeddings [105]. It correlates the similarity of systems' representations of words with human similarity judgments. This enables us to measure the extent to which the model is able to extract lexical semantic knowledge. As for the ABX task, participants provide embeddings for input tokens as well as a distance to compute similarity. The Spearman rank correlation is calculated between the dissimilarity scores provided in the submission test set (sSIMI), d(a, b), and the dissimilarity scores given by human judgments, d h (a, b). The challenge provided by default the cosine distance computed over pooled embeddings (with mean, max or min pooling).
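The sSIMI score can be illustrated as follows (a minimal sketch assuming SciPy is available; the dissimilarity values are made up for the example):

from scipy.stats import spearmanr

model_dissim = [0.12, 0.80, 0.45, 0.33]  # d(a, b): e.g., cosine distances between pooled embeddings
human_dissim = [1.0, 9.5, 6.0, 4.0]      # d_h(a, b): human dissimilarity judgments
rho, _ = spearmanr(model_dissim, human_dissim)
print(rho)  # 1.0 (the two rankings agree perfectly)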
2) Datasets:
There is only one benchmark (sLM-21) associated with Task 4. The default training set is LibriSpeech 960h, although participants can use other training sets so long as no labels are provided besides speaker ID. The test set is split into sWUGGY, sBLIMP and sSIMI, which evaluate the lexical, syntactic and semantic levels, respectively. These test sets are described in detail in [59] and only briefly summarized here.
The sWUGGY dev and test sets consist of 5,000 and 20,000 pairs of words and nonwords respectively, with the existing words being part of the LibriSpeech train vocabulary. There is also an additional OOV-sWUGGY dev and test set consisting of 5,000 and 20,000 pairs respectively, with existing words which do not appear in the LibriSpeech training set. The nonwords are produced with WUGGY [103], which generates, for a given word, a list of candidate nonwords best matched in phonotactics and syllabic structure, which were additionally filtered for pronounceability using G2P, and for having on average the same unigram and bigram phoneme frequencies as words. Waveforms were produced with the Google Speech API.
The sBLIMP dev and test sets are adapted from BLIMP [104], a set of linguistic minimal sentence pairs of matched grammatical and ungrammatical sentences. The dev and test sets contain 6,300 and 63,000 pairs respectively, with no sentence pair overlap. Stimuli were filtered to contain LibriSpeech vocabulary and for natural prosodic contours, and synthesised as above.
The sSIMI dataset was constructed out of 13 existing semantic similarity and relatedness datasets: WordSim-353 [106], WordSim-353-SIM [107], mc-30 [108], rg-65 [109], Rare-Word (or rw) [110], simLex999 [111], simverb-3500 [112], verb-143 [113], YP-130 [106], and the relatedness-based datasets MEN [114], Wordsim-353-REL [107], mturk-287 [115], and mturk-771 [116]. All scores were normalised on a 0-10 scale, and pairs within the same dataset containing the same words in a different order were averaged. Pairs containing a word absent from the LibriSpeech train set [117] were discarded. The mturk-771 dataset was set aside as a dev set and the other 12 datasets were used as test sets, after removing overlapping pairs across dev and test sets. Given the unequal size of the test sets, the ZRC Benchmark introduced a weighted average of the Spearman scores which we report here. Two subsets of audio files, one synthetic, and one natural, were created, the latter being obtained by extracting the audio sequences corresponding to each word from LibriSpeech, as in [105]. In this subset, each word can appear in multiple tokens, providing phonetic diversity; duplicated scores are averaged in the analysis step. The natural subset is smaller than its synthetic counterpart, as we had to discard pairs from the test and dev sets which were not present in the LibriSpeech test and dev sets respectively. The synthesized subset is composed of 9744 and 705 word pairs for the test and dev sets respectively, and the LibriSpeech subset is composed of 3753 and 309 pairs for the test and dev sets.
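The preprocessing just described (rescaling each dataset's scores and averaging duplicated pairs) could look roughly like this (an illustrative Python sketch only, assuming each dataset provides (word1, word2, score) triples and has at least two distinct scores):

def normalize_scores(pairs):
    # pairs: list of (word1, word2, score) from one similarity dataset.
    lo = min(s for _, _, s in pairs)
    hi = max(s for _, _, s in pairs)
    merged = {}
    for w1, w2, s in pairs:
        key = tuple(sorted((w1, w2)))           # same pair in either order
        merged.setdefault(key, []).append(10 * (s - lo) / (hi - lo))
    return {k: sum(v) / len(v) for k, v in merged.items()}

print(normalize_scores([("car", "auto", 9.0), ("auto", "car", 8.0), ("car", "tree", 1.0)]))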
3) Baselines: Our baseline system, described in [59], is based on first training a contrastive predictive coding (CPC) model. We review the CPC acoustic model [74] here for clarity. Given an input waveform x, the encoder component of the model maps it to a sequence z = (z_1, ..., z_T).
An autoregressive component then predicts the future, taking z_1, ..., z_t and outputting a latent representation c_t, which is a representation of the context. Given the context c_t, CPC tries to predict the K next future embeddings {z_{t+k}}_{1≤k≤K} by minimizing a contrastive loss of the form L_t = -(1/K) Σ_{k=1}^{K} log [ exp(z_{t+k}^T W_k c_t) / Σ_{z' ∈ N_t} exp(z'^T W_k c_t) ], where N_t is a random subset of negative embedding samples, and W_k is a linear classifier used to predict step k of the future.
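A minimal PyTorch sketch of one prediction step of this loss (illustrative only, assuming PyTorch is installed; the actual baseline uses batched sequences and structured negative sampling, which are omitted here):

import torch
import torch.nn.functional as F

def cpc_step_loss(c_t, z_future, z_negatives, W_k):
    # c_t: (dim,) context vector; z_future: (dim,) true embedding at step t+k;
    # z_negatives: (N, dim) negative samples; W_k: (dim, dim) step-k classifier.
    pred = W_k @ c_t                              # prediction for step k
    pos = torch.dot(pred, z_future)               # score of the positive sample
    neg = z_negatives @ pred                      # scores of the negative samples
    logits = torch.cat([pos.unsqueeze(0), neg])   # positive sample is class 0
    return F.cross_entropy(logits.unsqueeze(0), torch.zeros(1, dtype=torch.long))

dim = 8
loss = cpc_step_loss(torch.randn(dim), torch.randn(dim), torch.randn(10, dim), torch.randn(dim, dim))
print(loss.item())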
Our baseline system then clusters the resulting framewise representations (as independent observations) using k-means, to reduce them to 50 units. The resulting discrete sequences are passed as input to a character-based language model. We experimented with both BERT and LSTM models, and found that large BERT models performed best.
4) Results: The first round of submissions was documented in 2021 [14]; the best-performing systems were variants of our baseline system. A second round was opened as a NeurIPS 2021 challenge, including a visually-grounded training option. Briefly, this modified scenario expands the range of data that models can be trained on, to include multi-modal datasets (like speech and image, or speech and video). The rationale is that young children learn in a multimodal, multisensory environment rather than by just listening. Some earlier models of word discovery and representation learning demonstrated the feasibility of such multimodal training [118]- [120]. Following [60], Task 4 was expanded to include "visually-grounded" training. Participants were to indicate the dataset they used. Systems were only tested with speech-only inputs, however, for comparability with non-grounded systems. Here, we present for the first time the results of these latest submissions to Task 4 (see Table V).
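To make the discretization step of the baseline pipeline concrete, here is a minimal sketch (illustrative Python assuming scikit-learn and NumPy; the random array simply stands in for real CPC frame representations):

import numpy as np
from sklearn.cluster import KMeans

features = np.random.randn(10000, 256)              # stand-in for framewise CPC representations
kmeans = KMeans(n_clusters=50, n_init=10, random_state=0).fit(features)
units = kmeans.predict(features)                     # one discrete unit per frame
pseudo_text = " ".join(str(u) for u in units[:20])   # "pseudo-text" fed to the language model
print(pseudo_text)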
Similar to the baseline models, the systems of Gan21, Ngu21a,d, Bha21a,b, and Gao21a-c take the approach of training acoustic units and then constructing a language model on their outputs. The distinction between high-budget systems and low-budget systems is made on the basis of the number of GPU hours needed to train the language model. Gao21a-c apply segmentation and pooling to reduce the temporal resolution of the units, while Bha21a,b use Segmental CPC to learn units and segmentation jointly. Ngu21a,d are technical improvements on the previous best system BAS4-lg. On the other hand, Ngu21b,c are HuBERT systems, trained end-to-end on a masked language modelling task.
The systems of Pen21 and Lee21a,b are visually grounded. In the case of these two systems, that means they both start from acoustic units that are trained using parallel speech-image data (picture captions). One difference between the two models is the type of training: Pen21 trains end-to-end on a masked language modelling objective, while Lee21a,b use the pre-trained features as input to a small BERT.
Task 4 is clearly in its very early stages (this in spite of the excellent ABX performance of the units used in systems up to now). However, even at this stage, after only one year's worth of submissions, spoken language modelling has shown improvement on the spot-the-word task (moving from the best speech-based baseline's 75% accuracy up to 80%) and on the syntactic judgment task (improving from 56% to 60% accuracy). The approach so far has been simple: high-quality units and a powerful language model. In the baseline models as well as most submissions, these components were trained separately; newer models like HuBERT [76] learn them jointly. The two approaches are currently tied for the top position on the leaderboard (Ngu21a,d, CPC units fed to a large BERT model, and Ngu21b,c, HuBERT systems). As for capturing word semantics, the Fast-VGS+ system of Pen21 stands out as a serious competitor. This visually-grounded system takes advantage of spoken image caption data in training.
III. WHAT NEXT?
Over the six editions of the ZRC series, the following lessons can be drawn:
• Great progress has been made in Acoustic Unit Discovery (T1) due to recent breakthroughs in self-supervised representation learning showing good scaling properties in large corpora. The latent units discovered at this stage, though, are not interpretable linguistic units like phonemes but represent shorter-duration acoustic events.
• The Discrete Resynthesis task (T3) obtained excellent results, sometimes surpassing text-based systems in resynthesis quality, at the expense of bitrate, which remains about 4 to 8 times higher than phonemes or text.
• Spoken Word Discovery (T2) still remains disappointingly difficult in all of its three subcomponents (matching, clustering, and segmentation). Presumably, understudied effects of the acoustic and temporal variability inherent to speech still hamper current approaches.
• Spoken Language Modeling (T4) obtained surprisingly promising results, given that the task is complex, considering the difficulties found in Task 2, and the fact that most systems only worked from Task 1 units with sub-phonemic temporal granularity. There is however room for progress, given the gap between speech-based and text-based language models on syntactic and semantic tests.
Given the large body of results that have accumulated, it may now be useful to reflect on some of the basic assumptions and methods of the ZRC series to determine how to move forward. The assumptions are related to the architecture presented in Figure 1b and its corresponding task decomposition. The methods are related to the particular choice of metrics that were chosen to evaluate each of these tasks. We discuss in particular the role of Acoustic Modeling and the Lexicon.
A. Acoustic Modeling and ABX.
One of the basic assumptions of the ZRC series is the existence of an Acoustic Modeling component that turns speech input into a latent representation, which plays the role of phonemes or text in that it can be directly used as input to other processing components: the Lexicon, Waveform Generation, and possibly even Language Modeling. Methodologically, we proposed the machine ABX task as a metric to gauge the quality of this latent representation. This makes the prediction that there should be a strong positive correlation between ABX scores and the relevant metrics in the other components. Inspection of this correlation across tasks shows that such a correlation exists, but that it is in some cases weak.
• T1 and T3: a reanalysis of the 35 ZRC submissions shows a Pearson correlation coefficient of r = .57 and r = .54 (English and Indonesian, resp.) between ABX and intelligibility as measured by Character Error Rate (CER); a minimal computation sketch is given after this list. The systems differed not only in the encoder but also in the decoder, introducing noise into the correlation. A more controlled correlation across 9 systems with matched decoders [100] reported r = .905.
• T1 and T4: we also find a reasonably high correlation between ABX and spot-the-word (r = .52 across the ZRC submitted systems, and r = .853 in [100] in a more controlled comparison with matched language models).
• T1 and T2: the situation is confusing. Since the beginning, we have seen that these two tasks may require different representations. For instance, unit discovery worked well with MFCC, but word discovery worked better with PLP. Similarly, [94] showed that, across 16 types of word embeddings (supervised or unsupervised), ABX scores correlate only moderately well with two other proxies for word segmentation (frequency estimation: .53; Mean Average Precision: .45).
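The kind of correlation reported above could be computed as follows (a minimal sketch with NumPy; the per-system values below are hypothetical, not the actual submission scores):

import numpy as np

abx_error = [5.9, 8.1, 11.3, 7.2, 9.8]   # hypothetical per-system ABX error rates
cer = [14.0, 19.5, 27.0, 18.0, 22.5]     # hypothetical per-system character error rates
r = np.corrcoef(abx_error, cer)[0, 1]
print(round(r, 3))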
While the observed level of correlation may be sufficient to still use ABX as a proxy for comparing models before using them for downstream tasks, some caution is necessary, especially for the link between T1 and T2. One possible explanation for this discrepancy is that the assumption of a single acoustic level feeding all downstream tasks is wrong, and that there are instead several different acoustic codes with different properties. Alternatively, it could be that there is a single code, but that the ABX metric is not capturing the linguistic properties of this representation relevant for all the other tasks. While ABX was constructed to measure contrasts in minimal pairs of possible words across changes in speaker, it may not capture well other kinds of invariance (speaking rate, phonetic context) that are crucial for some of the other tasks. Further studies will be needed to sort out this question.
B. The Lexicon, discrete units and interpretability
One reason text-like representations (be they phonemic, alphabetic, or logographic) are fundamental to speech and language processing is that they serve a dual function. On the one hand, they record linguistically important properties of the form (what was actually uttered). On the other hand, they support straightforward analysis of the content ("meaningful" properties like morphology, syntax, and semantics).
While human listeners are sensitive to detailed, subphonemic properties [121], and while various gradations in lexical meaning can be observed [122], the two kinds of variability are not generally correlated. For example, although it is possible to pronounce the noun sun with an initial sound that would be intermediate between an /s/ and an /f/, making it sound somewhat more like the adjective fun, this gradient change does not evoke a concept of "slightly amusing star," nor make the word more adjective-like. In other words, text-like representations would thus seem to be necessary for decorrelating form from meaning using an arbitrary mapping between a word's phonological forms and its semantic or syntactic representations [123].
However, achieving this crucial decorrelation may not necessarily require that the representations be discrete, nor that they correspond to interpretable linguistic units or "words" as defined in a dictionary. First, linguistically, there are at least three distinct notions of 'words' (prosodic, syntactic and semantic [124]), which may or may not be aligned with how dictionaries are constructed. Second, dictionaries are the result of a long cultural evolution where many design choices have been made that may not be consistent within or across languages. As a result, it could be that the requirement for T2 to provide segmentations and lexicons that are aligned with the written text is too strong. The fact that word-based units like BPE work well for text-based applications does not mean that the equivalent units for speech would align well with word boundaries. We could imagine in the future replacing the T2 metrics by new metrics reflecting the functional role of word segmentation, i.e., that of providing a level of granularity where arbitrary mappings between form and meaning can be learned (along the lines of the sSIMI metric in T4).
C. The future of the Zero Resource Speech Challenge
The submission site, www.zerospeech.com, is now open continuously and allows for running evaluations on all of the past benchmarks. The field of unsupervised representation learning is established enough that it is no longer necessary to channel it through special events. Indeed, self-supervised audio models are such an active domain that there are many relevant new models (for example, WavLM: [125]) which have yet to be evaluated on the ZRC metrics. Existing benchmarks, especially for Tasks 2 and 4, also still have a lot of potential for improvement, without creating more difficult tasks.
One exception to this is shown in Figure 1. Combining Task 3 and Task 4 leads naturally to considering the possibility of generating spoken language. A traditional spoken dialogue system will (conditional on some knowledge source) generate text, typically using a neural language model. The text is synthesized into speech. A spoken language model can be made to generate spoken language directly, as demonstrated by [67], [100], [126], [127]. Much as Task 3 is complementary to Task 1, but has slightly different constraints, the task of generating speech from a spoken language model is complementary to Task 4, yielding a potential Task 5. The evaluation of such a potential task may follow [100], replacing human evaluations of intelligibility and meaningfulness by ASR-based proxy measurements (Phone Error Rate for intelligibility in a Task 3 setting, and continuation BLEU or VERT score for the prompted or unprompted generations).
Another reason for not declaring the ZRC series closed is that there is still a lot to understand on the evaluation side for existing tasks. For Task 1, recent research has shown that many discovered representations may not be speaker invariant [128] and differ from how humans perceive speech sounds [81], [129]. Further, their ability to perform out of domain (noisy environments, accented speech) has not been evaluated [31], [130].
IV. CONCLUSION: TOWARDS TEXTLESS NLP
Research on Acoustic Unit Discovery has led to a wave of new models using unsupervised pre-training to advance ASR. It also opens up the more radical possibility that one may get rid of text altogether, and proceed with building language processing pipelines directly from raw audio. Up to now, this possibility has done little to change the dominance of text as the basic currency of NLP. One exception is translation, in which the idea of training machine translation directly from speech to speech in an end-to-end fashion has seen substantial uptake [131]- [135]. Other ways of removing text from the processing stack have also been explored, such as for language generation [100]. In general, however, converting speech to text and back remains the first and the last step of current speech-based NLP systems.
The allure of replacing text with low-bitrate unsupervised representations of speech goes beyond bringing NLP to lower-resource languages. Learned pseudo-text promises to be more flexible than traditional orthography: if the transcription system is the result of learning, it can also change to deal with new varieties and accents, and can learn to capture important linguistic information which is captured in a very limited way by typical writing systems, such as prosody. On the other hand, it promises to be more consistent: some writing systems are complicated by arbitrary exceptions, while other languages lack standardized conventions for spelling. Discovered representations could avoid both of these issues. Unlike traditional phonetic transcription, which uses a fixed, universal set of symbols which can in fact have rather different phonetic values across languages, unit discovery allows for a system to be adapted to the language.
The Zero Resource Speech Challenge has spearheaded efforts to build demonstrably useful unit discovery, as well as stimulating progress in applying these representations to more complex tasks. The major advances in building more realistic auditory-like representations have already borne fruit in recognition and synthesis. As we move toward better evaluations, we look ahead to the possibility of truly textless NLP, and a major key to unlocking cognitive models of human language development and speech perception. | 2022-09-15T15:48:15.326Z | 2022-10-01T00:00:00.000 | {
"year": 2022,
"sha1": "da4f4968c05e6c3e9f6fe779701ee3fe3a7fa84c",
"oa_license": "pd",
"oa_url": "https://hal.archives-ouvertes.fr/hal-03789716/document",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "ae5c367d6e0d7019c81c0006ec87d274d8c4c852",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
216129870 | pes2o/s2orc | v3-fos-license | Precaution and prevention of coronavirus disease 2019 infection in the eye
Although current studies suggest that conjunctivitis is not a common presentation of coronavirus disease 2019 (COVID-19), several studies have reported the presence of severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2) in ocular secretions. Coronavirus has not yet been successfully cultured from human tears or conjunctival swabs, for either SARS-CoV-2 or SARS-CoV. However, live feline coronavirus has been isolated from conjunctival swabs. In addition, infection with COVID-19 through unprotected eye exposure has been suspected in several articles. Reports of ophthalmologists and otolaryngologists who died of COVID-19 have also raised concern about ocular transmission. As a result, we strongly suggest that personal protective equipment (PPE) should cover the mouth, nose, and eyes of ophthalmologists, especially since conjunctivitis caused by SARS-CoV-2 is clinically indistinguishable from other viral follicular conjunctivitis.
INTRODUCTION
Coronavirus disease 2019 (COVID-19) is currently a global pandemic affecting 184 countries with over 1.6 million people infected and more than 100 000 deaths. It was first noticed in China in December 2019 by an ophthalmologist, Dr. Li Wenliang, who died of the same disease unfortunately. Although the main route of virus transmission is through respiratory droplets, several studies have raised concerns about infection occurring through unprotected exposure to the eyes. Up until now, whether ocular secretions could be contagious is still controversial. 1,2 This review study is aimed to provide more insights regarding this issue by summarizing currently available information on severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2), and previous knowledge of other coronaviruses either in human or in animals.
In addition, since conjunctivitis is one of the most common diseases in eye clinics, we would like to discuss the difference between different pathogens, such as other coronaviruses and adenoviruses. We would like to know more about the structure, characteristics, and tissue tropism of each virus. By better understanding how they infect host cells and release viral particles, hopefully, we could gain better insights on fighting and preventing this pandemic.
Incidence
The percentage of COVID-19 patients with conjunctivitis or other ocular manifestations is rather low. In a large study from China consisting of 1099 laboratory-confirmed COVID-19 cases, only nine patients (0.8%) had conjunctival congestion. 3 On the other hand, in another retrospective case series consisting of 38 clinically confirmed cases from China, 31.6% were found to have ocular manifestations similar to conjunctivitis, which was rather high compared with other studies. 3,4 Ocular symptoms in this study included conjunctival hyperemia, chemosis, epiphora, or increased secretions. Since chemosis in critically ill patients could be a consequence of third space shifting or fluid overload rather than conjunctivitis, the American Academy of Ophthalmology (AAO) disagreed with this statement. In this study, the presence of SARS-CoV-2 in both conjunctival and nasopharyngeal swabs was reported in two patients (5.2%). 4,5
Presence of SARS-CoV-2 in ocular secretion
According to recent reports, SARS-CoV-2 can cause conjunctivitis and, possibly, it can be an early sign of infection. In a case series study by Xia et al 6 published in the Journal of Medical Virology, 30 confirmed COVID-19 patients in China were studied. Only one of them had conjunctivitis, and SARS-CoV-2 RNA was detected by RT-PCR in ocular secretions of the same patient. However, the viability and disease progress of SARS-CoV-2 infection in these patients remained unclear.
In another study from Singapore, 17 laboratory-confirmed COVID-19 patients were recruited. All tear samples were negative for RT-PCR and viral isolation, even when concurrent nasopharyngeal swabs were shown positive. Tears were collected throughout 2 weeks of active infection, suggesting that ocular transmission is unlikely at any stage of infection. However, the authors also mentioned limitations such as the small sample size, the fact that only one patient had ocular symptoms, and the lack of sampling at earlier stages of infection. 7
Hints from other coronavirus: SARS-CoV
Studies have shown that SARS-CoV and SARS-CoV-2 share a similar receptor-binding domain. 8 Transmission of SARS-CoV is believed to be by contact of infectious respiratory droplets or indirect contact of fomites with mucous membranes, namely the eyes, nose, or mouth. 9 A case series report from Singapore on SARS reported the presence of SARS-CoV RNA in tear samples of three out of 36 suspected patients. 10 Yet in another study, the RT-PCR and viral culture were all negative for both tears and conjunctival scrapings in 17 serologically confirmed SARS patients. However, several limitations, including small sample size, one-time sampling, and rather low testing sensitivity, have been pointed out. Hence, the authors concluded they could not totally exclude the presence of virus in tears. 10
Hints from coronaviruses in animals
Currently available studies on SARS-CoV-2 regarding ocular involvement are not sufficient. Previous investigation on SARS-CoV is also limited as SARS epidemic quickly retreated. Since coronavirus can also cause ocular problems in animals, such as conjunctivitis, anterior uveitis, retinitis, and optic neuritis, it is reasonable to look into the studies of coronavirus in feline and murine models. Although the viability of SARS-CoV RNA in tears remains unanswered in humans, live feline coronavirus (FCoV) has been isolated from conjunctival swabs of cats. 2
COMPARISON OF CONJUNCTIVITIS ASSOCIATED WITH SARS-COV-2, SARS-COV, AND ADENOVIRUS INFECTIONS
SARS-CoV-2 can cause follicular conjunctivitis, possibly through aerosol contact with the conjunctiva (Table and Figs. 1 and 2). Acute conjunctivitis is one of the most commonly encountered diseases in the clinic, with viruses being the predominant pathogens. Among them, human adenoviruses (HAdVs) account for around 65% to 90% of cases. 14 Adenoviruses are nonenveloped viruses with broad tissue tropism, including mucous membranes such as the ocular surface, respiratory, gastrointestinal (GI), and genitourinary (GU) tracts. They are resistant to typical disinfection, such as 70% alcohol. 15 On the other hand, enveloped viruses such as coronavirus are more sensitive to simple disinfection, heat, dryness, or extreme pH.
Both SARS-CoV and SARS-CoV-2 belong to lineage B betacoronaviruses and have the same host cell entry receptor, angiotensin-converting enzyme 2 (ACE2). Expression of ACE2 can be found in the respiratory and intestinal tracts and in renal, cardiac, and immune cells. 16 It has also been demonstrated in aqueous humor and retina, but not in conjunctiva or cornea. This could be the reason for the low incidence of conjunctivitis in COVID-19 patients. Regarding its relative, SARS-CoV, a review article in the New England Journal of Medicine reported that 752 SARS patients had no ocular manifestations. 9 As for how SARS-CoV-2 infects people through the conjunctiva, the infectious aerosols could be drained into the nasolacrimal duct and then further into the respiratory tract with tears.
Although there is no definite evidence whether adenoviral conjunctivitis is contagious before symptom onset, Kimura et al 17 examined conjunctival scrapings from the contralateral eyes of 32 adenoviral conjunctivitis patients for HAdV with immunochromatography and PCR. All samples were negative, even in those eyes that later developed conjunctivitis with a PCR-proven virus identical to that of the first eyes. In addition, given the characteristic of its nonenveloped structure, viral spreading requires cell lysis. Hence, the possibility of viral spreading is very small before the onset of symptoms. This is possibly the reason why we could not find available incidence data for HAdV in the Table. Similarly to other enveloped viruses, the release of SARS-CoV-2 does not require the process of host cell lysis. This is less immunogenic compared with the disruption of the host cell. Hence, Sungnak et al 18 speculated this could be the mechanism of transmission in presymptomatic cases. They found the highest expression of ACE2 in nasal epithelial cells and suspected that viral release could be present before clinical symptoms developed.
In conclusion, as an emerging pandemic, the incidence of conjunctivitis in COVID-19 patients is rather low and current evidence suggests that ocular transmission is unlikely. This is contrary to the most common adenoviral conjunctivitis in clinical situations, which is highly contagious and resistant to typical disinfectants. So far, the viability of SARS-CoV-2 in human ocular secretions remains unanswered. Although neither SARS-CoV-2 nor SARS-CoV has been successfully cultured from tears or conjunctival swabs in humans, live coronavirus has been isolated from conjunctival swabs in cats.
The proposed mechanism of SARS-CoV-2 infection via the conjunctiva is that respiratory droplets could be drained into the nasolacrimal duct and then further into the respiratory tract with tears. Although the possibility of viral transmission through tears is rather low, we strongly suggest that personal protective equipment (PPE) should cover the mouth, nose, and eyes of ophthalmologists, just as the American Academy of Ophthalmology has recommended. Because of the close distance between patients and doctors during most ophthalmic exams, even the slightest risk is unbearable. Deaths of ophthalmologists and otolaryngologists have been reported in several countries. Therefore, extra care is required. | 2020-04-26T13:02:06.151Z | 2020-04-21T00:00:00.000 | {
"year": 2020,
"sha1": "3e550b2877cc21182b9c812a2885732db352c350",
"oa_license": "CCBYNCND",
"oa_url": "https://journals.lww.com/jcma/Fulltext/2020/07000/Precaution_and_prevention_of_coronavirus_disease.9.aspx",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "0c0cd8209801b5fa59796f6a8a06571eabf2ac88",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119151296 | pes2o/s2orc | v3-fos-license | Off-diagonal and pointwise estimates for compact Calderon-Zygmund operators
We prove several off-diagonal and pointwise estimates for singular integral operators that extend compactly on $L^{p}(\mathbb R^{n})$.
Introduction
An operator is said to satisfy an off-diagonal estimate from L^p(R^n) into L^q(R^n) for p, q > 0 if there exists a function G : [0, ∞) → [0, ∞) vanishing at infinity such that, for all Borel sets E, F ⊂ R^n and all f ∈ L^p(R^n), ‖χ_F T(f χ_E)‖_{L^q(R^n)} ≲ G(dist(E, F)) ‖f χ_E‖_{L^p(R^n)}, with implicit constant depending on the operator T and the exponents p, q. Some authors distinguish between properly off-diagonal estimates, when E ∩ F = ∅, and the so-called on-diagonal estimates, when E ∩ F ≠ ∅. However, we will not follow such convention and instead we will always call them off-diagonal estimates.
In the specific case of singular integral operators, the study focuses on the exponents 1 ≤ p = q < ∞. Very often, off-diagonal bounds are considered in one of the two following dual forms: ‖T(χ_I) χ_{(λI)^c}‖_{L^1(R^n)} ≲ G(λℓ(I)) |I| and (1/|I|) ∫_I |T(χ_{(λI)^c})(x)| dx ≲ G(λℓ(I)) (see [3] and [11]), for any cube I ⊂ R^n and λ > 1, where |I| denotes the volume of the cube and λI is a concentric dilation of I. While their use in Analysis is very classical, the interest in this type of inequality in modern Harmonic Analysis was renewed in the nineties after the publication of new proofs of the T(1) Theorem that used the wavelet decomposition approach (see [4] and [8] for example). These proofs were based on the development of almost orthogonality estimates of the form (1.1), which control the dual pairing |⟨T(ψ_I), ψ_J⟩| in terms of the ratio |J|/|I| for all cubes I, J ⊂ R^n with |J| ≤ |I|, under the appropriate hypotheses on the operator T, the parameter δ > 0 and the functions ψ_I, ψ_J involved.
The importance of off-diagonal inequalities lies mainly in two facts. On the one side, they are a satisfactory replacement for pointwise estimates of the operator kernel when these are not available or even when the operator kernel is unknown. On the other side, they completely enclose the almost orthogonality properties of the operator. For these reasons, they played a crucial role in the solution of the famous Kato conjecture [1] about the boundedness of square roots of elliptic operators and are nowadays extensively used in the study of second order elliptic operators. In the field, these estimates are typically established for one-parameter collections of operators (T_t)_{t>0} and the function G also depends on the parameter t in an appropriate manner (see [2], [5], [6], [9], [10] and [12]). Finally, it is also worth mentioning that off-diagonal bounds provide very valuable information for the development of efficient algorithms to compress and rapidly evaluate discrete singular operators (see [7] and [14]).
In the project A characterization of compactness for singular integrals, I developed a new T(1) Theorem to characterize not only boundedness but also compactness of singular integral operators with K a standard Calderón-Zygmund kernel. The main theorem provides sufficient and necessary conditions for compactness of Calderón-Zygmund operators in terms of the kernel decay and the action of the operator over special families of functions. One of the goals of the current paper is to establish similar types of estimates for singular integral operators that can be extended compactly on L^p(R^n) with 1 < p < ∞. In [13], the author proved a characterization of these operators based on a new type of off-diagonal estimates for Calderón-Zygmund operators. Now, in the current paper, we aim to improve these bounds in several ways and also obtain some new estimates. More explicitly, we show in section 3 that, in a broad sense and under the right hypotheses, these operators satisfy similar inequalities to (1.1) but with a new factor F that encodes the extra decay obtained as a consequence of their compactness properties. The focus of this work is actually placed on obtaining a sharp and as detailed as possible description of the function F in the different cases under study. Furthermore, in section 4 we establish pointwise estimates of the action of the operator over compactly supported functions. This allows us to claim, in a broad sense as well, that the image of a bump function adapted and supported in a cube behaves as a bump function adapted, although not supported, to the same cube. As before, these estimates explicitly state an extra decay not present in the classical bounds that is again due to the compactness of the operator.
Notation and Definitions
We say that a set I = [a_1, b_1] × ... × [a_n, b_n] ⊂ R^n is a cube when its side length |b_i − a_i| is constant when varying the index i. We denote by Q_n the family of all cubes in R^n. For every cube I ⊂ R^n, we denote its centre by c(I) = ((a_i + b_i)/2)_{i=1}^n, its side length by ℓ(I) = |b_i − a_i| and its volume by |I| = ℓ(I)^n. For any λ > 0, we denote by λI the cube such that c(λI) = c(I) and |λI| = λ^n |I|.
We write | · | p for the l p -norm in R n with 1 ≤ p ≤ ∞ and | · | for the modulus of a complex number. Hopefully, the latter notation will not cause any confusion with the one used for the volume of a cube. We denote by B = [−1/2, 1/2] n and B λ = λB = [−λ/2, λ/2] n .
Given two cubes I, J ⊂ R^n, we define ⟨I, J⟩ as the unique cube that contains I ∪ J with the smallest possible side length and whose center has the smallest possible first coordinate. In the last section, this notation will be applied also to points, namely ⟨x, y⟩, as if they were considered to be degenerate cubes.
We denote the side length of ⟨I, J⟩ by diam(I ∪ J). Notice that diam(I ∪ J) can be expressed in terms of ℓ(I), ℓ(J) and dist_∞(I, J), where dist_∞(I, J) denotes the set distance between I and J calculated using the norm | · |_∞. We define the relative distance between I and J as a quantity rdist(I, J) which is comparable to max(1, n), where n is the smallest number of times the larger cube needs to be shifted a distance equal to its side length so that it contains the smaller one. The following equivalences hold.
Remark 2.2. Since any dilation D_λ L(x) = L(λ^{-1} x), D_λ F(I) = F(λ^{-1} I), with L and F satisfying (2.1) and (2.2) respectively, still satisfies the same limits, we will often omit all universal constants appearing in the arguments.
We say that K is a compact Calderón-Zygmund kernel if there exist constants 0 < δ ≤ 1, C > 0 and functions L, S and D satisfying the limits in (2.1), such that the smoothness condition (2.3) holds. We say that K is a standard Calderón-Zygmund kernel if (2.3) is satisfied with F_K ≡ 1.
We first note that, without loss of generality, L and D can be assumed to be non-increasing while S can be assumed to be non-decreasing. This is possible because, otherwise, we can always define majorants which bound L, S and D from above respectively, satisfy the limits in (2.1) and are non-increasing or non-decreasing as requested.
On the other hand, we also denote auxiliary functions L_2, S_2 and D_2 and assume, in a similar way as before, that L_2 and D_2 are non-increasing while S_2 is non-decreasing. Then, as explained in [13], (2.3) is equivalent to the smoothness condition (2.4). The resulting parameter δ′ necessarily satisfies δ′ < 1, since otherwise the kernel K would be a constant function. The proof of the equivalence between both formulations appears in [13]. However, to increase readability of the current paper, we sketch the proof that (2.3) implies (2.4). For any 0 < ǫ < δ, let δ′ = δ − ǫ. Then, from (2.3) we can estimate the kernel differences with the exponent δ′, and the functions so obtained satisfy all the required limits in (2.1), which yields (2.4). As also proved in [13], the smoothness condition (2.3) and the hypothesis lim_{|t−x|_∞ → ∞} K(t, x) = 0 imply the classical decay condition (2.5) for all t, x ∈ R^n such that t ≠ x. Moreover, it is easy to see that we also get a further decay estimate which we will use later. Notice the change in the argument of S, which is now equal to the argument of L. Finally, we define two more sets of auxiliary functions which we will use in the next section. We note that for fixed ℓ(I), the Lebesgue Dominated Convergence Theorem guarantees that D satisfies lim_{|c(I)|_∞ → ∞} D(rdist(I, B)) = 0.
Definition 2.4. Let T : C_0(R^n) → C_0(R^n)′ be a continuous linear operator. We say that T is associated with a Calderón-Zygmund kernel if there exists a function K fulfilling Definition 2.3 such that the dual pairing satisfies the integral representation ⟨T(f), g⟩ = ∫_{R^n} ∫_{R^n} K(t, x) f(t) g(x) dt dx for all functions f, g ∈ C_0(R^n) with disjoint compact supports.
Clearly, the integral converges absolutely since, by (2.5), we have a bound in terms of d = dist(supp f, supp g) > 0.
Definition 2.5. Let 0 < p ≤ ∞. We say that a bounded function φ is an L^p(R^n)-normalized bump function adapted to I with constant C > 0, decay N ∈ N and order 0, if the inequality (2.10) holds for all x ∈ R^n. We say that a continuous bounded function φ is an L^p(R^n)-normalized bump function adapted to I with constant C > 0, decay N ∈ N, order 1 and parameter 0 < α ≤ 1, if (2.10) holds and, for all t, x ∈ R^n, a corresponding smoothness estimate holds, where ⟨t, x⟩ denotes the cube containing the points t and x with the smallest possible side length and whose centre has the smallest possible first coordinate.
Unless otherwise stated, we will assume the bump functions to be L 2 (R n )-normalized.
Definition 2.6. We say that a linear operator T : C 0 (R n ) → C 0 (R n ) ′ satisfies the weak compactness condition, if there exists a bounded function F W satisfying (2.2) and such that for any cube I ⊂ R n and any bump functions φ I , ϕ I adapted to I with constant C > 0, decay N and order 0, we have where the implicit constant only depends on the operator T .
As explained in [13], this definition admits several other reformulations, but they all essentially imply that the dual pairing ⟨T(φ_I), ϕ_I⟩ tends to zero when the cube involved is large, small or far away from the origin.
Definition 2.7. We define CMO(R n ) as the closure in BMO(R n ) of the space of continuous functions vanishing at infinity.
The following theorem, which is the main result in [13], characterizes compactness of Calderón-Zygmund operators. This is the reason why we say that the new off-diagonal bounds appearing in the current paper apply to operators that can be extended compactly on L p (R n ).
Theorem 2.8. Let T be a linear operator associated with a standard Calderón-Zygmund kernel.
Then, T extends to a compact operator on L p (R) for all 1 < p < ∞ if and only if T is associated with a compact Calderón-Zygmund kernel and it satisfies the weak compactness condition and the cancellation conditions T (1), T * (1) ∈ CMO(R).
Off-diagonal estimates for bump functions
In the proof of Theorem 2.8, some off-diagonal estimates were developed. Now, we improve these inequalities in several directions: by extending the result to R^n, by weakening the smoothness requirements of the bumps, by shortening the proof, and by obtaining a sharper bound for functions with compact support. This is the purpose of the three propositions of this section, which describe the action of a compact singular integral operator on bump functions with or without zero-mean properties, respectively. Later, in section 4, we will use these bounds to obtain several pointwise bounds and other off-diagonal estimates of a more general type.
We first set up some notation that appears in the statements of the three results. We consider K to be a compact Calderón-Zygmund kernel with parameter 0 < δ < 1 and T to be a linear operator with associated kernel K satisfying the weak compactness condition. We denote by I ∧ J and I ∨ J the smallest and the largest of two given cubes I, J respectively. That is, I ∧ J = I and I ∨ J = J when ℓ(I) ≤ ℓ(J), while I ∧ J = J and I ∨ J = I otherwise. We also recall the notation of F_K, F_W and F̃_K provided in the previous section.
Proposition 3.1. If the special cancellation conditions T(1) = T*(1) = 0 hold then, for all bump functions ψ_I, ψ_J adapted and supported on I, J respectively, with constant C > 0, order one, parameter α > δ, and such that ψ_{I∧J} has mean zero, the estimate (3.1) holds. Proposition 3.2. For all bump functions ψ_I, ψ_J adapted and supported on I, J respectively, with constant C > 0, order one, parameter α > δ, and such that ψ_{I∧J} has mean zero, we have the analogous estimate. On the other hand, when rdist(I, J) > 3, inequality (3.1) still holds with the same F(I, J) = F_K(⟨I, J⟩, I ∧ J, ⟨I, J⟩).
Proposition 3.3. For all bump functions ψ_I, ψ_J adapted and supported on I, J respectively, with constant C > 0 and order zero, we have the corresponding estimate. In all cases, the implicit constants depend on the operator T and the parameters δ and α, but they are universal otherwise. Needless to say, the actual value appearing in the condition rdist(I, J) > 3 plays no special role and could easily be replaced by any other value strictly larger than one.
As mentioned before, Proposition 3.1 is an improvement of the analogous result in [13]. The result has been extended to non-smooth bump functions in several dimensions. At the same time, the proof has been greatly simplified by using the extra hypothesis that the bump functions are compactly supported. Moreover, with this hypothesis, the last factor on the right-hand side of the inequality turns out to be strictly smaller than the one appearing in [13]. In fact, when the bump functions are no longer compactly supported, as happens in [13], the inequality (3.1) holds with a larger factor depending on six different cubes rather than only three. Nevertheless, in both cases the factors enjoy essentially the same properties, and so each of the two estimates suffices to prove compactness of the operator.
We also note that in Proposition 3.2, the hypotheses that T (1), T * (1) ∈ BMO(R n ) or T (1), T * (1) ∈ CMO(R n ) are not needed. Moreover, in Proposition 3.3, the assumption of T satisfying the special cancellation conditions T (1) = T * (1) = 0 does not lead to any further improvement.
Notation 3.4. For the following three proofs, we provide some common notation. For every cube I ⊂ R^n, we denote by Φ_I ∈ S(R^n) an L^∞-normalized function adapted to I with arbitrarily large order and decay such that 0 ≤ Φ_I ≤ 1, Φ_I = 1 in 2I and Φ_I = 0 in (4I)^c. As is customary, we define the translation and dilation operators T_a and D_λ, respectively, with x, a ∈ R^n and λ > 0. We also define w_I(x) = 1 + ℓ(I)^{-1}|x − c(I)|_∞ and, for any function ψ = ψ_1 ⊗ ψ_2 of tensor product type, we write Λ(ψ) = ⟨T(ψ_1), ψ_2⟩.
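The defining displays for T_a and D_λ were lost; given the later use of T_{c(J)} D_{ℓ(I)} Φ and the non-normalized dilation convention of Remark 2.2, they presumably read (hedged reconstruction):

T_a f(x) = f(x - a), \qquad D_\lambda f(x) = f(\lambda^{-1} x),

so that T_{c(J)} D_{ℓ(I)} Φ is an L^∞-normalized bump adapted to a cube of side length comparable to ℓ(I) centred at c(J).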
Finally, by symmetry we can assume that ℓ(J) ≤ ℓ(I), and so we may set ψ(t, x) = ψ_I(t) ψ_J(x), which, by hypothesis, is supported on and adapted to I × J with constant C², decay N, order 1, parameter α > δ and, most importantly, has mean zero in the variable x.
Proof of Proposition 3.1. a) We first assume that 3ℓ(I) < diam(I ∪ J), which implies (5I) ∩ J = ∅. In particular, the support of ψ is disjoint from the diagonal, and so we can use the Calderón-Zygmund kernel representation to write Λ(ψ) as a double integral against K, where a second expression follows from the zero mean of ψ in the variable x. Now, we denote Q_{I,J} = {t ∈ R^n : diam(I ∪ J)/2 < |t − c(J)|_∞ ≤ diam(I ∪ J)}. Then, by the smoothness condition (2.4) of a compact Calderón-Zygmund kernel and the monotonicity properties of L, S and D, we bound Λ(ψ) as stated. To completely finish this case, we explain in more detail the reasoning used to obtain the bounds for D in the second and third inequalities above. Since |x|_∞ ≤ (|x − t|_∞ + |x + t|_∞)/2, we can control |x|_∞ in the domain of integration and, since D is non-increasing, we obtain the required bound for D in terms of rdist(⟨I, J⟩, B).
b) We now assume that diam(I ∪ J) ≤ 3ℓ(I), which implies 1 ≤ rdist(I, J) ≤ 3. In this case, we first show that we can assume ψ(c(J), x) = 0 for any x ∈ R^n. This assumption comes from substituting ψ(t, x) by ψ(t, x) − (T_{c(J)} D_{ℓ(I)} Φ)(t) ψ(c(J), x), where Φ = Φ_B as described in Notation 3.4. Then, we only need to prove that the subtracted term satisfies the desired bound. We denote ψ̃(x) = ψ(c(J), x). Since ψ_I and ψ_J are adapted to I and J respectively with constant C > 0 and decay N for any N ∈ N, ψ̃ inherits the corresponding decay. We also recall that ψ̃ is supported on J and has mean zero. Now, we write λ = ℓ(I)/ℓ(J) ≥ 1 and take k ∈ N so that 2^k ≤ λ < 2^{k+1}. Then, T_{c(J)} D_{ℓ(I)} Φ = T_{c(J)} D_{λℓ(J)} Φ.
To simplify notation, we write Φ_0 = T_{c(J)} D_{ℓ(I)} Φ ∈ S(R^n) and Φ_1 = 1 − Φ_0. We note that Φ_1 is a smooth bounded function supported on |t − c(J)|_∞ > λℓ(J). By the classical theory, we know that T(1) can be defined as a distribution acting on the space of compactly supported functions with mean zero, so that ⟨T(1), ψ̃⟩ = ⟨T(Φ_0), ψ̃⟩ + ⟨T(Φ_1), ψ̃⟩, where the second equality is due to the mean zero of ψ̃. Notice that, since |x − c(J)|_∞ ≤ ℓ(J)/2 ≤ ℓ(J)2^{k−1} ≤ 2^{−1}|t − c(J)|_∞, the supports of Φ_1 and ψ̃ are disjoint, and so the integral in the first line converges absolutely. Then, the hypothesis T(1) = 0 implies ⟨T(Φ_0), ψ̃⟩ = −⟨T(Φ_1), ψ̃⟩. Moreover, since 2|x − c(J)|_∞ < |t − c(J)|_∞, we can use the smoothness condition (2.4) of a compact Calderón-Zygmund kernel and, by the reasoning applied in the previous case, rewrite and bound the last integral, which gives the first term in the stated bound. This finishes the justification of the assumption ψ(c(J), x) = 0 for any x ∈ R^n. Now, we decompose ψ = ψ_out + ψ_in with ψ_in(t, x) = ψ(t, x) Φ_{3J}(t). b1) We first prove that ψ_in is adapted to J × J with order zero, decay N and constant C²(|J|/|I|)^{1/2 + δ/n}. By the assumption ψ(c(J), x) = 0, the fact that ψ is supported on and adapted to I × J with order one and parameter α, and the fact that ψ_in is supported on 3J × J, we obtain the required estimate for all t ∈ 3J and all x ∈ J, since δ < α and |J| ≤ |I|. Notice that we also used |t − c(J)|_∞ ≤ 3ℓ(J)/2. Therefore ψ_in is adapted to J × J with order zero and the stated constant, and so, by the weak compactness property of T, we get the required bound, which ends this case. b2) We now work with ψ_out. In this case, by the extra assumption again and the support of ψ, we obtain a decay estimate for ψ_out. Due to the support of ψ, we get |t − c(I)|_∞ ≤ ℓ(I)/2 while, by the calculations in (3.4) and the hypothesis of this case, we also have a lower bound for |t − c(J)|_∞. Moreover, due to the support of ψ_out, we have |t − c(J)|_∞ ≥ 3ℓ(J) and |x − c(J)|_∞ ≤ ℓ(J)/2. The last two inequalities imply 2|x − c(J)|_∞ < |t − c(J)|_∞, and so we use the integral representation and the mean zero of ψ_out in the variable x. This, together with the bound on ψ_out calculated in (3.6) and the smoothness condition (2.4) of a compact Calderón-Zygmund kernel, allows us to bound as follows: the first integral can be bounded by ∫_{|x−c(J)|_∞<ℓ(J)/2} |x − c(J)|_∞^δ dx ≲ |J|^{1+δ/n} and, since δ < α, the second integral is also under control. On the other hand, since |c(I) − c(J)|_∞ ≤ diam(I ∪ J) ≤ 3ℓ(I), we conclude as before. Proof of Proposition 3.2. As before, ψ(t, x) = φ_I(t) ψ_J(x) is supported on and adapted to I × J with constant C², decay N, order 1, parameter α > δ, and it has mean zero in the variable x. We divide the proof into the same cases as before. a) When 3ℓ(I) < diam(I ∪ J), exactly the same reasoning as in case a) of the proof of Proposition 3.1 holds, since the only properties needed are the mean zero of ψ_J and the smoothness property of the compact Calderón-Zygmund kernel. b) We now assume that diam(I ∪ J) ≤ 3ℓ(I). As before, we first show that we can assume ψ(c(J), x) = 0 for any x ∈ R^n. This assumption comes again from the same substitution of ψ(t, x), but now we need to prove that the subtracted term satisfies the desired bound without using the condition T(1) = 0. We denote ψ̃(x) = ψ(c(J), x) which, as before, satisfies the decay estimate and so is a bump function supported on and adapted to J with order zero and constant C²|I|^{−1/2}.
Let J_k = 2^k J for k ∈ N, k ≥ 0, and let Φ_{J_k} be bump functions L^∞-adapted to J_k and supported on 4J_k as defined in Notation 3.4. We now define ψ_0 = Φ_{J_0} and ψ_k = Φ_{J_k} − Φ_{J_{k−1}} for k ≥ 1, which satisfy Σ_{k≥0} ψ_k(x) = 1 for all x ∈ R^n. Therefore, we obtain a decomposition of the pairing with a finite sum, due to the compact support of Φ_0. Now, for k = 0, since Φ_0 · |J|^{−1/2} Φ_{J_0} is supported on 4J and L²-adapted to J, we can apply the weak compactness condition to obtain the corresponding bound. When k ≥ 1, due to the supports of ψ_k and ψ̃, we have 2^{k−1}ℓ(J) < |t − c(J)|_∞ < 2^{k+1}ℓ(J) and |x − c(J)|_∞ < ℓ(J)/2, respectively. This implies that 2|x − c(J)|_∞ ≤ ℓ(J) ≤ |t − c(J)|_∞, and so we can use the integral representation, the mean zero of ψ̃ and the smoothness condition (2.4) of a compact Calderón-Zygmund kernel to obtain the first term of the stated bound. This finishes the justification of the assumption ψ(c(J), x) = 0. From here, the proof that Λ(ψ_in) satisfies the required bounds follows exactly the same steps as cases b1) and b2) in the proof of Proposition 3.1.
Proof of Proposition 3.3. Now, the function ψ(t, x) = φ_I(t) ψ_J(x) is supported on and adapted to I × J with constant C², decay N and order zero, but it does not necessarily have mean zero.
a) As before, we first assume that 3ℓ(I) < diam(I ∪ J). By the calculations in the proof of Proposition 3.1, the support of ψ is disjoint from the diagonal, and we can use the Calderón-Zygmund kernel representation to write Λ(ψ) as a double integral. Now, with the same notation Q_{I,J} = {t ∈ R^n : diam(I ∪ J)/2 < |t − c(J)|_∞ ≤ diam(I ∪ J)} and the kernel decay described in (2.6), we bound as stated. b) We now assume that diam(I ∪ J) ≤ 3ℓ(I) and we decompose ψ in the same way as before: ψ = ψ_out + ψ_in with ψ_in(t, x) = ψ(t, x) Φ_{3J}(t), and divide the analysis into the same cases. b1) We claim that ψ_in is adapted to J × J with order zero and constant C²(|J|/|I|)^{1/2}. Since ψ is adapted to I × J and ψ_in is supported on 3J × J, the claimed estimate holds for all t ∈ 3J and all x ∈ J. This proves the claim, and so, by the weak compactness property of T, we get the required bound. b2) We now work with ψ_out, for which we have the analogous decay. Moreover, the calculations in the analogous case b2) of the proof of Proposition 3.1 show that on the support of ψ we have |t − c(I)|_∞ ≤ ℓ(I)/2, while due to the support of ψ_out we also have |t − c(J)|_∞ ≥ 3ℓ(J) and |x − c(J)|_∞ ≤ ℓ(J)/2. Then 2|x − c(J)|_∞ < |t − c(J)|_∞, and we can use the integral representation to bound the first term by a constant times the stated quantity.
Pointwise and off-diagonal estimates for general functions
In this last section, we provide several pointwise estimates of the action of the operator on general functions and on bump functions, in Proposition 4.5 and Corollary 4.6 respectively. Moreover, in Proposition 4.7 we prove a new off-diagonal inequality for general functions.
We start with some technical results whose proofs, despite being well known for bounded singular integral operators, we reproduce here for compact singular integral operators in order to highlight the role played by compactness in the gain of decay and smoothness.
Lemma 4.1. Let T be a linear operator associated with a standard Calderón-Zygmund kernel, and let f be an integrable function with compact support in a cube I ⊂ R^n. Then, for all x ∉ 3I, the limit of ⟨T(f), Φ_{x,ε}⟩ exists when ε tends to zero. Definition 4.2. By the previous lemma, we can define T(f)(x) for x ∉ 3I as this limit. Proof. We check that (⟨T(f), Φ_{x,ε}⟩)_{ε>0} is a Cauchy sequence. Let x ∈ R^n\(3I) be fixed and choose ε_1, ε_2 < 2ℓ(I)/5.
Then, for all t ∈ supp f we have ℓ(I) < |t − x|_∞, while for all y ∈ supp Φ_{x,ε_i} we get |y − x|_∞ ≤ ε_i/2 < ℓ(I)/2. Both inequalities imply that f(t) and Φ_{x,ε_i}(y) have disjoint compact supports and, by the integral representation, we can write the pairing as an absolutely convergent integral. Now, for all y ∈ supp Φ, we have |y|_∞ ≤ 1/2, and so we can apply the smoothness condition of the kernel to bound the difference of the two pairings, which proves the Cauchy property. In all forthcoming results, we consider T to be a linear operator associated with a compact Calderón-Zygmund kernel K with parameter 0 < δ < 1. We do not assume any other hypotheses on T, such as weak boundedness, weak compactness, or T(1) belonging to any particular space. Lemma 4.3 then provides a pointwise bound for T(f)(x) for all x ∉ 3I; moreover, T(f) is Hölder-continuous in R^n\(3I), satisfying the corresponding estimate. Remark 4.4. Notice that if S(x) ≤ |x|_∞^β with β > 0, then T(f) is Hölder-continuous with parameter δ + β, which is better than in the case when T is only a bounded singular integral operator.
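The two displays of Lemma 4.3 were lost in extraction; a hedged reconstruction, consistent with the decay (2.5) and with the smoothness factor named explicitly after Corollary 4.6 below, with d = dist_∞(x, I):

|T(f)(x)| \;\lesssim\; \frac{F_K}{d^{\,n}}\, \|f\|_{L^1},
\qquad
|T(f)(x) - T(f)(x')| \;\lesssim\; \frac{|x - x'|_\infty^{\delta}\, S(|x - x'|_\infty)}{d^{\,n+\delta}}\, \|f\|_{L^1},

where F_K denotes a compactness factor of the type discussed in Section 2. The second display is the source of the gain |x − x′|_∞^δ S(|x − x′|_∞) highlighted later in the text.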
Proof. We consider ε > 0 small enough to satisfy several inequalities stated along the proof. We denote B_ε = [−ε/2, ε/2]^n and, given x ∈ R^n, we define J = x + B_ε. Let also ϕ_J(y) = C|J|^{−1} w_J(y)^{−N} be a positive bump function L¹(R^n)-adapted to J with order one, decay N and constant C such that ∫ ϕ_J(y) dy = 1. First, ε can be taken so that ℓ(J) ≤ ℓ(I). Moreover, the hypothesis x ∉ 5I implies that diam(I ∪ J) > 3ℓ(I). Then, from case a) of the proof of Proposition 3.3, we obtain the required bound on the pairing. On the other hand, for ε small enough and omitting constants, taking the limit as ε tends to zero, we conclude by Lemma 4.3, which proves the first inequality.
We now prove the second one. Let x, x′ ∈ R^n be as stated and let c = (x + x′)/2. Again, we consider ε > 0 to be small enough for our purposes. We define J_1 = x + B_ε, J_2 = x′ + B_ε and the functions ϕ_{J_i} for i = 1, 2 as before. Then, the function ϕ_{J_1} − ϕ_{J_2} is supported on J_1 ∪ J_2 and has mean zero.
Corollary 4.6. Let φ_I be a bump function adapted to and supported on I with constant C > 0, decay N and order zero. Then, in R^n\5I, T(φ_I) satisfies the definition of a bump function adapted to I with constant C > 0, decay n, order one and parameter δ, plus an extra factor of decay due to compactness.
Notice that, even though φ_I has compact support and decays at infinity as fast as |x|^{−N} for any large N > 0, T(φ_I) does not in general have compact support and its decay is only comparable to |x|^{−n}. Both facts are typical of bounded singular integral operators. However, if the operator is associated with a compact Calderón-Zygmund kernel, the decay of T(φ_I) improves depending on the rate of decay of the factor L(ℓ(⟨I, x⟩)) when x tends to infinity.
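Spelled out, and hedged (the precise display is not in the surviving text), the decay described here would take a form such as:

|T(\varphi_I)(x)| \;\lesssim\; |I|^{-1/2}\, w_I(x)^{-n}\, L\big(\ell(\langle I, x\rangle)\big), \qquad x \notin 5I,

where the factor L(ℓ(⟨I, x⟩)), possibly accompanied by S and D factors of the same type, is the compactness gain: since ℓ(⟨I, x⟩) → ∞ as |x|_∞ → ∞, this factor tends to zero, improving on the bare |x|^{−n} decay of the bounded case.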
On the other hand, note the gain in smoothness with respect to bounded singular integrals provided by the factor |x − x′|_∞^δ S(|x − x′|_∞). We now show an off-diagonal estimate for general functions, deduced directly from the previous pointwise bound: the resulting quantity, by Hölder's inequality, is smaller than the right-hand side of the stated inequality.
We end the paper by adding a few remarks to the previous proposition. We first note that, in the particular case of f being a bump function, we recover the earlier estimates. We also recall that, for bounded but not compact singular integral operators, the analogue of Proposition 4.7 implies that for a fixed cube I and ‖f‖_{L^p(I)} ≤ 1, we have lim_{λ→∞} ‖T(f)χ_{(λI)^c}‖_{L^p(R^n)} = 0 with a rate of decay at most of order λ^{−n/p′}. However, for compact singular integral operators, the extra factor stated in Proposition 4.7 ensures that there is always an extra gain in decay. To see this, we note that for all x ∈ (λI)^c we have λI ⊂ 3⟨I, x⟩, and so λℓ(I) ≤ 3ℓ(⟨I, x⟩). Hence, F_K(⟨I, x⟩) ≲ L(ℓ(⟨I, x⟩)) ≲ L(λℓ(I)) and, since lim_{λ→∞} L(λℓ(I)) = 0, the rate of decay is now at worst as fast as λ^{−n/p′} L(λℓ(I)). Finally, since we also have the stated bound, we deduce lim_{λ→∞} ‖T(f)χ_{(λI)^c}‖_{L^p(R^n)} = 0.
The last two properties do not hold in general for bounded singular integral operators. | 2017-07-08T17:58:28.000Z | 2017-07-08T00:00:00.000 | {
"year": 2016,
"sha1": "9357bf2f9760f5c8697541d3c368e59058196cda",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1707.02472",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "9357bf2f9760f5c8697541d3c368e59058196cda",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
202749312 | pes2o/s2orc | v3-fos-license | Lipidated Polyaza Crown Ethers as Membrane Anchors for DNA-Controlled Content Mixing between Liposomes
The ability to manipulate and fuse nano-compartmentalized volumes addresses a demand for spatiotemporal control in the field of synthetic biology, for example in the bottom-up construction of (bio)chemical nanoreactors and for the interrogation of enzymatic reactions in confined space. Herein, we mix entrapped sub-attoliter volumes of liposomes (~135 nm diameter) via lipid bilayer fusion, facilitated by the hybridization of membrane-anchored lipidated oligonucleotides. We report on an improved synthesis of the membrane-anchor phosphoramidites that allows for a flexible choice of lipophilic moiety. Lipid-nucleic acid conjugates (LiNAs) with and without triethylene glycol spacers between the anchor and the 17 nt binding sequence were synthesized and their fusogenic potential evaluated. A fluorescence-based content mixing assay was employed for kinetic monitoring of fusion of the bulk liposome populations at different temperatures. Data obtained at 50 °C indicated a quantitative conversion of the limiting liposome population into fused liposomes, and an unprecedentedly high initial fusion rate was observed. For most conditions and designs, only low leakage during fusion was observed. These results consolidate LiNA-mediated membrane fusion as a robust platform for programming compartmentalized chemical and enzymatic reactions.
Results and Discussion
Synthesis of double-chain anchor building blocks for solid-phase DNA synthesis. We have previously reported on a lipid-functionalized macrocyclic anchor building block that was inserted into DNA oligomers to facilitate DNA-mediated assembly of liposomes 3 . Rohr et al. described an early-stage insertion of lipid moieties into the polyaza crown ether scaffold, i.e., at the level of compound 1 (Fig. 3). Herein, we report on an improved synthetic strategy that allows for later-stage lipid anchor functionalization. The syntheses of benzylated diamine 1 and dialdehyde 3 have been described previously 37 . Hence, starting from 1 (see Fig. 3), the final phosphoramidite 5a was obtained over four steps. First, treating 1 with palladium hydroxide in methanol at room temperature provided the deprotected diamine 2. Next, reacting 2 with dialdehyde 3 in a standard reductive amination under dry conditions afforded the desired cyclized polyaza crown ether 4, which was subsequently condensed with hexadecanal to give 5. Finally, phosphitylation of 5 provided the desired lipid anchor building block 5a, primed for standard automated solid-phase oligonucleotide synthesis of LiNAs 1-4 (Fig. 4).
Design, synthesis and characterization of LiNA fusogens. Previous studies used a design of DNA oligomers that are able to fold up into a zipper-like geometry (Fig. 4), giving rise to a close distance between bilayers upon hybridization 8,10,34 . In this study we synthesized two pairs of modified DNA oligonucleotides that hybridize in the same manner. The first pair contained a single T overhang and one insertion of the 5a building block within the phosphodiester backbone (designated X_E) at the 5′ or 3′ end, respectively (LiNA-1 and LiNA-2). The single-nucleotide overhang was introduced, as in previous studies 3,4 , to prevent the self-aggregation of LiNA molecules into micelles in the aqueous phase, driven by hydrophobic interactions between the anchors 38 . In the presence of the extra nucleotide, two negatively charged phosphate groups flank the anchor building block, electrostatically and sterically disfavoring micelle formation (evidenced by the absence of foaming of aqueous LiNA stock solutions). The second set contained a triethylene glycol phosphate spacer (P3, see Fig. 4) inserted between the anchor X_E and the DNA-based zipper unit, adding approximately 3 nm (1.5 nm for each spacer in the duplex, similar to the ones described for coiled-coil fusogens by Daudey et al.) 39 to the maximal separation between docked bilayers (LiNA-3 and LiNA-4). Previously, using LiNAs based on the 3-amino-1,2-propanediol backbone, the presence of a P3 spacer increased fusion efficiency 12 . In these LiNAs, the distance between the 5′-OH and the N-atom bearing the aliphatic anchors was only four bonds, compared to the nine bonds in the present design (X_E anchor). We hypothesized that, relative to LiNA-1 and LiNA-2, which do not contain spacer P3, less strain release would be expected upon liposomal fusion mediated by LiNA-3 and LiNA-4. In this study, these two pairs serve to evaluate the effect of linker size and flexibility on LiNA-mediated fusion of liposomes.
The oligonucleotides listed in Fig. 4B were purified by reverse-phase HPLC as previously described 34 . Circular dichroism spectroscopy revealed that both pairs (LiNA-1/2 and LiNA-3/4, Fig. 4B) formed standard B-type DNA duplexes (see the Supplementary Information). Thermal denaturation and circular dichroism experiments of the complementary strands confirmed that each X_E- or X_E-P3-modified DNA could bind to unmodified complementary sequences (Supplementary Information, Table S2 and Figures S1 and S2).

Figure 2. Anchor design. Different anchor scaffold structures and proposed modes of anchoring. (A) The anchor based on the aza-crown ether scaffold has a 6-bond spacing over a rigid aromatic system, giving the two lipophilic chains an approximate spacing of 0.7 nm and allowing each chain to interact with a different subset of lipids in the bilayer. (B) In the anchor based on the 3-amino-1,2-propanediol scaffold, both chains are attached to the same atom, favoring intramolecular interactions of the two chains and effectively resulting in a bigger lipophilic moiety, which is expected to interact with the membrane in a concerted manner. (C) Molecular representation of average lipid chain distances (red lines) for relaxed palmityl chains in aza crown ether and aminopropanediol membrane anchors (hydrogens not shown for clarity), and top view of the aza crown ether macrocycle with hydrogen-bonded (green dotted lines) ammonium head groups of aminoethanol lipids from the lipid bilayer.

The T_m values of such duplexes
(56 to 58 °C) were comparable to that of the unmodified reference duplex (55 °C); however, duplexes of LiNA-1/2 and LiNA-3/4 exhibited an increased T_m (77 ± 1 °C). This increase was attributed to hydrophobic interactions between aligned alkyl chains. In the presence of liposomes, the lipid anchors are already embedded in the nonpolar environment of different phospholipid bilayers and cannot exhibit the abovementioned stabilization. When carrying out the fusion experiments at 50 °C, we argue that this is approximately 6-8 °C below the actual T_m of the system. No fusion was observed at 60 °C, i.e., above the T_m of the LiNA/DNA duplexes (data not shown).
LiNA-liposome binding using surface plasmon resonance (SPR). The spontaneous anchoring of LiNAs into liposome bilayers was assayed using surface plasmon resonance, a method that generates a signal based on the bulk of material present a few hundred nanometers above a gold surface. Liposomes were immobilized onto a sensor chip coated with an alkyl-modified dextran matrix (Biacore L1 sensor chip) in the instrument flow cell (Fig. 5A) 40 . LiNAs were injected slowly (contact time 0-600 s, 2 µl/min, Fig. 5B), giving rise to a fast binding response that saturated the surface of the immobilized liposomes within 60 seconds. After the initial contact time, a continuous flow of HBS at 2 µl/min resulted in the quick removal of loosely associated LiNA, but thereafter the signal approached a new baseline due to the remaining anchored LiNAs (+200-300 response units after 1000 s). LiNA presence was corroborated by testing their ability to form duplexes with complementary (non-modified) DNA strands, which were likewise injected slowly (contact time 1600-2100 s, 2 µl/min), giving rise to approximately a doubling of the response at the end of the contact time. This is expected for efficient hybridization, as the surface-bound mass approximately doubles when LiNAs hybridize with their complementary strand. After contact with the complementary DNA, a slow signal decrease was observed during HBS flow over the surface, which must be due either to de-hybridization of the DNA or to "de-anchoring" of LiNA/DNA duplexes (anchoring must become less effective after nearly doubling the molecular mass by a second hybridized DNA).

LiNA-induced liposome fusion and content mixing. The fusion efficiency was assayed by measuring the content mixing (CM) of the two liposome populations using a well-established fluorescence assay. One population encapsulates Sulforhodamine B (SRB) at a self-quenching concentration (20 mM) 21,41 while the other is filled with buffer (unlabeled). In brief, the assay works as follows: upon opening of a fusion pore between docked liposomes, the content volumes rapidly mix and SRB is diluted, causing an increase in fluorescence signal intensity (dequenching). SRB efflux and water influx (in the following termed "leakage") also lead to a signal increase. Such leakage could occur either passively or as a result of membrane lesions during fusion. Thus, we refer to this experiment as "apparent CM" (Fig. 6, green curve), and for each experiment a leakage control was performed in parallel. For these controls, both populations contained SRB at the same concentration, so that content mixing gives rise to zero dilution. Any observed signal increase must, in this case, be due to SRB dilution via leakage processes (Fig. 6, grey curve). In another control, the measurement of the apparent CM was repeated with non-complementary LiNAs (LiNA-1 + LiNA-1; Fig. 6, blue curve). All fusion experiments were carried out at 20, 37 and 50 °C and in independent duplicates. The approximate numbers of LiNA strands on the SRB-labeled and unlabeled liposomes were 195 and 65, respectively, and the populations were mixed in a 1:3 ratio. This allows each SRB-labeled liposome to statistically fuse with up to three unlabeled ones, diluting the encapsulated [SRB] from 20 mM to 10 mM, 6.7 mM and 5 mM after the first, second and third round, respectively.
In addition to measuring CM, the particle size distribution was measured at intervals throughout the course of the experiment. This was done by taking aliquots from a fusion experiment using only unlabeled liposomes and diluting these approx. 50-fold in buffer at room temperature. The samples were then analyzed using nanoparticle tracking analysis (NTA, see Figures S4 and S5).
As previously observed, the LiNAs mediated fusion most efficiently at 50 °C, as summarized in Fig. 6. The inset illustrates the amount of leakage that contributed to the apparent CM signal, from which it is clear that, in the absence of LiNAs, the signal increase was due entirely to background processes (see Figure S6 for a time-course overlay with control experiments). At the same time, the presence of non-complementary LiNAs gave rise to only a minute signal increase (Fig. 6, blue curve), suggesting that engrafting liposomes with polyaza crown ether-modified LiNAs had the effect of decreasing passive SRB permeability, as observed previously for anchors with a 3-aminopropane backbone 12,13 . As hypothesized earlier, this effect is likely due to charge repulsion between liposomes caused by the polyanionic DNA on the liposome surface 12,42 .
In the current study, elevated leakage was observed for LiNAs with a P3 spacer (LiNA-3/4), where both leakage and content mixing scaled with temperature, in comparison to LiNA-1/2, where leakage remained low. In previous LiNA systems, based on a simpler anchor structure (3-amino-1,2-propanediol) 12 , the presence of a P3 spacer did not lead to increased leakage. Based on this result, we speculate that the spacer can affect membrane stability.
After correcting for the leakage contribution to the apparent CM, the resulting net signal was used to estimate the percentage of full fusion events of SRB-filled liposomes relative to the total number of SRB-filled liposomes, hereafter termed the fusion yield. To this end, a calibration curve relating the net I/I₀ to the remaining fraction of the liposomal SRB concentration (χ_SRB) was used, of the linear form I/I₀ = a·χ_SRB + b (see Supplementary Information, Table S2),
where I/I₀ = I_χ/I_{χ=1} is the intensity of a series of samples with different χ_SRB (χ_SRB = 0.8, 0.5, 0.33, corresponding to [SRB] of 16 mM, 10 mM or 6.7 mM, respectively) divided by the intensity at the initial SRB concentration ([SRB]₀ = 20 mM); a and b are linear coefficients and were found to be independent of the temperature at which the samples were recorded (see Methods and Ref. 13 ). The fusion yield was then defined from the measured net I/I₀ via this calibration. As indicated above, yields >100% can occur, which signifies that labeled liposomes on average fused with more than one unlabeled liposome. The fusion yields at 50 °C are plotted as a function of time in Fig. 7A, and the yields at 30 min are summarized in Fig. 7B.
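As a worked illustration of the yield definition (our arithmetic, inferred from the dilution series quoted above rather than taken from the lost display): if each labeled liposome fuses with k unlabeled ones, the entrapped SRB is diluted (k+1)-fold, so

\chi_{\mathrm{SRB}} = \frac{1}{1+k}, \qquad \text{Yield} = 100\,\Big(\frac{1}{\chi_{\mathrm{SRB}}} - 1\Big)\,\% .

Thus χ_SRB = 0.5 corresponds to one fusion event per labeled liposome (yield 100%), while the best observed yield of about 170% corresponds to χ_SRB ≈ 0.37, i.e., on average 1.7 unlabeled liposomes fused per labeled one, consistent with the equivalents quoted later in the text.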
The temperature dependence of LiNA-induced membrane fusion (~5% fusion yield at 20 °C, 20-40% at 37 °C and 95-170% at 50 °C; Fig. 7B) is congruent with the trend of increasing spontaneous fusion with temperature 29 . A temperature increase has a two-fold effect: i) increasing the energy of thermal collisions between membranes brought into proximity by the hybridized LiNAs, and ii) decreasing the interfacial tension of the membrane due to an increased surface area (each lipid simply has a higher effective surface area) 43 . The consequence of this effect was demonstrated by Parolini et al. using giant liposomes that were tethered to each other via long oligonucleotides: microscopy images showed that the contact area between the liposomes increased with temperature, without changing the number of available tethers 44 . For LiNA-induced fusion, this finding suggests that higher temperatures support membrane deformation, increasing the inter-liposome contact area and the number of LiNA duplexes acting cooperatively, prolonging contact time and thereby amplifying the number of thermal collisions between liposomes. Furthermore, decreased surface tension is supportive of extreme membrane curvature, i.e., negative during fusion stalk formation and highly positive during pore opening 45,46 . A similar trend was observed by Sadek et al. in membrane fusion mediated by a β-peptide nucleic acid-based SNARE mimic 47 . In the present study, the more rigid pair LiNA-1/2 exhibited a higher maximal fusion yield (~170%) compared to LiNA-3/4 (~95%), a result that might be explained by the shorter intermembrane distance in the LiNA-1/2 system than in the presence of the two P3 spacers. In contrast, the apparent CM signal was comparable between these systems, which prompts us to speculate whether the triethylene glycol linkers in LiNA-3/4 play a role in the process from docking to fusion. If the linkers are innocent, the observed increase in leakage could be explained by a prolonged docked state in a configuration that is not quite as conducive to fusion as in the case of LiNA-1/2. If, however, the ethylene glycol interacts strongly with the lipid headgroups of the membrane, it might increase fusogenicity and permeability by disturbing the hydration layer around the membrane. We looked at the initial fusion rates (dYield/dt at t = 0) of the studied systems and compared them with other LiNA pairs from previous studies (see Table 1). The data suggest that the addition of a spacer slowed down the observed fusion rate both in the case of the aza-crown ether and the aminopropane backbone.
The initial fusion rate for LiNA-1/2 was found to be very high (100% fusion yield within ~2 minutes). In the presence of the P3 linker (LiNA-3/4), the initial rate was slower. A similar trend was observed for LiNAs with the aminopropane backbone, that is, for LiNA-ref1/ref2 and LiNA-ref3/ref4, respectively (Table 1). The system with P3 and the aminopropane backbone (LiNA-ref3/ref4), however, showed the highest fusion yield, i.e., the SRB-labeled population on average fused with 2.8 of 3 equivalents of unlabeled liposomes, as reported earlier, despite the lower initial rate. For the best aza-crown ether-anchored setup (LiNA-1/2), the labeled liposomes fused with 1.7 equivalents of unlabeled liposomes on average. Based on these and the current results, we were intrigued to further understand the parameters controlling fusion kinetics. Different anchor designs may benefit from different optimal linkers. On the other hand, linker length is inversely proportional to fusion rate. In an elegant array of coiled-coil-based fusogens with different linker lengths, Daudey et al. 39 were able to suggest optimal linker lengths for phospholipid and cholesterol anchors, respectively. Also, for cholesterol-anchored peptides, a slightly shorter linker on one of the fusogens resulted in an increased initial rate. Our previous work tested many combinations of linker positions, lengths and anchor structures based on the aminopropane design, where similar trends in fusion efficiency were observed. On the other hand, it is more challenging to rationalize the influence of the backbone structure, as well as of the spacing between, and the structure of, the anchor moieties, on fusion efficiency. Nonetheless, the fact that structurally rather different anchor designs could both lead to very efficient fusion underlines the robustness of using LiNAs as a tool for programmable liposome fusion.
Conclusion
Herein we describe the development of new lipid-nucleic acid conjugates (LiNAs) able to efficiently induce liposomal fusion of the inner membrane leaflets and subsequently enable efficient content mixing. An improved synthesis of anchor building blocks based on a polyaza crown ether scaffold, with flexible late-stage introduction of lipid moieties, is reported. Anchored into the outer liposome leaflet, these LiNAs proved to be strong fusogens. When labeled liposomes were fused with a 3:1 excess of unlabeled ones, rapid and efficient fusion was observed. The best system provided complete turnover of the labeled population with low leakage after only 2 min when fusion was carried out at 50 °C. While the anchors used in this study are structurally rather different from our previously reported LiNAs, the high fusion efficiency at 37 and 50 °C reported in our earlier study was reproduced. The findings underline LiNA-based fusogens as a robust tool for the fusion of biological membranes.
Table 1 (surviving fragment): LiNA-1/2 | T-X_E-17nt | 170 ± 25 | 110 ± 10
General liposome preparation.
To produce unlabeled (DOPC/DOPE/Chol, 2:1:1 molar ratio) or membrane-labeled (additionally 0.25% Lissamine-Rhodamine-DPPE) lipid films, stock solutions in chloroform or methanol were mixed, evaporated under a stream of N₂ and further dried under high vacuum. The lipid films were rehydrated in buffer or SRB solution (below). The suspension was vortexed and extruded 21× through a 100 nm polycarbonate membrane (Whatman Nucleopore Track-Etched Membranes) using a hand extruder (Avanti Polar Lipids) to form unilamellar liposomes (stored at room temperature and used within 36 h). The mean diameter was determined to be 134 ± 3 nm by nanoparticle tracking analysis (NTA). Encapsulation of Sulforhodamine B (SRB). Liposomes for content mixing and leakage experiments were obtained by rehydrating lipid films in HBS containing 20 mM SRB, placed in a bath sonicator for 10 min at 50 °C, extruded as above and used within 36 h (mean particle diameter 135 ± 4 nm). To remove unentrapped dye, 50 µL batches of the suspensions were purified by spin-column size exclusion chromatography (GE Healthcare Illustra MicroSpin S-200 HR columns) pre-equilibrated with HBS. Relative concentrations after spin-column purification were determined by fluorimetry using liposomes labeled with 0.25 mol% DPPE-Lissamine Rhodamine. After size exclusion, the following aliquots were mixed and filled to 100 µl with HBS: blank (10 µL Triton X-100 (1 wt%)), reference (10 µL of untreated liposome dispersion, 10 µL Triton X-100 (1 wt%)) and sample (10 µL of spin-column eluate, 10 µL Triton X-100 (1 wt%)). Average fluorescence was measured from five replicates (excitation wavelength (λ_ex) 560 nm; emission wavelength (λ_em) 583 nm; emission filter open; excitation/emission slits 5 nm; PMT 550 V), and the concentration was determined from the average intensity (I) of three independent preparations using the formula c = (I_sample − I_blank)/(I_reference − I_blank) × c_stock. Measurement in the presence of the detergent Triton X-100 (i.e., measuring a non-liposomal dispersion) gave the best reproducibility when using 384-well plates.
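A quick worked example of this concentration formula with hypothetical intensities (the numbers below are ours, purely illustrative, not measured values):

c = \frac{I_{\mathrm{sample}} - I_{\mathrm{blank}}}{I_{\mathrm{reference}} - I_{\mathrm{blank}}} \times c_{\mathrm{stock}} = \frac{450 - 50}{850 - 50} \times c_{\mathrm{stock}} = 0.5\, c_{\mathrm{stock}},

i.e., in this hypothetical case the spin-column eluate retains half the liposome concentration of the untreated stock.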
Sulforhodamine B (SRB) content mixing assay. Upon content mixing (CM) with unlabeled liposomes, or leakage into the outer medium, the dye is diluted, leading to an increase in fluorescence. The measured fluorescence increase is thus based on both content mixing and leakage (apparent CM) and was corrected by a leakage (L) control, in which both liposome populations contain SRB and any fluorescence increase must stem from leakage. Calibration of the SRB assay. A calibration curve was measured with standard samples containing different fractions of the starting concentration of entrapped SRB (χ_SRB). When normalizing the signal to the intensity measured at the starting point for content mixing (i.e., I/I₀ = I_χ/I_{χ=1}), a linear relationship to χ_SRB was obtained. The normalized signal was largely independent of temperature (20, 37, 50 °C) and of the liposome concentration used (80, 138 and 275 µM total lipid concentration). The effective χ_SRB was calculated for each sample based on the SRB concentration after lysis in 0.1% w/v Triton X-100 (obtained against the standard curve of unentrapped SRB). The linear regression in the main text was calculated from a concatenated plot of three samples for each χ_SRB = 1, 0.8, 0.5, 0.33, corresponding to [SRB] of 20 mM, 16 mM, 10 mM or 6.7 mM, respectively. | 2019-09-25T14:52:50.350Z | 2019-09-25T00:00:00.000 | {
"year": 2019,
"sha1": "86aa8baa467d95d5f8db798aba532d568bd96b16",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-019-49862-y.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "86aa8baa467d95d5f8db798aba532d568bd96b16",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
95623018 | pes2o/s2orc | v3-fos-license | Empirical correlation of the surface tension versus the viscosity for saturated normal liquids
In 1966 Pelofsky proposed an empirical linear correlation between the natural logarithm of the surface tension and the reciprocal viscosity, which seems to work adequately for a wide range of fluids. In particular, it has been shown that it is useful in the case of n-alkanes and their binary and ternary mixtures. More recently however, it has been found not to work for several ionic liquids unless the reciprocal viscosity is raised to a power. The exponent of this power was fixed to be 0.3, at least for the studied ionic fluids. In the present work, the performance and accuracy of both the original Pelofsky correlation and the modified expression including the exponent are studied for 56 non-ionic fluids of different kinds over a broad range of temperatures. Also, the temperature range is delimited for which each expression reproduces the surface tension values with average absolute deviations below 1%. The needed coefficients are given for both the broad and the delimited temperature range for each expression. Unfortunately, the results show that the value of the exponent in the modified Pelofsky expression is substance-dependent for the normal fluids studied.
ionic liquids. From considering the non-linear behaviour of the fluidity (1/η) with temperature, they concluded that it could be appropriately correlated by using the modified expression [10], in which the fluidity raised to a power is a linear function of temperature, where a and b are substance-dependent coefficients and φ is a characteristic exponent. They examined 49 ionic liquids, and found that Eq. (2) fits the temperature-dependent viscosity of those liquids quite accurately with just a single universal exponent φ = 0.300. Since the surface tension of fluids, including ionic liquids, is almost perfectly linear with temperature, and using the P expression of Eq. (1), Ghatee et al. proposed the MP relationship between the two properties as ln σ = C + D(1/η)^φ [10], where C and D are substance-dependent constants, and φ takes the universal (for the ionic liquids studied in [10]) value of 0.3. They report an overall AAD value of only 0.098%, which clearly demonstrates the accuracy of Eq. (3) for the cases they considered.
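For reference, the three correlations as we reconstruct them from the abstract and the surrounding discussion (the displays themselves were lost in extraction, so the exact typography and the form of Eq. (2) are our assumptions; σ is the surface tension and η the viscosity):

\ln\sigma = A + B\,\frac{1}{\eta} \quad (1),
\qquad
\Big(\frac{1}{\eta}\Big)^{\varphi} = a + b\,T \quad (2),
\qquad
\ln\sigma = C + D\,\Big(\frac{1}{\eta}\Big)^{\varphi} \quad (3),

with φ = 0.3 reported as universal for the ionic liquids of [10], and φ = 1 in (3) recovering the original P correlation (1).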
As noted in the Introduction, it is interesting to determine whether the P correlation works appropriately for common fluids, and what its accuracy is when it is applied to predict surface tension values. Since the P expression includes the natural logarithm of this property, and since surface tension values differ greatly from one substance to another, a good linear relationship does not necessarily mean that low AAD values will be obtained with the corresponding predictive expression.
One needs to consider both the applicability of the P expression (i.e., whether such a linear relationship can be established) and its accuracy in predicting surface tension values. We therefore calculate here R² (i.e., the square of the linear correlation coefficient, which denotes the validity of the linear correlation between ln σ and 1/η) and the AAD for every fluid over as wide a temperature range as possible. We perform the same study for the MP expression, in this case also examining whether any improvement observed is significant with respect to the original expression, which has one less adjustable coefficient. We also delimit the temperature ranges in which each expression gives excellent accuracy (AAD < 1%). These calculations were done for 56 fluids, including inert liquids such as argon, polar liquids such as water, non-polar fluids such as carbon dioxide, and several hydrocarbons and refrigerants.
As indicated above, we used data obtained from the NIST Web Book [15] because they are sufficiently accurate and are publicly and straightforwardly available. The data on the saturation curves are available for a certain temperature range, usually between the triple and the critical points, and limited to a maximum of 201 data points. Since the surface tension is defined as zero at the critical point, we excluded this datum, so that the default number of data for each fluid was 200. Nevertheless, we found that for certain fluids the surface tension and viscosity data are not both available for some low or high temperature ranges. For instance, in the case of the refrigerant R11 there are data for the surface tension near both the triple and the critical points, but not for the viscosity, so that, in this case, the final number of data used was only 175.
We note that, according to Mulero et al. [16], in the cases of ammonia and neon the presently available NIST surface tension data are inadequate, so that we used as a proxy the new correlation recommended in that work to generate the appropriate values.
The 56 fluids studied and their critical point temperatures are listed in Table 1, in alphabetical order, for three kinds of substances: refrigerants, hydrocarbons, and other common fluids. The data used start at the temperature T₀, which is just the triple point temperature except for the fluids marked with an asterisk, and finish at the temperature T_f, which is automatically selected by the software in the NIST Web Book and is as near to the critical point temperature as possible. In the table, N is the number of data used for every fluid, and φ is the value of the exponent in the MP expression (which is exactly 1 in the case of the P correlation).
For every fluid, we checked the performance of the two correlations by calculating the linear coefficient of determination R² as the square of the Pearson linear correlation coefficient between ln σ and the corresponding power of 1/η. The fits were made using the "polyfit" command in the Matlab software package (see Ref. [17] for details).
The R² value is a measure of how well the linear correlation fits the data. Nevertheless, from a practical point of view it is interesting to know how accurate these expressions are when they are used to predict surface tension values. In general, there is no direct relationship between the value of R² and the accuracy of the calculated surface tension. To calculate the AAD, we first calculate the percentage deviation (PD) between the values obtained from the correlation by introducing the viscosity as input, σ(η_i), and the data offered by NIST, σ_i, as PD_i = 100·[σ(η_i) − σ_i]/σ_i. We note that a positive PD_i value means that the model overestimates the accepted datum, whereas a negative PD_i value means that the model underestimates it. Then we calculate the average absolute percentage deviation for every fluid, AAD = (1/N)·Σ_{i=1}^{N} |PD_i|, where N is the number of data considered for each fluid. It has to be borne in mind that, since the AAD is a percentage, it is influenced by the higher individual PD values that are usually found at high temperatures, where the surface tension goes to zero and hence the relative deviations tend to increase considerably. In principle, a low AAD value does not mean that the PD values are low over the entire temperature range considered, so that high PDs may be found at particular temperatures, usually near the critical point.
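As a minimal computational sketch of the procedure just described (a numpy stand-in for the Matlab polyfit workflow; the function and variable names are ours, not the paper's, and the P correlation is obtained with phi = 1):

import numpy as np

def pelofsky_fit(sigma, eta, phi=1.0):
    """Fit ln(sigma) = c0 + c1*(1/eta)**phi; phi = 1 gives the P correlation,
    a fitted phi would give the MP correlation. Returns (c0, c1, R2, AAD)."""
    sigma = np.asarray(sigma, dtype=float)   # surface tension data (e.g., from NIST)
    eta = np.asarray(eta, dtype=float)       # viscosity data at the same temperatures
    x = (1.0 / eta) ** phi                   # (reciprocal viscosity)^phi
    y = np.log(sigma)                        # natural logarithm of surface tension
    c1, c0 = np.polyfit(x, y, 1)             # slope and intercept of the linear fit
    r = np.corrcoef(x, y)[0, 1]              # linear correlation coefficient
    r2 = r ** 2                              # R^2 as its square
    sigma_calc = np.exp(c0 + c1 * x)         # surface tension predicted from viscosity
    pd = 100.0 * (sigma_calc - sigma) / sigma  # percentage deviations, Eq. (5)
    aad = np.mean(np.abs(pd))                # average absolute deviation, Eq. (6)
    return c0, c1, r2, aad

The same routine applied with several trial values of phi, keeping the one minimizing the AAD, mimics the substance-dependent exponent search discussed below.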
In the following subsections we shall analyse the results considering both the R 2 and the AAD values, presenting several figures by way of illustrative examples. In those figures, not all the N data used will be plotted for the sake of clarity in showing the behaviour of the correlations.
a) Results for a wide temperature range
The R² and AAD values for the 56 fluids, obtained using both the P and the MP correlations, Eqs. (1) and (3), are listed in Table 1. The AAD values for the P correlation are below 2% only for R11, R113, R115, R245ca, and nonane, so that the accuracy of the P expression in reproducing surface tension values is very limited. In particular, AADs greater than 20% are found for water, oxygen, and deuterium oxide, for which the P model is therefore clearly inadequate in the wide temperature range considered.
In the following, we shall examine some examples of the different behaviours. An example of good performance and accuracy of the P expression is found for R245ca. Figure 1a shows that the P expression can adequately reproduce the data for this fluid with low absolute deviations (AAD = 0.74%). In any case, as noted above, one needs to bear in mind that near the critical point (i.e., for temperatures near T_f = 443.86 K) the PD values increase significantly, even to above 20%. The PD values are shown in Fig. 1b, in which values greater than 6% are not displayed.
Although for R245ca the MP expression is in principle not needed, the results are slightly improved by using an exponent value of φ = 1.0090 (see Table 1). The improvement at intermediate and low temperatures can be seen in the right-hand part of Fig. 1b.
When the P correlation is used for the lower n-alkanes, as was done by Queimada et al. [12], the results are inadequate, the only exception being nonane. As can be seen in Fig. 2, the correlation with the coefficients proposed by Queimada et al. agrees with the NIST data in the high-temperature range but deviates near the critical point.
The difference can be appreciated in Fig. 3a, where only the values at high temperatures are shown.
One can also see that the P and MP expressions agree very well in this temperature range, so that the modification is not needed. As shown in Fig. 3b, the improvement of the MP expression with respect to the two P correlations is significant only at low temperatures, i.e., near the triple point.
The AAD value is only slightly reduced by using the MP expression instead of the P one. In general, the MP expression reduces the AAD values for all the n-alkanes, with the values obtained ranging from 1.7% for nonane to 4.30% for butane. If more accurate results are needed, the temperature range of validity has to be reduced. We shall consider this in the next subsection.
For some fluids, obtaining good linearity of ln σ versus 1/η, i.e., a high R² value, does not necessarily mean that the AAD value is low. A clear example is R152a. As can be observed from Table 1, the P expression led to a higher R² value than the MP one. When the surface tension is calculated from the viscosity values, the higher absolute deviations are seen only at low temperatures, as shown on the right-hand side of Fig. 4 (high viscosity values). Indeed, the P expression works better than the MP one at these temperatures. Nevertheless, one observes in Table 1 that the MP correlation gives a lower AAD value. This is because the AAD is a relative deviation, and the PDs are higher at higher temperatures, i.e., for the lower surface tension values (see the previous example in Fig. 1). In order to obtain better results for this and other fluids, it is necessary to reduce the temperature range of application of both the P and the MP correlations.
When the MP correlation is used, the surface tension data are reproduced with AADs below 2% for 13 of the 56 fluids (only 5 fluids in the case of the P correlation). The highest AAD value was obtained for R236ea (7.28%), despite the corresponding R² value not being low (0.997). This also means an improvement with respect to the P correlation, which gave AADs > 10% for 19 fluids (Table 1). For 34 fluids, such as R143a, carbon monoxide, deuterium oxide, hydrogen, oxygen, water, etc., the improvement is very significant. There are 7 fluids, however, for which the MP correlation is not significantly more accurate than the P one: R113, R123, R124, R141b, R218, R245ca (see Fig. 1), RC318, and nonane. Unfortunately, and contrary to the case of the ionic liquids studied by Ghatee et al. [10], the exponent does not take a fixed value, but varies from 0.8354 (dodecane) to 2.3792 (water).
Let us now consider two examples of the performance and accuracy of the MP expression. First, for argon the AAD is clearly reduced and the R² value is clearly increased. For this fluid, the data do not behave exactly linearly, although large deviations are only found at high temperatures, i.e., near the critical point. Figure 5 shows the improvement of the MP correlation in reproducing the surface tension data when compared with the P one, especially at low temperatures (high viscosity values). In any case, some high PD values are found at high temperatures, so that the AAD obtained using the MP correlation is still high. As for other fluids, the temperature range of applicability has to be reduced in order for the model to work more accurately.
A clearer example of the improvement reached with the MP correlation is shown in Fig. 6, which is the case of hydrogen. For this fluid, the AAD is reduced from 18.95% with the P model, which is clearly inadequate, to only 2.89% with the MP.
Despite the good performance and accuracy of the MP correlation for several fluids, it cannot be applied with high accuracy over the whole temperature range from T 0 to T f . If AAD values below 1% are needed in this temperature range, then it is adequate only for R115, R245ca, and neon. For the other fluids, the temperature range of applicability has to be reduced. This is considered in the next subsection.
b) Accurate results for a reduced temperature range
As indicated above, we have determined the temperature range in which the obtained AAD is below 1% for each of the two correlations considered. This temperature range starts at the T₀ value given in Table 1 and finishes at the new temperature values T_1p and T_1m, for the P and MP models, respectively.
These temperature values, expressed in reduced units, i.e., divided by the critical point temperature, are given in Table 2. The new coefficients to be used in Eqs. (1) and (3) are given in Tables 3 and 4.
In the particular case of R245ca, both correlation models give AAD < 1% over the whole temperature range considered in the preceding subsection, so that the T 1p , T 1m , and T f values are identical. The same is the case when the MP model is applied to R115 and neon. For R113, the T 1p and T 1m values are identical and very close to T f , which was to be expected since the AADs over the whole temperature range were both around 1.30%. For R32, the T 1p and T 1m values are identical, but clearly lower than T f . This is because, for the whole temperature range, the AADs are 10.49% and 5.23%, respectively. The same is the case with hexane. In all the other cases, the reduced temperature range for the MP correlation is wider than that corresponding to the P correlation. In some cases, such as R11, R125, hydrogen sulfide, and octane for instance, the difference between the T 1p and T 1m values is very small, so that both correlation models are accurate in this reduced temperature range.
For the rest of the fluids, the improvement of the MP correlation model with respect to the P one is clearer. Let us consider the case of R152a, for instance. The behaviour of the correlations for the whole temperature range is presented in the data of Table 1 and in Fig. 4. Figure 7 shows that the MP correlation is very accurate from the lowest temperature to the point marked with an asterisk, which corresponds to the value at T 1m = 0.91 T c and which is a wider temperature range than that corresponding to the T 1p = 0.727 T c value. As is shown in the figure, when the new coefficients are used, the correlation cannot be extrapolated beyond the indicated temperature range.
There are 13 substances for which the reduced temperature range in which the MP correlation is very accurate is significantly larger than the corresponding P temperature range: R13, R123, R124, R141b, R142b, R218, R227ea, heptane, isobutane, propane, carbon monoxide, hydrogen, and parahydrogen. In the case of hydrogen, the corresponding surface tension and viscosity values for T 1p and T 1m are those shown in Fig. 6. As can be seen, the range of values in which the MP model can be very accurately applied is very wide.
The greatest differences between the reduced temperature ranges are found for R124, isobutane, and propane. For R124, we found a clear improvement at low temperatures (high surface tension and viscosity values) when the correlation models are used in the reduced temperature range. The behaviour at high temperatures is shown in Fig. 8. As an example, in the case of water, the highest PD value is now around −5%. Fig. 10 shows the behaviour and accuracy of the proposed correlation for the reduced temperature range (that of Table 5) when applied to the whole temperature range (that in Table 1). As can be seen, the extrapolations lead to very inadequate results, so that the coefficients listed in Table 5 can be used only for the indicated temperature range.
Conclusions
We found that, over the wide temperature range, the P expression reproduces the surface tension data with AADs below 2% for only five fluids (four refrigerants plus nonane), and with AADs greater than 20% for water, oxygen, and deuterium oxide. It can therefore be concluded that the performance and the accuracy of the P expression are very limited for the selected fluids and temperature ranges. This conclusion runs contrary to that of Pelofsky [13], the reason being that we have here considered different substances and temperature ranges.
With the use of the MP correlation, the surface tension data were reproduced with AADs below 2% for 13 of the 56 fluids considered, the poorest value being 7.3%. The improvement with the MP correlation was very significant for 34 fluids, but not significant for 7 fluids. Unfortunately, unlike the case of some ionic liquids, the exponent in the MP correlation did not take a fixed value, but varied from 0.8354 to 2.3792.
Since in both cases the AAD values were greater than 1% for most of the fluids, we delimited the temperature ranges to AAD < 1% for each correlation. The required coefficients were given for both the entire and the reduced temperature ranges for each expression. In all cases, the temperature range was reduced by excluding from consideration some high temperatures since at these temperatures the surface tension tends to zero so that the deviations increase significantly in percentage terms.
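The delimitation just described can be illustrated with a brief computational sketch. For illustration only, the P correlation is assumed here to have the Pelofsky-type form ln σ = A + B/η and the MP correlation to add an adjustable exponent, ln σ = A + B/η^C (the exact forms are those of Eqs. (1) and (3), which are not reproduced in this section); the data arrays are placeholders, not values taken from the tables or from NIST.

import numpy as np
from scipy.optimize import curve_fit

# Placeholder data: reduced temperature T/Tc, surface tension (mN/m), viscosity (uPa s)
Tr    = np.array([0.40, 0.45, 0.50, 0.55, 0.60, 0.65, 0.70, 0.75, 0.80, 0.85])
sigma = np.array([22.1, 19.8, 17.4, 15.0, 12.6, 10.3,  8.1,  6.0,  4.1,  2.4])
eta   = np.array([420., 340., 280., 232., 194., 163., 137., 115.,  96.,  80.])

def p_model(eta, a, b):            # assumed P (Pelofsky-type) form: ln(sigma) = a + b/eta
    return a + b / eta

def mp_model(eta, a, b, c):        # assumed MP form with an adjustable exponent c
    return a + b / eta**c

def aad(y, y_fit):                 # average absolute percentage deviation
    return 100.0 * np.mean(np.abs((y_fit - y) / y))

def reduced_range(model, p0):
    """Highest reduced temperature that can be kept so that the fit gives AAD < 1%."""
    for n in range(len(Tr), 2, -1):                        # drop the highest temperatures first
        coef, _ = curve_fit(model, eta[:n], np.log(sigma[:n]), p0=p0, maxfev=20000)
        if aad(sigma[:n], np.exp(model(eta[:n], *coef))) < 1.0:
            return Tr[n - 1], coef
    return None, None

print("T1p/Tc and P coefficients :", reduced_range(p_model, (1.0, 100.0)))
print("T1m/Tc and MP coefficients:", reduced_range(mp_model, (1.0, 100.0, 1.0)))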
This reduction of the temperature range was not needed in the case of R245ca, for which both correlations were very accurate over the entire range. In the case of R115 and neon, this was the case only for the MP correlation, and for R113, R32, and hexane, there was no difference between using the P or MP correlation in the reduced temperature range. Thus, for those five fluids the use of the simpler P correlation has to be recommended. In all the other cases, the reduced temperature range for the MP correlation was wider than that for the P correlation, with the difference being particularly notable for 13 substances.
In the cases of water, oxygen, isobutane, and deuterium oxide, the dependence of ln σ on …

Although it would be desirable, we have not been able to find any significant relationship between the performance and accuracy of the studied correlations and the molecular structure or properties of the different kinds of fluids. The main difficulty in achieving this is that we have considered only empirical correlations and we have studied different kinds of fluids. An extensive theoretical and computer simulation study on a particular kind of fluid would be needed in order to establish a sound relationship between the behaviour of the viscosity and the surface tension.

[Figure legends: crosses, the P correlation proposed by Queimada et al. [12]; the asterisk (*) indicates that the MP correlation can be applied with AAD < 1% at higher surface tension values; the square indicates the same for the P correlation; coefficients of Table 5; circles, selected NIST data.] | 2019-04-05T03:31:15.537Z | 2013-08-25T00:00:00.000 | {
"year": 2016,
"sha1": "38cc7171d0cc5a3f7a96c5ebe5da8b310d2540f6",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1606.09102",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "eaf154c0d63d77d1578841e1b546dae743539f27",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry",
"Physics"
]
} |
233326129 | pes2o/s2orc | v3-fos-license | Neutralism versus selectionism: Chargaff's second parity rule, revisited
Of Chargaff's four "rules" on DNA base frequencies, the functional interpretation of his second parity rule (PR2) is the most contentious. Thermophile base compositions (GC%) were taken by Galtier and Lobry (1997) as favoring Sueoka's neutral PR2 hypothesis over Forsdyke's selective PR2 hypothesis, namely that mutations improving local within-species recombination efficiency had generated a genome-wide potential for the strands of duplex DNA to separate and initiate recombination through the "kissing" of the tips of stem-loops. However, following Chargaff's GC rule, base composition mainly reflects a species-specific, genome-wide, evolutionary pressure. GC% could not have consistently followed the dictates of temperature, since it plays fundamental roles in both sustaining species integrity and, through primarily neutral genome-wide mutation, fostering speciation. Evidence for a local within-species recombination-initiating role of base order was obtained with a novel technology that masked the contribution of base composition to nucleic acid folding energy. Forsdyke's results were consistent with his PR2 hypothesis, appeared to resolve some root problems in biology and provided a theoretical underpinning for alignment-free taxonomic analyses using relative oligonucleotide frequencies (k-mer analysis). Moreover, consistent with Chargaff's cluster rule, discovery of the thermoadaptive role of the "purine-loading" of open reading frames made less tenable the Galtier-Lobry anti-selectionist arguments. Supplementary Information The online version contains supplementary material available at 10.1007/s10709-021-00119-5.
Introduction
For molecular evolutionists, base composition (GC%), nucleic acid higher order structure and optimal temperature are topics of abiding interest. An exploration of their interrelationships by Galtier and Lobry (1997) had, by 2021, received an impressive number of citations (259) that were generally positive. They had argued that Chargaff's second parity rule (PR2) would be best interpreted in neutral terms as the result of "mutational bias" as had been proposed by Sueoka (1995). However, noting "two distinct interpretations," Galtier and Lobry (1997) wrote: The second interpretation is that PR2 is the result of a "selection pressure favoring mutations that generate complementary oligonucleotides in close proximity, thus creating a potential to form stem-loops" (Forsdyke 1995a). According to this hypothesis, deviations from PR2 are deviations from the ideal case, where the whole genome is involved in secondary structures.
Simply stated, Galtier and Lobry favored a neutral explanation for PR2 over Forsdyke's selectionist explanation. Rather than proving Sueoka's proposal, their paper, through "analyzing the effect of temperature on the G + C content of bacterial genomes" set out to disprove Forsdyke: We have looked for this effect in molecules known to be involved in secondary structures (tRNAs, 5S rRNAs, and the stems of 16S and 23S rRNAs) and in the whole genome. If the second selectionist hypothesis is correct, a high proportion of the genomes should be involved in forming secondary structure, so genomic G+C content should follow the same pattern as that of the folded RNA G+C content.
As long argued by Bernardi (1993) and later recognized by Lobry and Sueoka (2002), the premise that the base compositions (GC%) of thermophile genomes, and of the corresponding mRNAs which function primarily as templates, would need to adapt to high temperatures in the same manner as tRNAs and rRNAs, which function primarily by virtue of their structures, was incorrect. Prompted by a recent commentary (Meyer 2021), after clarification of terminology relating to Chargaff's PR2, and description of structure analysis methodology that relates to Chargaff's GC rule (Forsdyke and Mortimer 2000), the selectionist case is here updated.
Chargaff's second parity rule terminology

Sueoka (1995) referred to the equimolar pairing of purines and pyrimidines in duplex DNA (Watson and Crick 1953) as in accordance with the "interstrand base-pairing rule (BPR)" (my italics). However, he then went on to distinguish two forms of intrastrand parity, which he termed "PR1" and "PR2": "This article presents two types of intrastrand parity rules: the type 1 parity rule (PR1) is concerned with base substitution rates within one strand of DNA, and the type 2 parity rule (PR2) is concerned with the base composition at equilibrium within one strand of DNA." Subsequently these terms came to be applied differently. Sueoka's BPR has become known as "Chargaff's first parity rule" (PR1). The base equivalence in single-stranded DNA that Chargaff later discovered has become known as "Chargaff's second parity rule" (PR2). The numbering reflects discovery chronology, not the number of strands interacting. Given the assumption that PR2 equivalences resemble PR1 as indicating equimolar pairing (A with T and G with C), it follows that PR2 has implications for the possible structures that a single-stranded nucleic acid, be it RNA or DNA, can adopt (Forsdyke and Mortimer 2000).
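To make the distinction concrete, PR1 concerns pairing between the two strands of the duplex, whereas PR2 concerns base equivalence within a single strand. A minimal sketch of how GC% and PR2 compliance might be computed from one strand is given below; the sequence shown is a toy example, not data from any of the cited studies.

from collections import Counter

def intrastrand_parity(strand):
    """GC% and PR2 deviations (A vs T, G vs C) computed within a single strand."""
    counts = Counter(strand.upper())
    a, t, g, c = (counts[b] for b in "ATGC")
    total = max(a + t + g + c, 1)
    gc_percent = 100.0 * (g + c) / total
    at_deviation = abs(a - t) / max(a + t, 1)   # 0 when A = T, as PR2 predicts
    gc_deviation = abs(g - c) / max(g + c, 1)   # 0 when G = C, as PR2 predicts
    return gc_percent, at_deviation, gc_deviation

print(intrastrand_parity("ATGCGCGTTAACCGGATATCCGGTTAA"))  # toy single-stranded sequence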
Nucleic acid structure, speciation and Chargaff's GC rule
Taxonomists construct phylogenetic trees that model species diversification. Organisms showing similar characters are placed close together. Organisms showing different characters are placed at greater distances. Some characters are more useful for tree construction than others. When nucleic acid sequences became available, they were used for tree construction. The closer two sequences, the closer were considered the corresponding organisms. To count the number of base differences between two sequences, various methods for comparative alignments of long strings of bases were introduced. However, the approach was empirical and did not consider the possibility that some aspects of sequences, other than base strings, might better relate to the underlying evolutionary processes that had perhaps initiated, or otherwise fostered, divergence. For example, an early indication of language divergence is a difference in accents. In this case, lining up long texts would necessitate the inclusion of much redundant information. Some measure of accent difference should better display the relationship between the emerging languages because redundancies would be eliminated.
Chargaff recognized that the quantity of G + C relative to that of A + T (expressed as GC%) was a species characteristic, his "GC rule" (Forsdyke and Mortimer 2000). Just as accent or dialect affects a spoken text in its entirety, so base composition (GC%) tends to uniformity either genome-wide or in large genomic sectors (Bernardi 1993). Many differences in base composition that accumulate between species over time are selectively neutral. These interspecies mutations influence genome-wide oligonucleotide (k-mer) frequencies, of which base composition (GC%) is an indicator (see later). Changes in these frequencies could, in turn, influence the "kissing" interactions between the loops of stem-loop structures (Kleckner and Weiner 1993), so acting to generate, and/or sustain, members of emerging species by preventing recombination with parental forms (Forsdyke 1996, 2014, 2019a). Thus, measurement of base composition (GC%) is a simple method of eliminating the influence of base order that, for displaying a fundamental relationship between species, is redundant information.
However, sometimes the reverse is required. Just as a local arrangement of words conveys specific meaning to a text, so base order should better reflect local non-uniformity (variation) within members of a species. To clarify this, a method for eliminating the influence of base composition is needed. Since the energetics of the folding of a single-stranded nucleic acid into a stem-loop structure depend on both the composition and order of its bases, a localized sequence (e.g., a 200-base "window" in the sequence) that is rich in the strongly-pairing bases G and C will tend to have a stable structure simply by virtue of its base composition, rather than of its unique base order. This high GC% value can obscure the contribution of the base order-dependent component of the folding energy, which provides a sensitive indicator of local intraspecies pressures for the conservation of function within a population (i.e. a mutated organism is eliminated by natural selection so it can no longer be assayed for function in the population). Thus, elimination of the base composition-dependent component should facilitate focus on this local folding.
In studies of RNA virus structure, Le and Maizel (1989) compared calculated structures of natural RNA sequences with the structures of the same sequences that had been shuffled to randomize base order (i.e. the base order-dependent component of the folding energy in the natural sequence was eliminated). They found that base-randomized sequences generally had weaker folding energy values and, on these grounds, they concluded that the natural folding was statistically (and perhaps biologically) significant. However, by subtracting values for shuffled sequences from those for the natural sequence, the Le-Maizel methodology permitted a distinction between the contributions of base composition (a genome-wide function) and base order (a local function). Thus, with a pipeline between the various programs that were offered by the Wisconsin Genetics Computer Group, the base composition and base order-dependent components were separated and individually assessed ("folding of randomized sequence difference" analysis; FORS-D analysis; Forsdyke 1995a, b, 2013, 2014; Xu et al. 2007; Zhang et al. 2008a).
In practice (Fig. 1), a window of 200 bases is moved stepwise along a natural sequence. A folding program (Zuker 1989) is applied to the sequence in each window to obtain "folding of natural sequence" (FONS) values for each window, to which both base composition and base order will have contributed. The four bases in each sequence window are then shuffled to destroy their order while retaining base composition. The folding energy is then again determined. This shuffle-and-fold "Monte Carlo" procedure is repeated ten times and the average (mean) folding value is taken as the "folding of randomized sequence mean" (FORS-M) value for that window. This reflects the contribution of base composition alone. The base order-dependent component is then derived by subtraction from the FONS value. This is the "folding of randomized sequence difference"(FORS-D) value. Local fluctuations in FONS profiles of genomes are mostly due to changes in the FORS-D component, whereas the FORS-M component (base composition), while making a major contribution to folding energetics, is relatively constant (Zhang et al. 2008b).
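The window-based procedure just described lends itself to a compact implementation. The sketch below is a minimal illustration of the shuffle-and-fold logic (200-base windows, ten shuffles, as above); the folding engine is deliberately left as an injected function, since in practice it would be supplied by a folding program such as Zuker's, and the window step size is an assumption not specified in the text.

import random

def fors_d_profile(sequence, fold_energy, window=200, step=50, n_shuffles=10, seed=1):
    """Window-by-window FONS, FORS-M, and FORS-D values for a nucleotide sequence.

    fold_energy(seq) must return the optimal folding energy (kcal/mol) of seq,
    e.g., via the ViennaRNA Python bindings: lambda s: RNA.fold(s.replace("T", "U"))[1].
    """
    rng = random.Random(seed)
    profile = []
    for start in range(0, len(sequence) - window + 1, step):
        bases = list(sequence[start:start + window])
        fons = fold_energy("".join(bases))              # natural sequence: composition + order
        shuffled = []
        for _ in range(n_shuffles):                     # Monte Carlo shuffling step
            rng.shuffle(bases)                          # destroys base order, keeps composition
            shuffled.append(fold_energy("".join(bases)))
        fors_m = sum(shuffled) / n_shuffles             # base composition-dependent component
        fors_d = fons - fors_m                          # base order-dependent component
        profile.append({"start": start, "FONS": fons, "FORS-M": fors_m, "FORS-D": fors_d})
    return profile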
This approach was employed by others who, rather than shuffling the four bases, favored retaining some base order information (Workman and Krogh 1999). Accordingly, they shuffled groups of bases (e. g. the sixteen dinucleotides). Following disparagement of the conceptual basis of four base shuffling, which was duly clarified (Forsdyke 2007a), the validity of single base level shuffling is now generally accepted and is being applied routinely to viral genomes (Witteveldt et al. 2014;Andrews et al. 2018;Simmonds 2020). The Monte Carlo procedure can also be simplified to decrease FORS-M computational time (Chen et al. 1990;Washietl et al. 2005). Software ("Bodslp") written by Jiansheng Wu (Zhang et al. 2008a), retains the Monte Carlo approach and was developed by Shungao Xu as "Random Fold-Scan" for Windows-based systems (Xu et al. 2007).
In addition to assisting the study of infectious viruses and protozoa (Xue and Forsdyke 2003), FORS-D analysis proved fruitful when applied to topics such as speciation (Forsdyke 1996, 2014; Zhang et al. 2008b), the origin of introns (Forsdyke 1995b, c, 2013), relating structure to recombination breakpoints and deletions (Zhang et al. 2005a, b), use of a single sequence (rather than alignments) for the determination of positive Darwinian selection (Forsdyke 2007b), and showing that a highly conserved region in HIV-1 genomes associates with an RNA packaging signal (Forsdyke 1995d). The latter is now seen as a potential "Achilles heel" (Ingemarsdotter et al. 2018), so encouraging a similar approach to SARS-CoV-2 (Zhang and Forsdyke 2020).
Update on selectionism: roles of higher K-mers
At the time of the paper of Galtier and Lobry (1997), the neutralist-selectionist controversy was beginning to swing in favor of selectionism, at least among geneticists (Hey 1999): The findings on codon bias, the rediscovery and relevance of the Hill-Robertson effect and the fact that many loci reveal nonneutral local patterns of variation suggest that natural selection is highly pervasive at the DNA level. To a scientist bred on neutrality, the discoveries of recent years present some daunting theoretical and empirical challenges. A rejection of neutrality is not the same thing as an understanding of natural selection, and the discoveries mean that evolutionary genetics has become more difficult than it once seemed.

Fig. 1 Determination of the base order-dependent component of stem-loop (fold) potential by subtracting the base composition-dependent component from total stem-loop potential. A natural sequence (horizontal red line at left) when optimally folded (vertical arrow at left) is calculated to have a certain stability (e.g., −30 kcal/mol). Its base order is then randomized to produce ten shuffled sequences that share only their base compositions with the originating natural sequence. These are then optimally folded to obtain corresponding stability values. Idiosyncrasies, due to the base order that each randomized sequence has acquired due to the shuffling, are averaged out (at right) to determine the contribution of base composition to the total fold potential. The contribution of base order is determined by subtraction. This figure is reproduced, with permission, from Forsdyke (2016).
Indeed, the handling editor of the Galtier-Lobry paper, Giorgio Bernardi, supported the selectionist viewpoint (D'Onofrio et al. 1999; Bernardi 2000): The low GC levels of some thermophilic bacteria do not contradict, as claimed (Galtier and Lobry 1997), the selectionist interpretation ... . Indeed, different strategies were apparently developed by different organisms to cope with long-term high body temperatures. It is now known that the DNAs of such thermophilic bacteria are very strongly stabilized by particular DNA-binding proteins (Robinson et al. 1998) and that, in turn, their proteins can be stabilized by thermostable chaperonins (Taguchi et al. 1991).
A further discussion of base composition in thermophilic organisms, noting especially their purine-enrichment (in keeping with Chargaff's cluster rule), was provided by Forsdyke and Mortimer (2000), Hurst and Merchant (2001) and Lambros et al. (2003). These affirmed that a positive correlation between (G + C)% and optimum growth temperature would mainly apply to RNAs whose primary function was structural (ribosomal and transfer RNAs) and would not apply to mRNAs and the corresponding genomic DNA sequences from which they had been transcribed. Lobry and Sueoka (2002) conceded the possibility of error: On the other hand, one may assume that some [base] positions are not free to deviate from PR2 because of selective pressure for some function. For instance, there are 100 copies per enterobacterial genome of palindromic sequences in intergenic spaces [Bachellier et al. 1999], which may be under selective pressure to preserve their palindromic character and therefore follow PR2 (as pure palindromic sequences are effectively base-paired).
In later papers Lobry moved further from neutral theory and also recognized a possible selective basis for purine-enrichment: "Compared to mesophilic species, thermophilic genomes are significantly enriched in purines and in purine-clusters" (i.e. local violations of PR2), for which "a possible explanation" is "the existence of a selective pressure to avoid undesirable RNA-RNA interactions" , as had been suggested by Cristillo et al. (1998). Thus, finding that: "The behavior of the most discriminating codon AGG with respect to G + C content is especially puzzling," they reported that "relatively high AGG frequencies prevent us from excluding the hypothesis of a selective pressure in favor of this codon …. Since AGG is a pure-purinic codon, the latter hypothesis may be linked to the observed purine enrichment in thermophilic and hyperthermophilic species." In this regard it was suggested that those seeking to associate certain amino acids with the high stability of thermophile proteins should consider the possibility that those corresponding to purine-rich codons might be mere placeholders (Forsdyke 2015a;Bize et al. 2021).
Despite Lobry's concession, various papers (Forsdyke 2002, 2015b; Forsdyke and Bell 2004), and a textbook treatment (Forsdyke 2016), there is no general agreement on the functional implications of PR2. As part of the 50th anniversary celebration of the Journal of Molecular Evolution, the topic was addressed by Meyer (2021), who noted that "Increasingly it appears that G + C content in genomes may be the result of a combination of neutral and selection processes that are quite subtle (Reichenberger et al. 2015)." Meyer's commentary aimed to "present the motivation behind Galtier and Lobry's original paper and the larger questions addressed by the work, how these questions have evolved over the last two decades, and the impact of Galtier and Lobry's manuscript in fields beyond these questions." Thus, their paper was not chosen for comment simply because it refuted the selectionist viewpoint. It was cited as a helpful resource concerning the influence of optimum growth temperature (OGT) on the GC% of certain RNA species: Despite the specific nature of the hypothesis addressed, the two [major] findings for which this paper is most frequently cited are quite general. The first is the lack of relationship between OGT and genomic G + C content. The second is that G + C content in the stems of the 16S and 23S rRNAs, and generally in the 5S rRNA and tRNAs, does correlate with organismal OGT." Indeed, Galtier and Lobry (1997) cited Forsdyke (1995a, b) which, together with a paper the following year (Forsdyke 1996), provide a grounding for several of "the larger questions" as will be summarized here. Thus, Meyer (2021) mentioned three applications of their two major findings: The two major findings of Galtier and Lobry have spurred significant further work that encompasses a range of different applications that take advantage of the relationships between OGT, structured RNA G + C content, and genomic G + C content. These include: prediction of organism OGT based on 16S rRNA sequence, separation or enrichment of DNA extracted from microbial communities for a particular sub-populations based on G + C content, and computational methods for structured RNA identification." Two of these three applications relate to the work cited by Galtier and Lobry. A grounding for computational methods for structured RNA identification was, as has been discussed above, provided by Forsdyke (1995b). The separation of DNA from community subpopulations, currently known as "metagenomics" (alignment-free k-mer analysis; Forsdyke 2019a; Bize et al. 2021), is grounded on analysis of relative oligonucleotide frequencies (Forsdyke 1995a). The latter paper began by noting that, for example, a high GC% species would have more GC-rich oligonucleotides (e.g., GCG, GAG, GCC) than AT-rich oligonucleotides (e.g., ATA, AGA, ATT). Intriguingly, even if they had equivalent GC% values, the k-mer patterns of phylogenetically close taxa were more similar than those of distant taxa. Would a specific oligonucleotide pattern be a consequence of a primary evolutionary pressure for a high GC%, or vice versa? Which was the cart and which was the horse: mononucleotide or oligonucleotide? Essentially the same question was posed by Blake et al. (1992) when considering the context-dependence of point mutations with special reference to immediate upstream and downstream bases (i.e. k-mers where k = 2 or 3): The classic studies of Benzer (1961), showing the sites of … genes of T4 mutate with different efficiencies, could be seen as reflecting different neighbor effects at the several steps in the fixation of mutations. Benzer designated sites of high efficiency as hotspots. ... An influence of the neighbor environment could also be partially responsible for the biases in neighbor frequencies in sequences (Josse et al. 1961). Whether such bias is a cause or an effect of one or more of these events remains to be demonstrated … .
A primary role for higher k-mers was argued on theoretical grounds (Forsdyke and Bell 2004), which were supported by statistical analyses of human genomes that indicated a k-mer optimum of at least 7, and were held to reveal principles that, although undetermined, were "fundamental" (Aggarwala and Voight 2016): Although the underlying mechanisms that determine how nucleotide sequences change over time remain to be addressed, we posit that the features identified from our model provide important clues in elucidating these fundamental principles." Along similar lines, Morozov (2017) reported a k-mer optimality, ranging from k = 5 to k = 7, among a wide variety of species and pondered: Thus, there is no question of whether k-mer distribution … is species-specific or whether the divergence of these distributions correlates with evolutionary distances. However, there is no answer to why it does.
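The species-specific character of k-mer distributions mentioned above can be made concrete with a short sketch: two sequences with the same GC% can still be separated by their relative oligonucleotide frequencies. The code below is purely illustrative (toy sequences; a Euclidean distance is chosen only for simplicity); it does not reproduce the procedure of any of the studies cited.

from collections import Counter
from itertools import product

def kmer_frequencies(seq, k=3):
    """Relative k-mer frequencies: an alignment-free 'signature' of a sequence."""
    seq = seq.upper()
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = max(sum(counts.values()), 1)
    return {"".join(p): counts["".join(p)] / total for p in product("ACGT", repeat=k)}

def kmer_distance(seq_a, seq_b, k=3):
    """Euclidean distance between the k-mer frequency vectors of two sequences."""
    fa, fb = kmer_frequencies(seq_a, k), kmer_frequencies(seq_b, k)
    return sum((fa[m] - fb[m]) ** 2 for m in fa) ** 0.5

# Two toy sequences with identical base composition (50% GC) but different base order
s1 = "ATGC" * 20
s2 = "AATTGGCC" * 10
print(round(kmer_distance(s1, s2), 3))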
Within-species recombination and between-species antirecombination would seem to be the selective principle they seek (Forsdyke 1996, 2014, 2019a; Bozdag et al. 2021). Supporting recombination, natural selection seeks out higher order k-mers, and the quantity of 1-mers (GC%) is secondary to this. Meyer (2021) rightly asked for the "causes for G + C content variability: neutral processes or natural selection?" A biological grounding for this was provided in terms of primarily neutral variation that can generate k-mer differences that inhibit recombination and initiate sympatric speciation (reproductive isolation) prior to the involvement of natural selection in the speciation process. Yet, following Chargaff's cluster rule, the selective influence of "R-loading" (base skew in favor of A and G; Cristillo et al. 1998) has the potential to locally (in open reading frames) impact GC% to the extent that a reciprocal relationship between GC% and AG% can emerge (Lao and Forsdyke 2000; Mortimer and Forsdyke 2003).
Indeed, citing Bell and Forsdyke (1999), Meyer (2021) noted that "sequences that are actively transcribed also tend to display purine loading." This creates a base-skew in exons which violates PR2 more than in introns. Hence there is usually more DNA secondary structure potential in introns, especially in genes under positive Darwinian selection (Forsdyke 1995c, 2007b). Consistent with this, in a study of genome k-mer frequencies, Bultrini et al. (2003) showed that the second parity rule is followed more closely in intronic and intergenic DNA than in exonic DNA. They concluded: A very interesting feature of the C. elegans intron vocabulary is its being almost entirely composed of pairs of reverse complementary oligos. ... A symmetrical trend is apparent on a scale of a few kilobases in individual C. elegans introns. This short-range property of introns is not simply due to their symmetrical base composition, since it is drastically reduced in randomized introns. Rather, it results from the preferred use of reverse complementary oligomers ... . It would be tempting to link the above symmetry properties of introns to formation of stem-loop structures.
As for mechanism, Meyer (2021) suggested: "The most satisfying explanations for the maintenance of Chargaff's second rule invoke frequent duplication, inversion, and transposition events in the genome" (Albrecht-Buehler 2006). These may indeed favor PR2 maintenance of a base relationship that could, however, have originated in the need for recombination-based error-correction that would be expected to have arisen very early in the evolution of living forms ("introns early"; Forsdyke 1995b, 2013).
Concluding remarks
What Chargaff first described as "regularities" in base composition have since become referred to as his four "rules": PR1, PR2, the GC rule, and his "cluster" rule (Forsdyke and Mortimer 2000; Forsdyke 2016). Among these, functional interpretation of PR2 has proved the most confusing and contentious (see Online Resource 1). Galtier and Lobry (1997) portrayed the problem in stark terms as a contest between neutralist and selectionist viewpoints, centering their case around the papers of Sueoka (1995) and Forsdyke (1995a). They did not clarify Sueoka's terminology and for several years many may have assumed that their thermophile data had settled the issue. Happily, a quarter century later Meyer (2021) reopened the issue for a journal anniversary, touching on the history of a contest between two groups: the neutralists (Galtier, Lobry, and Sueoka, with Motoo Kimura (1924-1994) of "neutral theory" fame (Hey 1999) close by), and the selectionists (Bernardi and Forsdyke and their various coworkers).
Simply stated, Sueoka, whose early studies offered profound insights (Forsdyke 2016, 2019a), was wrong on this issue; so perhaps we should now be more open to selectionist interpretations. The thermoadaptation issue has been laid to rest. Now the focus is on how "larger questions" that were "ultimately the subject of Galtier and Lobry's paper" have "evolved over the last two decades." Meyer pondered: It is still unclear whether G + C content variation may be generated by neutral processes such as mutational bias or biased gene conversion, or is primarily the result of natural selection. Furthermore, even if such variation is the result of natural selection, is selection acting on the genomic DNA itself, or rather on the molecules (e.g. RNAs and proteins) encoded by the DNA?
The answer appears to be that unlike, for example, improving the utilization of a substrate such as glucose that cannot adapt to improve its metabolism by specific enzymes, DNA can adapt to assist the function of enzymes acting upon it. Thus, both substrate and enzymes acting on that substrate can be targeted by natural selection to improve recombination efficiency-hence genome-wide stem-loop potential. Recombination begins with higher order k-mers in loops seeking complementary k-mers on other loops. Failure to find that complementation can initiate speciation.
Meyer considered the strength of the Galtier-Lobry paper was that it "ultimately seeded several other fruitful areas of research," and spurred interest in "far reaching" questions. The primary conclusion of Galtier and Lobry (1997) was that: Our results do not support the notion that selection pressure induces complementary oligonucleotides in close proximity and therefore numerous secondary structures in prokaryotic DNA, as the genomic G+C content does not behave in the same way as that of folded RNA with respect to optimal growth temperature." Although this may be questionable, some of their findings do support a selectionist PR2 viewpoint. Concerning the loops of stem-loop structures and optimum growth temperature (Topt) they noted: "a weak correlation between the G + C content of 16S and 23S rRNA loops and Topt …. It may be due to … a tertiary structure effect." This implies loop-loop interactions that would require equimolar pairing, at a distance, of purines and pyrimidines, so adding to the local compliance of stems with PR2. Furthermore, Galtier and Lobry (1997) agreed that "there are also thermophilic species with low genomic G + C contents, such as Pyrococcus furiosus (Topt: 97 °C, G + C%: 38; Fiala and Stetter 1986), that indicate no selective advantage of a high genomic G + C content at high temperature." Indeed, Bernardi (1993) had considered "the thermal stabilization of genomes might be due not to an increase in G + C but to other physiological adaptations,".
A hopefully growing consensus viewpoint can be described in the following general terms. Whereas RNA molecules transcribed from DNA may primarily function either structurally (e.g., rRNA, tRNA) or as templates for protein synthesis (mRNA), this division of labor is absent from their genomic source. Entire DNA molecules have both structural and templating functions that must sometimes compete for sequence space, a feature that could explain some properties of introns (Forsdyke 1995b, c, 2013). The function of a nucleic acid depends both on which bases it contains (composition) and on their order. A distinction can be made since the contribution of base composition to DNA structural information (stem-loop potential) tends to be relatively uniform and genome-wide and can be equated energetically with the mean of several shuffled versions. On the other hand, specific templating information tends to be irregular and localized, and can best be studied energetically without the contribution of base composition. The contribution of base order to structural functions can be assessed as the difference between the calculated structure of a natural nucleic acid and that of the corresponding randomized (bases shuffled) version (Fig. 1).
The genes for rRNAs have close sequence similarities with those of the rRNAs they encode, which have been shaped by natural selection acting primarily in the cytoplasm. Thus, for the genome locations of these rRNA genes, the DNA structure is strongly under cytoplasmic influence. But rRNA genes are few and far between. They are not representative of entire genomic sequences that encode a multiplicity of dispersed, widely varying, RNAs-including mRNAs-the structures of which are less likely to be under cytoplasmic influence. Thus, mRNA structures primarily reflect the potential of the corresponding DNA sequences to adapt to favor recombination (Kleckner and Weiner 1993). | 2021-04-22T06:18:58.826Z | 2021-03-02T00:00:00.000 | {
"year": 2021,
"sha1": "703f4469d8e95352fe50e6dbc75fb61ee81cc99f",
"oa_license": null,
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10709-021-00119-5.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "0b2c7ccf5d47e64515c04c885684aeedc93e40c6",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Mathematics",
"Medicine"
]
} |
224999872 | pes2o/s2orc | v3-fos-license | Identification and Categorization of Factors Affecting the Adoption of Energy Efficiency Measures within Compressed Air Systems
Understanding the factors driving the implementation of energy efficiency measures in compressed air systems is crucial to improve industrial energy efficiency, given their low implementation rate. Starting from a thorough review of the literature, the need is thus clear to support companies in the decision-making process by offering an innovative framework encompassing the most relevant factors to be considered when adopting energy efficiency measures in compressed air systems, inclusive of the impacts on the production resources and the operations of a company. The framework, designed following the perspective of industrial decision-makers, has been validated, both theoretically and empirically, and preliminarily applied to a heterogeneous cluster of manufacturing industries. Results show that, beside operational, energetic, and economic factors, contextual factors in particular, such as complexity, compatibility, and observability, may highlight critical features of energy efficiency measures whose absence may change the outcome of a decision-making process. Further, the greater awareness and knowledge of the important factors given by the implementation of the framework could play an important role in fostering the implementation of energy efficiency measures in compressed air systems. The paper concludes with further research avenues to further promote energy efficiency and sustainability-oriented practices in the industrial sector.
Introduction
Industrial energy efficiency is widely recognized as a crucial means to mitigate the growing final energy consumption (by more than 25% in the 2018-2040 time span [1]), given that industry is responsible for 35% of global total final energy use [2]. Energy efficiency can also lead to other benefits, such as enhanced security of the energy production systems and a healthier and more comfortable environment [3], plus strategic advantages connected to a less volatile energy market [4], especially in countries strongly dependent on energy imports [5,6]. As discussed by [7,8], previous research has mainly focused on sector-specific energy efficiency measures (EEMs). However, the extreme heterogeneity of the industrial sectors calls for a different approach aimed at promoting specific cross-cutting technologies. Among others, the Compressed Air System (CAS) looks particularly interesting, being widely diffused as an ancillary technology within many industrial processes [9] due to its cleanness, practicality, and ease of use [10]. Usually, industrial compressed air (CA) is generated by using electricity as the energy source and can account for about 10% of the total electricity bill in some contexts [10]. By taking a life-cycle costs perspective on CAS, the largest portion …

• installation of new equipment (e.g., ARC 2,4226 "Use/purchase optimum sized compressors", 2,4224 "Upgrade control compressors", 2,4225 "Install common header on compressors");
• optimization of existing equipment (e.g., ARC 2,4231 "Reduce the pressure of compressed air to the minimum required", 2,4235 "Remove or close off unneeded compressed air lines");
• recovery of extant working conditions (e.g., ARC 2,4236 "Eliminate leaks in inert gas and compressed air lines/valves");
• replacement of compressed air medium (e.g., ARC 2,4232 "Eliminate or reduce the compressed air used for cooling, agitating liquids, moving products or drying", 2,4233 "Eliminate permanently the use of compressed air");
• energy recovery (e.g., ARC 2,2434 from either compressors or ARC 2,2435 from air dryers).
Moreover, efficiency in CAS may be pursued along three directions: preventing energy losses, minimizing energy input, and recovering energy [33]. The IAC database covers the first two directions, whereas the third is only partially covered, since the database refers solely to the recovery of thermal energy. Hence, to cover this gap, an additional EEM related to the adoption of energy harvesting units was added to Table 1.
With respect to other literature addressing EEMs in CAS (e.g., Nehler [11]), the IAC classification has been preferred: Nehler [11] clustered EEMs according to their physical location, so as to recognize their effect on the system and their interrelations, but this leads to significant overlap, since multiple EEMs seem to target the same energy efficiency issue. The IAC classification, instead, allows EEMs to be assessed from an industrial decision-maker perspective. In fact, as reported in Table 1, the implementation of those EEMs should consider several additional operational issues (e.g., accessibility, location, noise) and impacts on other production resources (e.g., labor, through an impact on maintenance activities and/or safety) that are important both for industrial decision-makers and in the wider industrial and scientific literature. Interestingly, the existence of such implications seems to show the need for academic literature to more thoroughly and systematically address the factors that should be considered when adopting an EEM in CAS.

Table 1 (excerpt). EEMs for CAS: measure, category, description, and additional remarks.

• Install compressor air intakes in the coolest location (ARC 2,4221; installation of new equipment). Drawing air from the coolest location [34], whether outside [35] or inside the plant [36], can provide multiple benefits, ranging from efficiency to the regulation range and the avoidance of shutdowns, according to the type of compressor installed [37,38]. Remarks: the location may be difficult to access, with a consequent negative impact on maintenance practices [37,38]; continuous air monitoring is required for an external installation [36]; an additional ventilation system may be required for an internal installation [36].

• Air dryers and condensate management (installation of new equipment). Applications of compressed air, or the wear requirements of the components, need a certain level of air dryness [44], usually guaranteed by refrigerated dryers coupled with a moisture separator and condensate traps.

• Upgrade control compressors (ARC 2,4224; installation of new equipment). Remarks: different control systems exist, with the optimal one depending on the specific application (e.g., see [29,45,53]); a reduction in the required number of compressors may be achieved through a central control system [29]; if a monitoring system is installed together with the central control system, benefits in terms of maintenance and unscheduled downtimes may be obtained.

• Install common header on compressors (ARC 2,4225; installation of new equipment). The closed-loop configuration represents the best air distribution system layout, saving up to 12% of power requirements [42,48,57]; moreover, the installation of a common header enables compressors to work together, taking advantage of load sharing.

• Use/purchase optimum sized compressors (ARC 2,4226; installation of new equipment). Use a compressor able to handle the demand of the system at any time with efficient operation, since oversizing is one of the major problems on the supply side of compressed air systems [48].

• Eliminate or reduce the use of compressed air (replacement of compressed air medium). Remarks: the alternatives to CA are vast, ranging from blowers to air amplification high-performance nozzles [29,39,75], each of them characterized by different features; blowers, for instance, require more space but are easy to implement [76] and are much more efficient for high-volume, low-pressure applications [50,77]. References: [29,30,39,40,45,47-51,70, …].

• Cool the inlet air with a heat exchanger. Lowering the inlet temperature may provide multiple benefits to CAS (see ARC 2,4221). Beside moving the compressor air intake, a cooling effect on the inlet air can be obtained using a heat exchanger [37,38]. Remarks: heat exchangers are easier to install than a change in the compressor air intake, but they require more space [37,38]. References: [37,38,57].

• Remove or close off unneeded compressed air lines (ARC 2,4235; optimization of existing equipment). Compressed air lines should be removed in case of permanent disuse, or temporarily closed, e.g., through shut-off valves, when they remain idle for a certain time during the production cycle [50,80,81]. Remarks: the disconnection may reduce noise, enhance safety, and save the space once occupied by the equipment itself [29,82]; there may be issues in the accessibility of pipes, with consequent hidden costs. References: [29,42,50,80-82].

• Eliminate leaks in inert gas and compressed air lines/valves (ARC 2,4236; recovery of extant working conditions). Leaks are the major single source of consumption in compressed air systems [35,70]. They can be reduced by following operational good practices [49,83] and performing maintenance activities, besides introducing a leak management program [29].

• Cool the air at the compressor outlet (replacement of compressed air medium). Cooling the air at the compressor outlet enables blowdown collection and avoids heat exchangers at the points of use; different cooling systems exist, with the optimal fit depending on the specific case (e.g., see [86,87]). Remarks: maintenance, operating, and installation costs depend on the specific choice; water usage costs and water waste management costs should be considered when dealing with a cooling system where water is the main medium [88]; noise may be reduced after the replacement [88]. References: [39,86-89].

• Do not use compressed air for personal cooling (ARC 2,4238; replacement of compressed air medium). Personnel cooling describes the self-application, by operators, of compressed air for ventilation purposes. An efficient and secure alternative is provided by electrical fans [29]. Remarks: enhances personnel safety, since the flow of compressed air can inject particles into the human skin [29]. Reference: [29].

• Recover heat from air compressors (ARC 2,2434; energy recovery). Up to 93% of the electrical energy used by an industrial air compressor is converted into heat, which can be mostly recovered with a properly designed heat recovery unit [27,42,90]. Remarks: maintenance efforts are higher due to the requirements of the added equipment [39,91]; equipment lifetime may be improved [36]. References: [27,29,34,36,39,40,42,44,45,51,57,90,91].

• Recover heat from compressed air dryers (ARC 2,2435; energy recovery). As for air compressors, heat can be recovered from dryers. This intervention is one of the most convenient in terms of energy efficiency, since the source of energy is often waste [34].
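To give a sense of the magnitude involved in the last two entries, the recoverable heat can be estimated from the installed motor power and the 93% figure reported above. The sketch below is purely illustrative: load factor, operating hours, and recovery efficiency are placeholder assumptions, not values from the cited sources.

def recoverable_heat_mwh_per_year(motor_kw, load_factor=0.7, hours_per_year=4000,
                                  recoverable_fraction=0.93, recovery_efficiency=0.8):
    """Rough annual heat available from compressor heat recovery (MWh/year)."""
    electrical_mwh = motor_kw * load_factor * hours_per_year / 1000.0
    return electrical_mwh * recoverable_fraction * recovery_efficiency

# Example: a 75 kW compressor running 4,000 h/year at 70% average load
print(round(recoverable_heat_mwh_per_year(75), 1), "MWh of recoverable heat per year")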
Literature Review, Critiques, and Needs
Section 2 highlighted several EEM characteristics that are helpful to identify technical and operative factors to be assessed when dealing with the adoption of EEMs in CAS. Similarly, assessment factors have been discussed in previous academic literature. A breakthrough contribution is represented by the study by Fleiter et al. [26], who developed a framework based on 12 factors grouped into three categories, namely relative advantage, technical context, and information context. Interestingly, the factors considered refer to the profitability side of the EEMs, but also point toward their complexity, thus with some links to Rogers' research on the adoption of innovation in industry [99]. Relative advantage and complexity indeed represent the only factors, among the ones considered by Rogers [99], that are statistically related to the adoption of interventions, together with the compatibility of an innovation [100], which is however considered a "rather broad and subjective characteristic that is heavily dependent on the potential adopter" and was thus neglected in the analysis by Fleiter et al. [26]. Roberts and Ball [101], referring more generally to sustainability practices (thus with a broader focus than energy efficiency), encompassed most of the aforementioned considerations, defining a framework that also pointed out the importance of including the time dimension in the analysis, which was not included by Fleiter et al. [26]. Similarly, factors for the characterization of EEMs were considered by Trianni et al. [7], who maintained the profitability dimension but also the description of the complexity of an EEM, as suggested by Fleiter et al. [26], through factors such as the activity type, the ease of implementation, and the likelihood of success/acceptance. Noteworthy, both Roberts and Ball [101] and Trianni et al. [7] went a step further, preliminarily (though not explicitly) suggesting that the assessment factors should also include the non-energy benefits (NEBs), i.e., all the benefits coming from the adoption of an EEM beyond the energy savings, as defined by Mills and Rosenfeld [102].
Indeed, NEBs represent the positive impacts that EEMs have on the operations and the other production resources. They were considered mainly as additional benefits to stimulate the implementation of industrial energy efficiency, since their value may exceed that of the energy savings [7,103]. However, recent research has pointed out that there may also be negative implications stemming from the adoption (e.g., [103,104]), which should likewise be included in the assessment, also as a necessary acknowledgement to gain credibility with the industrial sector [102]. In a nutshell, regardless of being positive or negative, NEBs describe impacts stemming from EEM adoption and, as such, they should be assessed during the decision-making process to make a sound decision.
Literature identified NEBs stemming from the adoption of a variety of technologies and EEMs, referring them to a set of categories according to their nature and targeted area (e.g., relative advantage, technical context, information context [26]; complexity, compatibility, observability [99,100]; waste, emission, operation and maintenance, production, working environment, and other [105,106]). In this regard, Table 2 shows the most significant contributions (NEBs encompassed by the literature are indicated with an "X"; the green background graphically highlights the areas most frequently covered by past studies). Unfortunately, the majority of the literature on NEBs looks at specific technologies other than CAS (e.g., [107-109]), or considers CAS together with other technologies [11]. To the best of our knowledge, only very few studies have been conducted targeting CAS specifically. Gordon et al. [49] first attempted to analyze NEBs referring to CAS exclusively, listing a variety of NEBs, ranging from maintenance, insurance, and labor costs to improved system performance and workers' safety conditions. More recently, Nehler et al. [27] highlighted a simple list of 34 specific NEBs for CAS, ranked according to their importance as perceived by users and experts, with the top positions occupied by organizational factors (e.g., commitment from top management; people with real ambition), energy-related factors (cost reductions resulting from lowered energy use; an energy management system; the threat of rising energy prices), and strategic factors (a long-term energy strategy). Doyle and Cosgrove [110] further delved into this issue by identifying the benefits stemming from one EEM, i.e., the repair of compressed air leaks, in terms of a reduction in the required working units and the consequent drop in the plant room temperature, which in turn improves the efficiency of the CAS. Interestingly, Table 2 shows that, despite referring specifically to CAS, these studies consider roughly the same NEBs already defined by Worrell et al. [105]. The only exception is represented by the improvements in system performance, which address improved pressure levels, consistency of pressure, and the ability to address spikes in usage [49], and which are indeed specific to the technology. On the other hand, although many manuals deal with CAS technology (e.g., [29,39,111]), they refer solely to technical aspects, such as the impact on parameters like pressure or temperature; while these are critical for the adoption of the technology, this nonetheless represents a limited perspective, which does not even name the wider concepts of assessment factors or NEBs. From the analysis of the literature, and in particular of the area surrounded by the red line in Table 2, the main gap is clearly the lack of studies encompassing the entire range of factors that should be considered by decision-makers during the assessment of EEMs, especially when dealing with CAS. Referring to a single technology is necessary, since different technologies require different EEMs, which might provide different NEBs [27] and be characterized by different assessment factors. Moreover, without this specificity, the work might lose practical interest for decision-makers, being too general to describe the broad set of possible industrial contexts in which the adoption of EEMs on CAS is considered.
Furthermore, it is clear that most studies dealing with assessment factors, regardless of the addressed technology (CAS included), do not address the context in which the technology is called to operate, thereby missing a (potentially) crucial element for complete decision-making. Moreover, it should be noted that most studies focus on NEBs from the service phase of the equipment, whilst both the drawbacks stemming from the adoption and the implementation phase itself of the EEM have rarely been considered in the analysis [117].
A Novel Framework of Factors for Decision-Making Over CAS EEMs
The framework, designed to provide a holistic perspective for decision-making purposes, has been created by tailoring the factors and the broader categories to the specific features of CAS EEMs. The factors, which should be relevant to the adoption of EEMs and, as far as possible, should avoid overlaps, derive either from a thorough review of the industrial literature about the technology behind the single EEMs (Table 1) or from the scientific literature on EEM characteristics. This dual perspective guarantees the completeness of the analysis, which is therefore inclusive of the impacts on the operations and the other production resources of a company. This completeness was maintained during the subsequent synthesis process, which made it possible to obtain a synthetic framework thanks to the grouping of factors into categories and subcategories. Furthermore, the grouping process was carried out in such a way that the resulting framework corresponds to the perspective adopted by decision-makers regarding the adoption of EEMs in CAS. As summarized in Table 3, 22 factors were identified and organized into three categories: (i) operative factors, (ii) economic-energetic factors, and (iii) contextual factors, the last of which is in turn divided into three subcategories, i.e., (i) complexity, (ii) compatibility, and (iii) observability.
Operational Factors
The need for compressed air is primarily defined by end-users' requirements in terms of:
• air flow rate [29];
• pressure level [29];
• air temperature [39].
CAS performance and efficiency do not rely exclusively on such primary factors. Yet, the primary factors are strictly interconnected with several secondary ones: among these are heat and thermal capacity, linked to the air temperature; power and work; and the volume, density, and mass flow rate of the air, directly connected to its pressure and flow rate.
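To give a sense of how the primary factors translate into energy use, the shaft power of a single-stage compressor can be approximated with the standard adiabatic compression relation. The sketch below relies on textbook assumptions (ideal-gas air, k = 1.4, a placeholder isentropic efficiency) and is not part of the framework itself.

def compressor_shaft_power_kw(flow_m3_min, p_in_bar, p_out_bar, isentropic_eff=0.72, k=1.4):
    """Approximate shaft power (kW) for single-stage adiabatic compression of air.

    flow_m3_min is the volumetric flow rate at inlet (suction) conditions, so the
    inlet temperature enters implicitly through it; the efficiency is a placeholder.
    """
    p_in = p_in_bar * 1e5                                   # Pa
    q = flow_m3_min / 60.0                                  # m3/s
    pressure_ratio = p_out_bar / p_in_bar
    ideal_w = (k / (k - 1.0)) * p_in * q * (pressure_ratio ** ((k - 1.0) / k) - 1.0)
    return ideal_w / isentropic_eff / 1000.0

# Example: 10 m3/min of air compressed from 1 bar(a) to 8 bar(a)
print(round(compressor_shaft_power_kw(10.0, 1.0, 8.0), 1), "kW")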
Economic and Energetic Factors
Pay-back time. Pay-back time has been widely recognized as an easy yet indicative factor supporting industrial decision-makers with limited resources [7,118].
Initial expenditure. Regardless of the type of investment [119], the initial expenditure is a crucial factor and may represent a major hurdle hindering EEMs adoption, especially among SMEs, due to their limited capital availability [120,121].
Energy savings. The amount of saved energy is a critical indicator of savings stemming from the adoption of an EEM [7] and it refers to monetary quantification of the physical energy source (either primary or secondary).
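A minimal sketch of how these three factors combine in a first screening is given below; the figures are hypothetical, and the simple pay-back formula deliberately ignores discounting.

def simple_payback_years(initial_expenditure, annual_energy_saving_kwh,
                         electricity_price_per_kwh, other_annual_cashflow=0.0):
    """Simple pay-back time of an EEM: investment divided by annual net savings."""
    annual_saving = (annual_energy_saving_kwh * electricity_price_per_kwh
                     + other_annual_cashflow)
    return float("inf") if annual_saving <= 0 else initial_expenditure / annual_saving

# Hypothetical leak-repair programme: 2,000 EUR up front, 18,000 kWh/year saved at
# 0.15 EUR/kWh, plus 300 EUR/year of avoided maintenance (a non-energy benefit)
print(round(simple_payback_years(2000, 18000, 0.15, 300), 2), "years")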
Contextual Factors
Other than considering operative and economic-energetic factors, CAS EEMs can be characterized by many factors strongly dependent on the specific industrial context for which they are considered. We took inspiration from the study conducted by Rogers [99] who broadly reviewed the characteristics of innovation in general. Since the adoption of EEMs into a specific context can represent a process innovation, those characteristics were transferred and adapted to CAS EEMs, as detailed in the following.
Complexity Factor
Complexity describes the difficulty one might encounter when adopting an EEM and is inversely related to the adoption rate of the measure itself [99]. Understanding in which cases the adoption turns out to be complex is therefore a fundamental step in characterizing it. The literature on innovation refers to radicalness as an index of complexity, since it is correlated with the degree of change required of the adopters [122]. This is a rather vague definition for the present study and a potential source of misunderstanding [26,123]. Hence, we decomposed complexity into factors whose definitions are specifically intended for the analysis of EEMs.
Activity type distinguishes if an EEM constitutes a simple refurbishment or recovery of the existing functions, an optimization in the use of an existing technology, a retrofitting of the equipment or a new energy-efficient equipment installation [7]. Indeed, a simple retrofit is easier than a new investment in equipment [124].
Expertise required refers to the range of skills required for the correct implementation of an EEM. Since each EEM requires a different level of expertise and, given their variety, the skill range can be quite wide, firms may find it hard to locate technology experts; this is especially true for SMEs, where CAS is used almost exclusively as a service [125].
Independency from other components/EEMs refers to the influence of the implementation of an EEM on the existing system, to underline the nature of the impact [26,100,126]. The possible impacts can influence CAS equipment working conditions, other systems or can generate cause-effect relationships with other EEMs, with the magnitude of the influence being inversely proportional to the easiness of understanding the consequences of the installation and predicting the total savings.
Change in maintenance effort. The variation of maintenance requirements as a consequence of the adoption of EEMs has been often considered an important factor by previous literature [102,105,106].
Accessibility. Difficulties in accessing equipment may require higher efforts from personnel or a greater amount of technological resources to carry out operations; this can be even harder for CAS, in which the distribution system is usually difficult to access. Moreover, accessibility may also refer to space unavailability for maintenance procedures when technology add-on measures are installed.
Compatibility Factors
Compatibility explains to what degree EEMs can be adapted to the existing system. According to Rogers [99], it can refer, among others, to the compatibility with previously introduced ideas, which can be translated into technological compatibility, as suggested by Tornatzky and Klein [100], or to layout features or operating conditions that fit poorly into the existing system. Nonetheless, despite being relevant for adoption, compatibility and the related factors have not been adequately considered in the EEM literature, being strongly dependent on the adopters' contextual characteristics [26].
Technological compatibility analyzes the technological constraints related to EEMs, pointing out the conditions in which their implementation is suggested or should be avoided, and highlighting a strict connection to the specific context. Indeed, in several cases more than one technology is available for the adoption of a specific EEM, and the best choice depends on the matching with the existing system, as well as on their suitability [127]. Without technological compatibility, the expected performance of the EEM may not be guaranteed, with a possible loss of trust in future interventions [128].
Presence of different pressure loads outlines the existence of different pressure levels at the end-use, which may be a source of high inefficiencies and incompatibilities in the system [129]. This may be due to (i) the widespread use of lamination valves, which, although easily installed, are meant to dissipate the pressure generated; (ii) the generation of a high-pressure point, which is recommended only when a considerable amount of air is required at that pressure.
Adaptability to different conditions may be referred to demand needs as well as to different ambient conditions, which can influence, e.g., the air conditions at the compressor intake (e.g., see IAC ARC 2,4221). It represents a critical factor considering the flexibility of use usually required for CAS [29].
Synergy with other activities. During the EEM implementation, synergies among different EEMs may occur, leading to potential benefits coming from the coordination of multiple activities (e.g., similar interventions that are suggested contemporarily, taking advantage of the same downtime of the equipment [130]). Nonetheless, synergies may also be negative for EEMs adoption [131].
Distance to the electric service. The distance of the point of use to the electric service can be a reason for the low adoption rates of EEMs requiring the technology substitution from compressed air-driven to electric driven devices [132].
Presence of thermal loads. The quality level of the fluid delivered by the heat exchangers of heat recovery units represents the major problem behind the low diffusion of this solution in CAS. Although the EEM can theoretically be installed for each compressor type (whether packaged or not) [29,36], its profitability depends on the fluid quantity and temperature. If the compressor load is variable, heat may be delivered discontinuously over time, potentially representing an issue for the end-use application [36].
Observability Factors
Observability, when referred to innovations, relates to their visibility and the communicability of their effects to others [99]. Concerning CAS EEMs, observability can be translated into a focus on the perceptible changes detected in both the CAS and the working environment once the EEM is implemented.
Safety. Since handling compressed air involves high fluid pressures and high-speed rotating parts, safety requirements are tight and aim at reducing accident rates [133].
Air quality. Pollution in an indoor environment is one of the more underestimated problems within a production facility. Paying attention to air quality monitoring and improvement is on the one hand related to enhanced health and performance of operators [106,113]; on the other hand, to improved operating conditions for all the parts in contact with the fluid, thanks to lower values of solid and liquid contaminants.
Wear and tear variation of the equipment is widely considered in the scientific literature, mostly with a positive meaning [105]. The same factor can in turn be perceived as influencing the lifetime of the equipment [103,113]. In the specific case of CAS, a reduction of equipment wear and tear may be obtained thanks to the lower stress exerted by the fluid, attained through a reduction of pressure or through enhanced control capabilities.
Noise coming from the equipment may affect the working environment and possibly the performance of the operators [102,103,105]. Nonetheless, quantifying the noise variation stemming from the implementation of a CAS EEM can be extremely difficult, since it is related to several parameters (e.g., the cost of absenteeism, accidents, and variations in worker productivity) that are extremely complex and whose impacts are measurable almost exclusively in the long term.
Artificial demand. Air flow demand increases at higher pressure, especially when air is blown openly to the atmosphere; hence, sizing the system on the maximum pressure creates an over-pressurization that reduces efficiency [134]. This further demand, defined as artificial demand, is considered one of the major causes of inefficiencies in compressed air systems. Conversely, each time an EEM entails a reduction of the CAS pressure level or of its unregulated use, the amount of air that must be delivered decreases, representing a further benefit of the adoption.
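To make the artificial demand concept more concrete, the short Python sketch below estimates the extra air flow caused by over-pressurization under the common rule-of-thumb assumption that flow through unregulated uses and leaks scales roughly with absolute pressure; both the formula and the plant figures are illustrative assumptions, not values taken from the sources cited above.

ATM = 1.013  # atmospheric pressure [bar], used to convert gauge to absolute pressure

def artificial_demand(unregulated_flow: float,
                      actual_gauge_pressure: float,
                      required_gauge_pressure: float) -> float:
    """Extra air flow (same unit as unregulated_flow) caused by running the
    system above the pressure actually required at the end uses."""
    actual_abs = actual_gauge_pressure + ATM
    required_abs = required_gauge_pressure + ATM
    return unregulated_flow * (actual_abs - required_abs) / required_abs

if __name__ == "__main__":
    # Hypothetical plant: 2.0 m^3/min of unregulated uses and leaks,
    # system run at 7.5 bar(g) while 6.0 bar(g) would be sufficient.
    extra = artificial_demand(2.0, 7.5, 6.0)
    print(f"Estimated artificial demand: {extra:.2f} m^3/min")

In this hypothetical case, lowering the set pressure from 7.5 to 6.0 bar(g) would remove roughly 0.4 m^3/min of air that the compressors would otherwise have to deliver.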
Validation of the Framework
The validation of the model, intended to reach the analytical generalization defined by Yin [138], is performed in two separate steps: theoretical and empirical. The theoretical validation is based on the assessment of the factors that compose the model and their capacity to describe the selected EEMs through the analysis of literature contributions, both scientific and industrial, as discussed in Section 5.1. On the other hand, the empirical validation, structured according to the case study methodology following Yin [138] and Voss et al. [139], is required to validate the framework and its composing elements with industrial decision-makers, basing the analysis on a set of predetermined indicators (Section 5.2). For the purpose of the present study, i.e., understanding the main factors that rule the adoption rate of EEMs in CAS and their influence on the decision-making process, a multiple case study is the most appropriate research methodology. Discrete experiments that serve as replications, contrasts, and extensions to the emerging theory [138] are considered, so that each case study contributes to theory development besides emphasizing the rich real-world context in which the phenomena occur [140]. The combined approach to validation, successfully undertaken by previous research on similar topics [7,141], provides better generalizability of results, avoiding reliance solely on the data obtained from a limited number of investigations.
Theoretical Validation
The theoretical validation is used (i) to verify the ability of the developed framework to characterize the EEMs addressing CAS and (ii) to provide a qualitative evaluation of the factors, which could yield interesting insights for decision-makers. The process involves a revision of the EEMs highlighted in Section 2 and is accomplished through a thorough review of the literature, performed from the perspective imposed by the factors considered in the model. The results of the theoretical validation are reported in Table 4. In a nutshell, the framework proved able to fully describe EEMs in CAS, also supported by the inclusion of a qualitative evaluation of the interventions, intended however to provide general guidelines rather than absolute and specific insights.
Empirical Validation
We sampled firms across several sectors, limiting the analysis to SMEs, as discussed in the introduction [161]. In this exploratory phase, different industrial sectors were considered, since the usage of CA may vary according to the application, as well as its energy intensity. Five companies meeting the previously stated criteria were considered for the empirical validation (details provided in Table 5). Note that the threshold between energy-intensive and non-energy-intensive companies is defined by the ratio of energy costs to total turnover; in the present study this value is set at 2% [162].
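As a minimal illustration of the sampling criterion just mentioned, the snippet below classifies companies as energy intensive (EI) or non-energy intensive (NEI) using the 2% ratio of energy costs to total turnover; the company names and figures are hypothetical, and only the threshold itself comes from the text.

def is_energy_intensive(energy_costs: float, turnover: float,
                        threshold: float = 0.02) -> bool:
    """Classify a company as energy intensive when energy costs reach the
    threshold share of total turnover."""
    return energy_costs / turnover >= threshold

if __name__ == "__main__":
    # Hypothetical companies: (annual energy costs, annual turnover), same currency
    companies = {"Company A": (120_000, 10_000_000), "Company B": (900_000, 30_000_000)}
    for name, (costs, turnover) in companies.items():
        label = "EI" if is_energy_intensive(costs, turnover) else "NEI"
        print(f"{name}: energy costs are {costs / turnover:.1%} of turnover -> {label}")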
The interviews followed a semi-structured format [156], giving higher flexibility and customization and making it possible to encompass a broader set of situations. In the first part of each case study, we collected information regarding the company profile, including sector, size, energy intensity and turnover, as well as the role of the interviewees (ranging from the owner to the maintenance or energy manager) and their status and main responsibilities in the decision-making process over the adoption of CAS EEMs. Moreover, the perceived importance of energy and energy efficiency was investigated, together with the EEMs implemented in the past. Additionally, the CAS was analyzed to understand the applications and purposes of compressed air usage.
In the second part of the interview, respondents evaluated the proposed set of factors against four performance indicators, i.e., completeness, usefulness, clearness, and absence of overlapping, using an even Likert scale from 1 (poor) to 4 (excellent) to avoid any neutral output. In particular, the validation process was divided into two separate steps: first, the foundations of the framework were assessed, i.e., its general structure, scope and perspective, as well as categories, subcategories, and factors considered as clusters in their own right (top-level analysis). Second, the analysis delved into the single elements of the framework, i.e., categories, subcategories, and factors (bottom-level analysis). This dual-step process was designed to provide the interviewee with the general picture first and only later move into details, to avoid losing their attention by releasing too much information in a single instance. The indicators used for the evaluation are displayed in Table 6, with detailed scores for the five companies reported in Appendix A.
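As a sketch of how such ratings could be summarised, the snippet below averages Likert scores per indicator across interviews; the scores are invented for illustration and do not reproduce the values reported in Appendix A.

from statistics import mean

INDICATORS = ("completeness", "usefulness", "clearness", "absence of overlapping")

# ratings[interview][indicator] on the even Likert scale 1 (poor) to 4 (excellent)
ratings = {
    "Interview 1": {"completeness": 4, "usefulness": 3, "clearness": 4, "absence of overlapping": 4},
    "Interview 2": {"completeness": 3, "usefulness": 4, "clearness": 3, "absence of overlapping": 4},
}

for indicator in INDICATORS:
    scores = [interview_scores[indicator] for interview_scores in ratings.values()]
    print(f"{indicator}: mean {mean(scores):.1f} (min {min(scores)}, max {max(scores)})")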
The overall evaluation is extremely positive for each indicator, with no changes to the framework suggested:
• usefulness: the framework can provide useful insights to industrial decision-makers when dealing with the adoption of EEMs in CAS;
• completeness: all the critical factors are identified, especially those which are usually neglected due to a lack of awareness or specific knowledge about the technology;
• clearness: the factors are clearly defined and easy to understand for industrial decision-makers;
• absence of overlapping: the framework does not contain any unnecessary repetition.
The importance of pointing out all the consequences stemming from the adoption was moreover stressed by the interviewee of company V4, who suggested that technology providers should also use the framework to highlight the consequences when proposing CAS EEMs. On the other hand, as noted by the interviewee of company V5, such increased knowledge might empower industrial decision-makers: he recognized that service providers usually lean on a greater set of competences, which limits the company to implementing suggested EEMs rather than proposing EEMs by itself.
Application of the Model
A multiple case study with semi-structured interviews was also selected as the research methodology for the empirical application of the framework to a second sample composed of 11 companies, sampled with the same rationale previously presented in Section 5 (details in Table 7). In order to apply the framework and test its effectiveness, considering the sample heterogeneity, we focused our analysis on the most recommended interventions, taking the IAC database as reference (Table 8). Considering the timeline of the companies, EEMs are divided into: (i) past EEMs, when recommended and backed up by an investment plan but never implemented; (ii) present EEMs, if recommended and adopted, so that the companies experienced the result; and (iii) future EEMs, if not yet recommended or only recently recommended, with no decision about their implementation yet taken.
In Box 1, we report the application of the framework to a selected company (A5). In the following, we present the results of the application, displayed in Table 9. Looking at the implementation of the proposed framework, it appears clear that the operational factors are always considered during the assessment, with the only exception being temperature, neglected in the assessment conducted by company A1 for the adoption of a controller, which nonetheless did not compromise the result. Referring to the economic-energetic factors, decision-makers stated how important these are for the correct assessment of EEMs; hence, they are usually the major set of factors considered in the decision-making process.
Nevertheless, on multiple occasions the contextual factors showed their capability to highlight critical features whose absence may change the adoption outcome. Particularly, the type of activity, providing information regarding the complexity of an EEM, was considered of primary importance in all the assessments, pointing out the huge perceived differences between the different natures of EEMs. The installation of a new device, or even a retrofit entailing the addition of new equipment, was indeed perceived as a complex operation by A1, which installed control systems and considered moving the compressors' air intakes to a cooler place, and by A5, which considered the replacement of the transportation system based on compressed air. On the other hand, completely different perceptions came from the companies which considered an optimization, e.g., companies A3, A6, A8, and A10, where the EEM relates to the repair of leaks. A2 stated that the type of activity was an important factor in his assessment, since the EEM, i.e., the reduction of the pressure level to the minimum required, is a simple optimization which does not imply any structural change in the system, hence requiring only a low level of involvement. Similarly, the expertise required to carry out the adoption was assessed as one of the main factors to be taken into consideration by decision-makers, especially for complex EEMs or in case of lack of knowledge, e.g., for the EEM considered by A9, which would imply the elimination of the compressed air used for dense phase transport but would be completely outsourced because of a lack of internal competences. The expertise required guided A2 in choosing between simply consulting the compressor technical manual and contacting a technology expert for the adoption of the planned EEM. Further, in the case of A7, one of the main reasons for not adopting the EEM was the high expertise required, similarly to A5.
The application of the framework is intended to test its ability to work as an assessment tool. Decision-makers are required to indicate the importance factors have in the adoption process, ranging between 'not important' and 'very important'. Eventually, respondents are asked about the relevance of using the framework for the decision-making process and the greater awareness gained from it, together with the effort required for its usage and its ease of application. The independency from other components or EEMs was highly appreciated by the decision-maker of company A5, who was indeed worried about the high involvement of the transportation system in the production processes. Although the same EEM was considered by A9, its decision-maker was at first unaware of the importance of the factor. Rather, he was aware of the high dependency concerning the other EEM adopted by the company, i.e., the installation of control systems (two in the specific case), as he recognized how one may influence the proper working of the other. Regarding the repair of leaks in the compressed air lines, the advantage coming from the increased pressure level, which may allow a reduction in the number of required compressors, was known to A3, A8, and A10. Differently, A6 was sceptical about this potential influence, thus neglected the factor in the analysis and ended up not adopting the EEM. Similarly, the dependency of the considered EEM was not known to A2, which did not take into account the potential risks that reducing the pressure level poses for other activities performed through the same medium. Likewise, the decision-makers within A1 disregarded the factor when considering the resizing of the air receivers and the possible installation of a central control for the dryers. In both cases, the assessment resulted in an underestimation of the negative sides of the EEMs, which could compromise their adoption.
The variation in maintenance effort was considered by almost all the respondents, but it was perceived as critical only when the effort would increase because of the leak repair activity, i.e., by A3 and A8, which were considering the EEM for the future. Differently, A10, which performs the same EEM regularly, evaluated the effort as manageable.
The accessibility of CAS was widely considered since some companies had issues in the past. A10, e.g., assessed the accessibility as the most critical factor when dealing with the repair of leaks, together with A6 and A8, since parts of their compressed air lines can either be hard to reach or inaccessible (underground). The criticality of the factor was also pointed out by company A9, where the transport system to be replaced is integrated into the process lines, and A4 and A7.
Moving to the compatibility subcategory, technological compatibility was considered a critical factor by many companies. The choice of the controller, for instance, was strictly constrained by the type of compressor installed, as highlighted by A1 and A9. Technological compatibility was also rated as very important by A2, dealing with the reduction of the pressure level of the CAS, since the variation in performance depends on the type of compressor. Eventually, A5 and A9 pointed out that the elimination of compressed air from the transportation system is an EEM which cannot always be applied because of technological constraints. Box 1. Application of the framework to company A5.
Company profile:
• Company A5 is a medium size company, with 105 employees and about €50 million of annual turnover, part of a multinational corporation operating in the food and beverage sector.
• They are specialized in the production and distribution of canned sea food, with six production lines present in the plant. CA is used in the production lines for cleaning activities on the cans, for cutting fish, for the packaging system, and to drive the transportation lines.
Energy profile:
• Energy consumption is around 1% of the total turnover, which makes it a non-energy intensive company [1]. About 15% of the total energy consumption is related to compressed air, with a total installed power of 162 kW, distributed across four compressors located in two separate compressor rooms.
• Company A5 is not certified with ISO 50001.
• The last energy audit was performed in 2016.
Interviewee profile:
• The interviewee is the site manager, who is moreover in charge of the energy management inside the plant.
• The decision-making process is performed by the site manager together with his team, composed of four people. They are also responsible for maintaining the correct conditions, aligned with the indications coming from the installed performance measurement system, during the execution of the production and service processes.
EEM profile:
• Company A5 considered the replacement of CA used for the transportation system for cans and aluminum tubes along the production line with a motor-driven vacuum system, aiming at enhancing performance by getting rid of a dated technology.
• The EEM belongs to the past cluster since company A5 eventually did not perform the substitution. The reason lies in the high investment cost and the required shutdown of the entire line, which would have meant production disruption, and thus losses, since the plant operates continuously 24 h per day.
Application of the framework:
Operational factors
Pressure
The requirements to be satisfied in terms of pressure were considered by the decision-maker.
Temperature
Temperature was not perceived as a very influencing factor for the replacement of the CA-based transportation system.
Flow rate
Together with pressure, the flow rate requirement was considered during the decision-making process, being of paramount importance for the operation of the system.
Pay-back time
The importance of the factor was high, although the decision-maker was more sensitive to costs than to the length of the pay-back period.
Initial expenditure
The high investment cost required for the EEM, together with the losses due to the stop of production which would have been necessary to perform the substitution of the transportation system, were the main reasons that led to the nonadoption decision.
Energy savings
Energy savings represent an important factor for the adoption of the EEM, with the decision-makers pointing out the possibility to enhance the energetic performance of the system by replacing a dated technology.
Activity type
The EEM is a new installation.
Expertise required
The installation of the EEM requires the involvement of experts in the substitution process, negatively affecting the decision according to the decision-maker.
Independency from other components/EEMs
Considering the pervasive involvement of the transportation system for the proper operation of the production line, the decision-maker pointed out a high dependency for the EEM.
Change in maintenance effort
No main changes were pointed out by the decision-maker with respect to maintenance efforts.
Accessibility
For the specific location of the CAS and the transportation system in company A5, the accessibility is not a big issue.
Compatibility Technological
The measure cannot be applied on all systems; hence the technological compatibility is a very important factor according to the decision-maker.
Presence of different pressure loads
Generally, the presence of different pressure loads should usually favor the adoption of the vacuum pumps; however, for the specific situation of company A5, pressure loads differences were almost negligible, reducing the weight of the factor.
Adaptability to different conditions
The capacity of the EEM to adapt to different operating conditions does not influence the adoption for the specific case of company A5 since a single vacuum pressure level is required.
Synergy with other activities
Through the exploitation of synergies, the installation can be performed when the line is down, taking advantage of a planned production stop; this factor is critical, since under no circumstances would the replacement of the actual transportation system have been carried out in a different time slot, at the risk of interfering with and stopping normal activities.
Distance from the electric service
For the specific situation of company A5 the factor is not critical due to the installation of the compressors in two rooms, close to the electric service.
Presence of thermal load
No thermal loads are present for the specific application.
Safety
The factor is not highly influential for the adoption of the specific EEM according to the decision-maker.
Air quality
The variation in the quality of air was not perceived as a very important factor by the decision-maker.
Wear and tear
The variation in wear and tear of the equipment does not represent a critical factor for the adoption of the specific EEM.
Noise
The interviewee proved to be almost unaware of the potential improvement in noise level and assigned a low weight to the factor.
Artificial demand
The factor is not critical for this EEM according to the decision-maker.
Eventually, the framework proved able to outline factors not previously known to the engineering staff of company A5, although it should be noted that none of the negative ones had been underestimated. In turn, being more aware of the positive consequences of the adoption, the decision-maker could reconsider the decision in case of a new stoppage of the line. He admitted that, despite the massive usage of compressed air and its energy consumption, they are not completely aware of the measures which could fit their context. For this reason, he considered the developed tool extremely tailored to their case. Moreover, the user-friendliness and the ease of use were positively rated. Table 9 legend: factors considered as important and very important by decision-makers and literature; factors important (!) and very important (!!) that should have been considered by decision-makers according to literature, but which were not considered in the decision to adopt EEMs.
The presence of different pressure loads was considered of utmost importance by A3 when dealing with the correct sizing of compressors, since it may influence the decision regarding the number of devices required. However, for the same EEM, A11 did not perceive the criticality of the factor, despite the effective presence of different pressure levels in their lines. The explanation likely lies in the number of pressure reducers installed in the system. Eventually, if the factor had been properly considered, the company would probably have opted for a different and more efficient configuration. Similarly, in A2 the factor was not considered, despite the influence the pressure level has on the heat recovery potential.
The adaptability to different conditions was considered as the most important factor by A1 and A9, both dealing with the adoption of controllers on compressors, which were indeed installed with the specific purpose of changing the operating conditions of the equipment when needed. The factor was, however, underestimated by A1 regarding the assessment of the second EEM, i.e., the displacement of the compressors air intakes in the coolest location, because of a lack of awareness, and this was one of the main reasons hindering the adoption. Moreover, as stated by the decision-maker of company A7, the adaptability to different conditions, related to the variability of requirements in the demand side, is a very important factor when considering the recovery of heat from the compressors.
It should be assessed, however, together with the factor describing the presence of thermal loads, which refers to the availability of the right amount of heat to match the demand side. These are the most important factors to be considered when dealing with that type of EEM according to A7.
The possibility to take advantage of synergies to carry out the installation when the production line is down was considered as a very important point by both A5 and A9 when deciding about the replacement of the old air compressed transportation system with a more efficient technology. Otherwise, this would lead to an additional plant shutdown with related production losses, hence supporting the non-adoption of the EEM. The same factor was rated as critical for the adoption of controllers on compressors carried out by A1 and A9. In particular, the decision-maker of company A1 pointed out that the activity requires a long time to be performed, thus it was done during the summertime when the plant was closed. The synergy is also reported by A1 and A10 considering the displacement of the compressors air intakes in cooler locations.
Regarding the observability factors, all the respondents whose companies performed the repair of leaks in compressed air lines recognized the importance the activity has on the safety.
Air quality was generally not acknowledged as a critical factor, although other authors have pointed out its relevance [29]. Companies A1 and A4 considered the displacement of the compressors' air inlets from the external environment to the internal one, in a cooler location. Besides a difference in temperature, however, the quality of the internal air is usually better: the moisture content is lower, and this may lower the wear of the compressors, extending their lifetime. Differently, for A10 there would be no variation in air quality but only in air temperature, since the EEM would simply imply shifting the air inlet indoors.
The variation of CAS wear and tear was considered by A11 in terms of the extended lifetime of the equipment implied by the installation of the new and correctly sized compressor and, according to the respondents, was a very important factor. Differently, A9 was unaware of the factor when referring to the adoption of a controller, as was A2 when considering the reduction of the pressure level, although in both cases they agreed on the importance this could have in the decision-making process.
Noise was considered critical by A10 to foster the repair of leaks. A3, A6, and A8, who assessed the same EEM, did not deem the factor important. However, they claimed to perform repair activities as soon as a noise is perceived to limit its effect on the surroundings.
The artificial demand was known and considered very influential only by A3 and A10, both dealing with the repair of leaks. For the same EEM, A6 and A8 did not perceive the criticality. Initially, the decision-maker within A2 did not give much importance to the factor. However, he pointed out that the actual compressed air flow was higher than required because of a poorly sized compressor, and the artificial demand phenomenon was further increasing the gap between supply and demand. Therefore, the consideration of this factor could significantly increase the possibilities of a future adoption of the EEM. Moreover, the influence of the artificial demand also affects the adoption of controllers, as pointed out by the decision-makers of companies A1 and A9.
Overall, regardless of the nature of the EEM, i.e., past, present, or future, the framework proved to be able to provide additional information to industrial decision-makers. For instance, the respondent within A1 pointed out that the increased awareness resulting from the framework application would be probably enough to reconsider in the future the displacement of the compressors air intakes in the coolest location. Moreover, using the framework, the decision-maker of company A9 assessed an EEM he was not aware of. The framework resulted effective in A5 to highlight factors unknown to the decision-maker. However, none of the negatives were underestimated, and ultimately the decision not to adopt was due to the high investment costs and the production disruption to carry out the installation. Similarly, A7 acquired more insights from the framework, but the low amount of achievable savings drove the decision not to implement the considered EEM.
Furthermore, all the respondents particularly appreciated the ease of use of the framework and the low effort required for its application, in particular because the EEMs could be completely characterized using only a limited number of factors.
Discussion
Comparing the results with existing models, similarities can be found only for the energetic and economic factors, since the most widespread and universally accepted indicators are used (e.g., pay-back time [26,112,163]) to evaluate the investment from an economic point of view, thus making the tool more user-friendly for the final adopters. On the other hand, differences emerge when considering the operative factors, although technical information is widely covered by past literature [39]. The reason lies in the restricted focus of this work, i.e., CAS, which is specific enough to enable the analysis of particular characteristics of the technology, rarely investigated at this level of detail as far as characterizing factors are concerned. In confirmation of this, Nehler and Rasmussen [107] indicate that the characteristics of factors may depend on the type of EEM, as already pointed out by Cagno and Trianni [22] with reference to barriers to specific EEMs. Less detailed results come from a variety of studies considering compressed air through a multi-technology analysis [103,106], in many cases not even providing a clustering framework of factors [108,109,113]. Differently, a more specific focus is provided by the study conducted by Nehler et al. [27], focused on CAS, which includes among the NEBs an improvement in temperature control, hence indicating the criticality of this factor. Moreover, considerations about pressure and flow rate are listed among the impacts perceived by suppliers concerning specific EEMs, as documented by a wealth of technical manuals and industrial literature extensively covering these aspects, despite neither categorizing the factors into an operative framework nor providing additional insights beyond the merely technical ones. During the interviews conducted in the field, these factors were highly appreciated by industrial decision-makers, given the practicality they confer to the tool; it would indeed be unfeasible to discuss the implementation of EEMs within CAS without taking such information into account. Other differences can be found analyzing those factors which introduce the contextual dimension, making the framework flexible enough to be exploited in all the different situations in which the industrial decision-maker is required to operate. The first step along this path was made by Rogers [99], followed by Tornatzky and Klein [100]; both studies, however, treat compatibility with reference to innovation, thus dealing with society in its entirety rather than with a specific technology or field. Although the definition of the category can be adapted to the industrial environment, the details depicted by the single factors are included here for the first time. An exception is represented by the observability factors, i.e., safety, air quality, wear and tear, and noise, which are commonly considered in the literature [105,106,164], sometimes clustered in a single element describing the whole working environment [7], given their strict relation with many EEMs regardless of the technology considered. Industrial respondents were generally aware of such characteristics, although these were never considered the most critical elements leading the adoption of EEMs, with the exception of A4; there, however, compressed air belongs to the production process, which may act as a discriminant for the perceived importance of its role. This is aligned with the perspective provided by Nehler et al.
[27], where the importance of NEBs as a driver for the decision-making process is evaluated: enhancements of the working environment and safety conditions are considered; however, they are perceived as of secondary importance with respect to other advantages, e.g., those directly connected to the reliability and lifetime of the equipment. One reason could be the difficulty of their evaluation and monetization, and thus the impossibility of including these considerations in the economic assessment of the investment, which represents a critical step of the decision-making process [165]. Nevertheless, according to Nehler and Rasmussen [107], those characteristics that cannot be evaluated from a monetary perspective may be considered alongside the proposal in the form of comments. Regarding the remaining observability factor, i.e., artificial demand, given its strict dependency on the specific CA technology, it cannot be found in frameworks addressing broader clusters of technologies, such as that by Trianni et al. [7]. Nevertheless, it should be noted that almost all the interviewees were aware of this phenomenon, although its technical nature and the difficulty of observing it make it hard to recognize for users without deep expertise in CAS.
Apart from the observability factors, the complexity factors are partly included in previous literature, despite being categorized differently (e.g., [26,105,126]). Activity type, for instance, is included by Trianni et al. [7], who confined the definition by Rogers [99] and Tornatzky and Klein [100] to a limited field, i.e., industry, to make it practically exploitable. On the other hand, the willingness to focus on more than a single technology prevented them from analyzing all the single factors related solely to compressed air. Interestingly, the present framework is the first to specifically include the difficulty in accessing the distribution system (accessibility factor), a factor deemed important by every decision-maker interviewed yet never addressed by previous studies. Further, compatibility issues, except for synergies [131], represent a neglected dimension in the scientific literature, despite being widely recognized in technical manuals and industrial sources (e.g., [29,127,129]). Once more, since the framework is intended for practical application in companies, these considerations should be encompassed in the decision-making process, as revealed by the investigation, in which decision-makers acknowledged that some important factors were not always taken into account. This capability was embedded in the design of the framework thanks to its focus on the single technology of CAS.
The need for more specific, funneled knowledge of the factors relevant to EEM adoption is partially aligned with the specificity of the characteristics, but also with the applicability property discussed by Fleiter et al. [26], provided that the efficiency interventions remain confined to CAS. On the other hand, as demonstrated by the different importance attributed to the observability factors during the interviews, the selected factors should not be independent of the context and the adopting company, as stated by [26], but should incorporate this information; the category of contextual factors is included in the present study to fulfil this need. In this regard, future research could explore whether such interdependency is modulated by the different relationships between CA and the core process of the firms. Relationships may also exist among the various factors included in the framework, which are not completely disconnected from each other, confirming the close interactions CAS has with the operations of a company. For instance, the repair of leakages (ARC 2,4236) would lead to a reduction in pressure requirements, which in turn would affect the noise level and the wear and tear of the equipment. Interestingly, preliminary results of the analysis (e.g., Table 4) may suggest that some relationships exist, although more research is needed to shed light on this. Indeed, an in-depth study of the impacts between factors could make a further contribution to the discussion about the impacts on the operations and the other productive resources of a company.
Conclusions
The willingness to understand the main factors that rule the adoption of EEMs on CAS represents the driver that pushed toward the definition of the present framework. Aiming at providing a systemic view of the adoption, factors referring to the complexity, compatibility, and observability of the results coming from the adoption of EEMs were included in the model, encompassing, among others, the impacts on the operations and the other productive resources of an industrial firm, together with more traditional considerations regarding the operational and the economic and energetic factors. Results from the empirical application show how these features might prove critical in the path for the adoption, sometimes even capable of reversing the outcome, hence confirming the added knowledge brought by the framework.
In this regard, future longitudinal research could explore the change of awareness in decision-makers when assessing EEMs in CAS and other sustainability practices within industrial operations. Moreover, the focus kept on the specific technology of CAS made it possible to point out peculiar factors that might be lost when approaching the problem from a more holistic perspective, e.g., the difficulty in accessing the CAS, which was a recurrent topic in the empirical investigations. Nonetheless, despite its non-negligible importance according to the interviewed decision-makers, this factor has never been approached by previous studies.
Using the framework, industrial decision-makers could tackle the perception of uncertainty they have concerning EEMs, besides finding valuable support to overcome the barriers related to risk, imperfect evaluation criteria, and lack of information, which might represent critical issues preventing a sound decision-making process. These barriers might be particularly present in SMEs, generally characterized by less trained or less skilled decision-makers, who may moreover face difficulties in the use of complex or overly detailed models. However, the structuring resulting from the synthesis process to which the framework was subjected made it possible to obtain a framework that is complete with respect to the factors to be considered in the adoption of CAS EEMs and, at the same time, characterized by a high ease of use. Indeed, as pointed out by the empirical application, the evaluations of user-friendliness and of the effort required for usage were overall positive, despite the fact that the greatest share of companies in the sample were SMEs. Policy makers, on the other hand, could take advantage of the framework to design tailored policies for enhancing the efficiency of CAS. Moreover, the assessment of the factors that rule the adoption of EEMs in CAS could lead to a deeper understanding of the specific barriers affecting the technology, which might differ from the issues preventing the adoption of other technologies assigned to different roles in a plant, e.g., electric motor systems. This deeper knowledge would, in turn, create solid foundations on which to lay the basis for the definition of drivers to overcome these barriers, improving overall efficiency.
In conclusion, we would like to acknowledge some study limitations, starting from the narrowness of the application sample and its heterogeneity with respect to the industrial sectors. Besides, not all sectors are encompassed in the present study; e.g., textile and metal manufacturing are missing. Moreover, limiting the analysis to the technology of CAS did not enable us to consider the entire set of impacts the adoption of an EEM has on the other productive resources or on the operations of a firm. Accordingly, future research could move in this direction, further extending the analysis to include a broader set of heterogeneous EEMs to better assess the impacts of their adoption. Additionally, further research could develop approaches to measure such impacts more quantitatively, linking them to production and operations performance. Furthermore, research could explore what synergies may be exploited by integrating the developed framework into a broader set of tools to improve the sustainability performance of industrial enterprises, also connecting it with assessment tools, maturity models, etc. | 2020-10-19T18:06:53.783Z | 2020-10-01T00:00:00.000 | {
"year": 2020,
"sha1": "d58def54c263ab6f24ba04f46a8568d0e28a3eb9",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1073/13/19/5116/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "281165f438978fba7000bc352c42fd862c1be8b0",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
195771973 | pes2o/s2orc | v3-fos-license | Compliance with the Very Integrated Program (VIP) for Smoking Cessation, Nutrition, Physical Activity and Comorbidity Education Among Patients in Treatment for Alcohol and Drug Addiction
Meeting adherence is an important element of compliance in treatment programmes. It is influenced by several factors, one being self-efficacy. We aimed to investigate the association between self-efficacy and meeting adherence, and other factors of importance for adherence, among patients with alcohol and drug addiction who were undergoing an intensive lifestyle intervention. The intervention consisted of a 6-week Very Integrated Programme. High meeting adherence was defined as >75% participation. The association between self-efficacy and meeting adherence was analysed. The qualitative analyses identified themes important for the patients and were performed as text condensation. High self-efficacy was associated with high meeting adherence (ρ = 0.24, p = 0.03). In the multivariate analyses two variables were significant: avoid complications (OR: 0.51, 95% CI: 0.29–0.90) and self-efficacy (OR: 1.28, 95% CI: 1.00–1.63). Reflections on lifestyle change resulted in the themes of Health and Wellbeing, Personal Economy, Acceptance of Change, and Emotions Related to Lifestyle Change. A higher level of self-efficacy was positively associated with meeting adherence. Patients who rated avoiding complications as highly important showed lower adherence to the intervention. There was no difference in the reflections on lifestyle change between the group with high adherence and the group with low adherence.
Introduction
Patients with alcohol and drug addiction often have additional risky lifestyle behaviours, e.g., smoking, poor nutrition and physical inactivity, as well as comorbidity; all of these have a social gradient and add to the already high substance-induced morbidity and premature mortality [1][2][3]. Effective health promotion aimed at those factors is therefore relevant to offer to this disadvantaged patient group as an integrated part of addiction treatment. However, lifestyle change is a complex process influenced by many factors related to the programme, the therapist and the individual person.
To meet the multiple needs related to lifestyle change among patients with alcohol and drug addiction, the individually tailored Very Integrated Program (VIP) has been translated from the 6-week intensive Danish Gold Standard Programme (GSP) for smoking cessation, which achieves an approximately 30% continuous quit rate [4][5][6], to alcohol intervention, nutritional intervention, physical activity or combinations of these [7].
The intervention employs an Operational Model to facilitate successful lifestyle change, which incorporates the theories of Motivational Interviewing [8], Decisional Balance and the Trans-Theoretical Model of Change [9], all of which are related to the process of behavioural change [10]. A Cochrane review suggests that using only one theoretical approach may be insufficient for patients attempting lifestyle changes in relation to clinical treatment and argues that an intervention should include measures to increase motivation, intensive behavioural support and, in the case of smoking cessation, nicotine replacement therapy to obtain the best results [11].
The prediction of a successful lifestyle intervention can have a major impact, even for patients with substance use and severe mental diseases [12]. Among other factors, meeting adherence has been reported to be a strong predictor of successful outcomes [4,[13][14][15]. Nevertheless, the factors associated with meeting adherence have been less investigated. The level of motivation and self-efficacy appear relevant and have been reported to be important factors for prompting engagement in the process of change [8,9,16], although self-efficacy itself has not been found to be a robust predictor among patients with alcohol addiction [17].
According to Albert Bandura self-efficacy is defined as a person's confidence in his or her intrinsic ability to accomplish goals [18] and is often measured by a visual analogue scale or a Likert scale similar to the one widely implemented for pain measurement. Central to the theory of self-efficacy are the person's own expectations of efficacy, which determine the initiation of a coping behaviour, how much effort he or she will put into it, and for how long the effort will be sustained when confronted with barriers and obstructions. Reflections on the advantages and disadvantages of changing [19] may also be of importance for meeting adherence, and individuals with high meeting adherence might hypothetically have different reflections than those with low meeting adherence.
The primary aim of this study was to evaluate the possible association between the patients' self-assessment of importance of avoiding complications, importance of changing lifestyle and their level of self-efficacy and adherence to the VIP intervention among patients undergoing treatment for alcohol and drug addiction. The secondary aims were to identify other possible factors of importance for meeting adherence.
Materials and Methods
This study is a sub-study of a randomised controlled trial which will be reported at a later stage. This study includes the patients in the intervention group of the randomised study. The RCT was a clinical trial called VIP that took place in two addiction centres in Malmö, Sweden (Figure 1). The aim of the VIP-Study was to assess the effect of adding health promotion activities to treatment of alcohol and drug addiction. Results on the effect of the intervention are planned to be reported at a later stage [20]. The eligible participants were persons aged 18 or over and diagnosed with alcohol or drug addiction according to the ICD-10 criteria by the specialists at the Addiction Centre Malmö (4 units) and the Integrated Community Care Centre (one unit), Psychiatry Skåne, Sweden. The inclusion criteria for VIP were alcohol and drug addiction in accordance with the ICD-10 criteria among patients who also had at least one risky lifestyle behaviour (daily smoking, daily physical activity less than 30 min, overweight and risk of malnutrition) and at least one comorbidity (diabetes, cardiovascular, respiratory and liver diseases). In VIP, 115 patients were allocated to the intervention group. The present sub-study includes a total of 82 patients with alcohol or drug addiction who had completed The Line Tool for the quantitative study; 37 who also completed The Box Tool were included in the qualitative study (Figure 1). After obtaining informed consent, the VIP intervention group filled in The Line Tool as visual analogue scales [21] (Figure 2a). The lines are known from the motivational interviewing technique, where they have been validated [8]. The scale exists both in a version from 1-10 cm and a version from 0-10 cm. In this study we used the version from 0-10, which is similar to the visual analogue scales used in clinical settings worldwide. The patients also structured their reflections as 1-4 in The Box Tool (Figure 2b).
The VIP intervention was based on the Operational Model and translated from the GSP intervention. GSP includes motivational dialogues, an educational programme with five sessions aimed at risky lifestyle behaviours and an interactive workshop on comorbidity directed at patients and relatives. This programme involves six face-to-face meetings with a trained counsellor, each one being approximately an hour in duration. The Line Tool used in this model made it possible to rank the patients' self-efficacy in achieving the change amongst others, while The Box Tool, besides assessing the advantages and disadvantages of the change, allowed patients to reflect upon the presence of ambivalence towards the change [10]. The patients kept the forms throughout the intervention as a reminder of their reflections. In this way we integrated qualitative research methods in the larger context of the clinical trial. It is argued in the developing debate on process evaluation that this is fruitful because empirical data add to the understanding of how an intervention is experienced by the participants, and how participants interact with the intervention under the influence of other factors such as circumstances, attitudes, beliefs, social norms and resources [22,23].
Quantitative Data
We collected the following information at inclusion of patients in the VIP study: Age, duration of addiction, gender, socioeconomic and lifestyle factors, comorbidity, self-evaluated quality of life via the Short Form Health Survey (SF-36) [24] (Table 1) and meeting adherence defined as attendance to the meetings. In total the intervention included five meetings over six weeks approximately of one-hour duration each. Meeting attendance was recorded by the clinician performing the intervention. Adherence was dichotomised into high (≥75%) and low attendance (<75%) of sessions as high adherence has been shown to triple quit rates [25]. The data for lines came from measures on the VAS scale 0-10 cm (Figure 2a). SF-36 is used to measure the self-reported quality of life [24]. The questions in SF-36 are focused around two main domains, the physical and mental health [24]. For example, the first item reads "In general, would you say your health is . . . " with the scoring options "Excellent, Very good, Good, Fair, Poor", and item nine a reads "These questions are about how you feel and how things have been with you during the past month. How much time during the past month: Did you feel full of life?" having the answering options "All of the time, Most of the time, A good bit of the time, Some of the time, A little of the time, None of the time" [24].
The reliability and validity of the survey have been tested among many patient populations, including patients with alcohol and drug addiction [26,27], with reported Cronbach's Alpha ≥ 0.7. In a Swedish context it has also shown high reliability, with a Cronbach's Alpha ≥ 0.8 [28].
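For readers unfamiliar with the reliability coefficient mentioned above, the Python sketch below computes Cronbach's alpha for a generic multi-item scale using the standard formula; the response matrix is invented for illustration and the code does not represent the software used in the study.

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of numeric scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

if __name__ == "__main__":
    # Six hypothetical respondents answering four items scored 1-5
    responses = np.array([
        [4, 5, 4, 4],
        [2, 2, 3, 2],
        [5, 4, 5, 5],
        [3, 3, 3, 4],
        [1, 2, 2, 1],
        [4, 4, 3, 4],
    ])
    print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")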
Qualitative Data
The qualitative data consisted of the patients' reflections on advantages and disadvantages regarding lifestyle intervention as described above in the Box Tool (Figure 2b).
Quantitative Analyses
First, we used the Spearman's correlation test (ρ) to examine the association between The Line Tool components measuring the patients' perceived importance of the following categories: avoid complications (Line 1), immediate change (Line 2) and self-efficacy (Line 3) and meeting adherence.
To analyse differences between the groups with high and low attendance, we conducted univariate analyses, using the Chi-square test for dichotomous variables and the Mann-Whitney U test for continuous variables. The significance level was set at 0.05, two-sided. All predictive variables (Complications, Change, Self-efficacy, Age, Years of addiction and Sex) were entered together into a multivariable logistic regression analysis to identify significant factors, reported as odds ratios (OR) with 95% confidence intervals (CI).
For quantitative analyses we used the IBM SPSS Statistics software, version 22 [29].
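Although the study analyses were run in SPSS, the same tests can be outlined in a short Python sketch; the simulated variables below are hypothetical stand-ins for the study data, so the output is illustrative only.

```python
import numpy as np
from scipy.stats import spearmanr, mannwhitneyu, chi2_contingency
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 82  # patients included in the quantitative analyses

self_efficacy = rng.uniform(0, 10, n)   # Line 3, VAS 0-10 (simulated)
complications = rng.uniform(0, 10, n)   # Line 1, VAS 0-10 (simulated)
sex = rng.integers(0, 2, n)             # dichotomous covariate (simulated)
adherence_high = rng.integers(0, 2, n)  # 1 = high (>=75%), 0 = low (simulated)

# Spearman's correlation between a Line Tool score and meeting adherence.
rho, p_rho = spearmanr(self_efficacy, adherence_high)

# Mann-Whitney U test for a continuous variable across adherence groups.
u, p_u = mannwhitneyu(self_efficacy[adherence_high == 1],
                      self_efficacy[adherence_high == 0])

# Chi-square test for a dichotomous variable across adherence groups.
table = np.array([[((sex == s) & (adherence_high == a)).sum() for a in (0, 1)]
                  for s in (0, 1)])
chi2, p_chi, _, _ = chi2_contingency(table)

# Multivariable logistic regression; exponentiated coefficients are odds
# ratios, and exponentiated confidence limits give 95% CIs on the OR scale.
X = sm.add_constant(np.column_stack([complications, self_efficacy]))
fit = sm.Logit(adherence_high, X).fit(disp=0)
odds_ratios, ci = np.exp(fit.params), np.exp(fit.conf_int())
```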
Qualitative Analyses-Systematic Text Condensation
The handwritten answers of the patients in the Box Tool were transcribed into an Excel sheet and imported into the NVivo qualitative data analysis software, version 11 [30], for coding. Some of the answers had no direct relation to the questions and were therefore excluded from the analyses.
The data were analysed using Kirsti Malterud's approach of systematic text condensation [31], which consists of four steps: (1) forming a total impression of all answers and identifying preliminary themes; (2) coding by identifying and sorting meaning units; (3) condensing the meaning units into code groups; and (4) synthesising the condensates into a story grounded in the empirical data.
KH and RR independently read the patient reflections thoroughly to gain a total impression and to identify preliminary themes. These themes were largely identical (e.g., "Health" and "Economy"), but there were also some discrepancies. For example, the preliminary theme "Social life/family" was only identified by one of the coders, and some themes, such as "Weight" and "More alert", were included in the overall theme of "Health". Subsequently, the data were systematically reviewed for meaning units, which gave rise to codes that were then separated into thematic groups, and a condensate was developed. This was done by RR, and the thematic groups and sub-groups were then discussed by KH and RR until consensus was reached. The condensates were then synthesised, and descriptions and concepts were developed by RR to clarify the study question. Since the research objective was to identify important factors for adherence, the data were sorted into high and low adherence and analysed accordingly. The two analyses were then checked for divergent themes.
Qualitative Data: Data Saturation
In qualitative research, the indication of when to stop collecting data is the methodological principle of data saturation: as data are collected, patterns and themes emerge and begin to reproduce themselves. When this happens, it is estimated that no further data are needed because the data are "saturated" [33]. In traditional qualitative interview studies, data saturation is expected to be reached after approximately 12 interviews [34]. In our case, data were collected from all 37 patients and, as expected, several major themes were identified in the data.
Ethical Issues
The VIP study was approved by the Ethical Review Board (Dnr 2010/470) in accordance with the Swedish Data Protection Agency, and the protocol was registered at clinicaltrials.gov (NCT01414907). The patients were included after giving informed consent.
Quantitative Data
We found that only self-efficacy (Line 3) was significantly positively associated with meeting adherence (ρ = 0.24, p = 0.03). The patients' perceived importance of changing lifestyle immediately (Line 2) and of avoiding complications (Line 1) were not significantly associated with adherence.
In the univariate analyses, the only significant difference between the groups with high and low adherence was the perception of self-efficacy, which remained significant in the multivariable analysis (OR: 1.23; 95% CI: 1.01-1.51). The importance of avoiding complications, although not significant in the univariate analyses, became significant in the multivariable model (OR: 0.51; 95% CI: 0.29-0.90) (Table 2).
Qualitative Data
We identified 4 major themes and 11 sub-themes (Table 3). There were no substantial differences between the answers of the group with high adherence and the group with low adherence. The same themes and sub-themes were present, and only the sub-theme "Continuing current lifestyle restrains individuals in poor economy" was mentioned slightly less frequently in the group with high adherence.
Considering the major themes and sub-themes identified through the analysis, there were evident patterns in patient reflections. These patterns drew a common picture of patients, of whom many were very aware of how their lifestyle influenced their health and wellbeing.
They saw improving their lifestyle as an advantage because it would improve their health, and they were aware that continuing their current lifestyle would not only cause poor health and future health risks but possibly also a premature death.
It was also evident for the patients that lifestyle change would have an impact on their personal economy. They expected to be restrained in poor economy if they continued their current lifestyle and to improve their personal economy if they improved their lifestyle.
The patients expressed a strong acceptance of change, finding no advantages in continuing their current lifestyle and no disadvantages in improving it.
Finally, it became clear that the patients related strong emotions to lifestyle change. They found that maintaining the status quo was easier than changing routine. They also found positive social implications of lifestyle improvement and especially emphasised improved contact with their children. They associated their current lifestyle with positive emotions and expected lifestyle improvement to cause negative emotions.
Table 3. Reflections on advantages and disadvantages of lifestyle change.
HEALTH AND WELLBEING
Improved health is an advantage of improving lifestyle
The patients expressed that they expect a major advantage of improving their lifestyle to be that they will feel better, that their health will be better, and that their strength will improve, both physically and mentally. As examples, they mention getting rid of high blood pressure, improvement of asthma, and weight loss. They also expect lifestyle change to help them recover, feel refreshed, gain a better self-image and be more positive. They also expect a better old age and a longer life. "Feel better physically and mentally and improved strength".
Continuing current lifestyle causes poor health and future health risks
The major disadvantage the patients experience of their current lifestyle is their deteriorated physical and mental health. They are often ill; they have a cough and high blood pressure. They feel in bad shape, they are overweight, and they fear not being able to breathe. Their stress and concern increase, and they struggle with anxiety. They are aware of the risk of getting COPD, a stroke, cancer, chronic diseases or heart problems. They know that their current lifestyle does not lead to a long and healthy life. "I do not give myself the chance to live a longer and healthier life".
Continuing current lifestyle might lead to a premature death
The patients are aware that a disadvantage of their current lifestyle is that it might shorten their lifespan and that their children would consequently be without a parent should they die prematurely. "Running the risk of a premature death".
Themes (Capital Letters), Sub-Themes (Bold), Condensations, Authentic Illustrative Quotations (Italic)
PERSONAL ECONOMY
Improving lifestyle will improve personal economy
Spending money on other things is seen as an advantage of changing lifestyle by the patients. They expect that it will improve their living conditions, and that they perhaps would have a surplus. "More money over to other things".
Continuing current lifestyle restrains individuals in poor economy
Living with poor economy is experienced as a disadvantage of the patients' current lifestyle. It costs them too much and they live a poor and depleted life. "It costs too much".
ACCEPTANCE OF CHANGE
There are no advantages of continuing current lifestyle
The patients find it difficult to see advantages of their current lifestyle. "There are no advantages, actually".
There are no disadvantages of improving lifestyle
The patients see no disadvantages of changing their lifestyle. "There are no disadvantages".
EMOTIONS RELATED TO LIFESTYLE CHANGE
Status quo is easier than changing routine
The patients do not have to change their lifestyle, and they see this as an advantage. They find it difficult to begin something new and to make the effort to break a habit. They find it convenient not to do anything, not to care and not to think about their flaws. Their current lifestyle feels safe to them, because some of them fear change. They know that changing lifestyle will demand a commitment from them; they think nicotine abstinence will cause anxiety, and they will lose the opportunity to sleep all day. "Escape changes and all the effort it implies, peace of mind".
Positive social implications of lifestyle improvement, especially improved contact with children
The patients believe that they will become more social if they change lifestyle. They think of how they could do more together with their children and how they would have a longer future together. They also envision how they might find a partner, how they would not smell of smoke all the time, how they might become a role model for others and support family members in smoking cessation. "My daughter worries so much".
Current lifestyle is associated with positive emotions
The current lifestyle of the patients makes them calm inside and helps them suppress their anxiety. They experience that they become less aggressive, and it makes them feel good. They also think of it as a pleasant social activity. "Makes me calm inside and suppresses my anxiety".
Improving lifestyle is expected to cause negative emotions
The patients expect lifestyle change to influence their mood negatively. They think they will get a lot of "do's", for example about what they eat and with whom they socialise. In that way, they imagine their life would feel limited compared to their current life. "Bad mood".
Discussion
We found that meeting adherence during the 6-week VIP intervention was positively associated with a high pre-intervention score for self-efficacy. This is similar to the results of another study, which found that positive expectations improved treatment retention [35]; on the other hand, an older review found that changes in self-efficacy, and the benefit of self-efficacy for outcomes, were heterogeneous [36]. We also identified several themes in the patients' reflections on the advantages and disadvantages of lifestyle change: Health and Wellbeing, Personal Economy, Acceptance of Change, and Emotions Related to Lifestyle Change. These themes were identical in the groups with high and low adherence.
It is a surprising result that patients score high on avoiding complications (Line 1), but their adherence to the intervention then drops, although this result was not significant in the univariate analysis.
The positive correlation between the patients' expectations of their own efficacy and the duration of their adherence to the intervention is in line with Bandura's theory [18], but despite the wide discussion of self-efficacy in the literature, we found no other studies on self-efficacy and integrated health promotion interventions in patients with alcohol and drug addiction. However, our results are to some degree supported by other studies. A previous review of the effect of self-efficacy on various health care interventions concluded that self-efficacy is a powerful predictor of better intervention outcomes, especially regarding smoking cessation and, to some extent, physical activity and weight control [37].
A more recent study found that high confidence (equivalent to self-efficacy, Line 3) was associated with a positive outcome regarding both drinking and smoking [38], while another study associated low self-efficacy with relapse among drug users [39]. On the other hand, an older study found that a lower self-efficacy score at baseline was associated with higher rates of abstinence after 6 months among patients with alcohol addiction, although the same group of patients showed a two-fold increase in self-efficacy score during this period [40]; this was also supported by another study [41]. Nevertheless, a study by Romero et al. reported that intermediate self-efficacy scores were associated with better outcomes among patients with drug addiction, whereas low and high scores predicted inferior results [42].
Other studies have reported contradictory results on participant adherence to a 12-week physical activity programme. However, many other factors, such as health literacy and numeracy, may also impact self-reporting [43], and no clear cut-offs, mediators or moderators for the effect of health literacy have been determined for clinical use [44].
Methodological Implications
Qualitative methods can help us understand the clinical context of complex interventions and identify unexpected mechanisms [45]. In this study, the qualitative answers shed more light on the results obtained from the quantitative measures. For example, the patients' responses that they find avoiding complications very important are unfolded in their descriptions of which complications they are afraid of experiencing and what consequences these will have for their lives. Against this background, however, it is unexpected to observe that they come to the meetings less frequently.
The way patients interact with the intervention is influenced by their circumstances, attitudes, beliefs, social norms and resources [22,23]. It is interesting that both the group with high and the group with low meeting adherence are aware that continuing their current lifestyle might cause poor health and future health risks, including a possible premature death, and that improving their lifestyle will improve their health. A high awareness of these issues does not seem to influence the motivation to attend the meetings of the intervention, since assessing it as important to avoid complications of comorbidity on the line scale is associated with lower meeting adherence.
In many ways, these results capture the complexity of patient reflections as they stand on the brink of lifestyle change. Advantages and disadvantages coexist. For example, there seems to be an acceptance of change, because when asked, the patients see no advantages of continuing their unhealthy lifestyle and no disadvantages of improving it. The patients point to health and wellbeing as major advantages of changing lifestyle, and they expect their personal economy to improve. At the same time, they express strong emotional barriers to lifestyle change. The routines of their current lifestyle have a strong hold on them; they associate positive emotions with their current lifestyle and expect that changing it will cause negative emotions. These factors might contribute to an explanation of why an intervention might (not) work, for whom, and under what circumstances [45].
These major themes from the reflections of Health and Wellbeing, Personal Economy, Acceptance of Change, and Emotions Related to Lifestyle Change may also be of importance for the general support and facilitation of the changing process among this group of patients with alcohol and drug addiction. The clear messages on personal economy and the high acceptance of change seem easy to include and revisit to drive support for the patients' process of change.
Clinical Implications
A special focus should be placed on how positive emotions, such as inner calm and anxiety relief, are experienced as an advantage of the current lifestyle with addiction, and on how negative emotions are expected to be a disadvantage of changing this lifestyle. Additionally, the expression of emotions related to changing lifestyle should be an important element for the staff supporting the change; they should acknowledge both that staying with the status quo is easier than changing routine and that the current lifestyle is associated with positive emotions, while improving lifestyle is expected to cause negative emotions. While acknowledging the expectations of the patients, intervention staff should also rely on evidence in the area.
Research Implications
In addition to co-morbidity, the VIP-programme intervenes on the four risk factors of smoking, physical inactivity, overweight and malnutrition. Evidence on the association between risk reduction and a positive emotional and mental state already exists in several areas.
Interestingly, a systematic review with meta-analyses from 2014 shows that smoking cessation is associated with reduced depression, anxiety and stress after the period of withdrawal symptoms [46]. Therefore, issues of smoking cessation for patients with a psychiatric diagnosis should not be avoided [47], but on the contrary, encouraged [12].
A review from 2019 shows how exercise has positive effects on brain health, including decreased risks of depression and stress in patients with neurological and mental illnesses [48]. In the general population, physical activity also has a positive effect on anxiety [48], and the evidence is still growing for a general positive effect of physical activity on mental health [49].
We have not been able to identify reviews on the influence of improved nutrition on well-being in this patient group, and this knowledge gap needs to be investigated in future research.
In the VIP-study, the recruited patients were already in treatment for alcohol and drug abuse, and therefore these risk factors were not included in the VIP-programme. However, a trial by Berman et al. showed that participants with a problematic alcohol intake who reduced this intake reported better wellbeing, including lower stress, better satisfaction with social life and lower rates of depressed mood, at 12-month follow-up. There was no difference in the group of drug users [50].
Suggestions for Future Research
The results of this study can generate hypotheses for future studies and suggest explanations that can inform future interventions [22], where such hypotheses can be tested. It would be clinically relevant to investigate whether seeking to obtain a positive outcome triggers lifestyle change more than seeking to avoid a negative outcome; for example, whether seeking improved health triggers lifestyle change more than seeking to avoid a premature death.
The time perspective is another clinically relevant area for investigation. Studies have shown that patients are motivated to quit drinking in the perioperative period to avoid postoperative complications such as cardiopulmonary complications, bleeding episodes and infections [51]. Our results indicate that the patients in this study are not motivated to change lifestyle by reducing the risk of complications to their chronic medical condition. To optimise lifestyle interventions, it would be relevant to investigate whether decision-making about lifestyle change is associated with the expected time perspective (short-term or long-term) of the benefit of the change.
To summarise, we could assume that the surprising quantitative findings reflect the coexistence of positive and negative expectations of lifestyle change among the patients. The patients are concerned about complications, but at the same time they do not have the resources to handle the expected negative emotions that come with the change, and therefore they do not come to the meetings. On the other hand, those who rate themselves high on self-efficacy manage to set aside the expected negative emotions and focus on the expected positive outcomes of improved health and economy, and therefore they come to the meetings to continue their lifestyle change.
Lessons Learned
(1) There is a discrepancy between what patients say and know about the importance of avoiding complications and what they do to avoid them when it comes to adhering to an intervention programme.
(2) Patients have negative expectations towards the emotional side of lifestyle change, although there is evidence to the contrary.
Strengths and Limitations
Among the strengths of this study was the general homogeneity of the included patients, who furthermore received the same VIP intervention irrespective of their self-efficacy score.
Even though 33 (29%) of the patients from the original VIP study were not included in this sub-study due to missing data, the power estimate indicated that the 82 included patients were sufficient for the quantitative analyses. In the same manner, the analysis of data saturation confirmed that the data from the 37 completed boxes would be sufficient for inclusion in the qualitative analyses.
The box was easy to complete and did not require extensive involvement from the staff. On the other hand, it prompted shorter answers, which could be challenging for in-depth data extraction. When filling in the box, the patient was supposed to begin with the question "Advantages of the current lifestyle" and finish with the question "Advantages of changing". This was intended to structure the considerations and ambivalence related to the process of improving lifestyle [10]. It could also predispose the patients to answer in a certain way; however, since the project staff did not interfere with completing the boxes, this possible bias was considered low.
The two statistically significant variables from the quantitative analyses were avoiding complications and self-efficacy. The risk of a type I error could be reduced by repeating the study. On the other hand, the non-significant results for the other variables could be attributed to a type II error and might have reached significance if the study population had been even larger.
The study only included patients from addiction treatment organisations in Malmö, Skåne region, which might have different demographics compared not only with other countries but also with other regions within Sweden. This implies that a similar study conducted in a different location might yield different findings.
In this study, meeting adherence was defined in relation to the 6-week intervention. Longer or shorter interventions could thus show different results.
Conclusions
We found that when patients score high on wanting to avoid complications, their meeting adherence drops, while a high level of self-efficacy was positively associated with high adherence to the intervention meetings among patients with alcohol and drug addiction. The negative association between avoiding complications and meeting adherence is surprising and should be further investigated in future studies. Self-efficacy as a predictive factor for better intervention compliance in this group should likewise be interpreted with caution. There was no difference in the identified themes of pros and cons between the patients with high and low adherence. In general, the patients showed solid concerns about their health and wellbeing as well as their personal economy. They were ready for change but also displayed strong emotions regarding the process of change.
"year": 2019,
"sha1": "98899395b22875c6f3b9f84b6e40301fe3baaeda",
"oa_license": "CCBY",
"oa_url": "https://res.mdpi.com/d_attachment/ijerph/ijerph-16-02285/article_deploy/ijerph-16-02285.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "98899395b22875c6f3b9f84b6e40301fe3baaeda",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
External-Cavity Quantum Cascade Laser-Based Gas Sensor for Sulfur Hexafluoride Detection
The external-cavity quantum cascade laser (ECQCL) is an ideal mid-infrared (MIR) spectral light source for determining large-molecule absorption spectral features with broad transition bands. For this paper, a gas sensor system was developed using a broadband tunable ECQCL and a direct absorption spectroscopy detection scheme with a short-path absorption cell of 29.6 cm. A cheap and miniaturized quartz crystal tuning fork (QCTF)-based light detector was used for spectral signal detection. The characteristics of the QCTF detector were theoretically simulated and experimentally observed. To demonstrate this sensing technique, sulfur hexafluoride (SF6) was selected as the analyte, which can be used as an effective indicator to identify fault types in gas-insulated electrical equipment. Preliminary results indicated good agreement between the experimentally observed data and reference spectra from the NIST database and previous publications, and the gas sensor system showed a good linear response to SF6 gas concentration. Finally, Allan-Werle deviation analysis indicated that a detection limit of 1.89 ppm for SF6 was obtained with a 1 s integration time, which can be further improved to ~0.38 ppm by averaging up to 131 s.
Introduction
Laser spectroscopy-based sensors rely on the direct and indirect detection of laser light interacting with a target object, which inherently allows for noninvasive trace gas measurements with high precision, high accuracy and fast response. To date, a variety of spectroscopic techniques, from absorbance, reflectance, transmission and scattering to established methods such as direct absorption spectroscopy (DAS), wavelength modulation spectroscopy (WMS) or calibration-free WMS [1,2], photoacoustic spectroscopy (PAS), multi-pass cell-based laser absorption spectroscopy, cavity-enhanced absorption spectroscopy (CEAS), integrated cavity output spectroscopy (ICOS), cavity ring-down spectroscopy (CRDS) and Raman spectroscopy, have been widely used for the detection of most atmospheric molecules (including isotopes) [3], and the sensitivity achieved by these spectroscopic methods can be comparable to that of traditional mass spectrometry [4]. Therefore, laser spectroscopy-based detection techniques and sensors have become attractive and powerful analysis tools for environmental sensing, defense and public security, and biomedical, industrial and agricultural applications. Among the various laser light sources, quantum cascade lasers (QCLs), initially demonstrated at Bell Labs in 1994, are ideal light sources for spectroscopic applications, especially external-cavity quantum cascade lasers (ECQCLs), which cover almost the entire MIR spectral region (between 2.5 µm and 25 µm) and provide broad spectral tuning intervals of up to several hundred cm−1 from a single ECQCL module; thus, ECQCL-based spectroscopy sensors can be used for the simultaneous detection of multiple light gas molecules with narrow-band absorption features or heavy molecules with broadband spectral signatures [5].
Sulfur hexafluoride (SF6) is a colorless and odorless gas with a relative molecular mass of 146.05. It is often used as an insulating gas in gas-insulated electrical equipment in power systems, such as gas-insulated switchgear, transmission pipes, gas circuit breakers and other industrial electrical installations, mainly due to its stable chemical properties and excellent insulating and arc-extinguishing abilities. Usually, when insulation faults such as overheating and partial discharge occur in gas-insulated electrical equipment, SF6, although an inert gas, will decompose and react with other constituents (such as H2O, O2 and other substances containing carbon, hydrogen and oxygen) in the insulating gas to generate various so-called fault characteristic gases (such as CO/CO2, SO2/H2S/COS, SO2F2, HF, CF4, etc.), which can be used as effective indicators to identify fault types of gas-insulated electrical equipment. Recently, laser spectroscopy-based techniques and sensors have proven to be effective diagnostic tools for the non-destructive detection of fault characteristic gases with a high degree of sensitivity and selectivity.
For this paper, a spectroscopic sensor system based on a widely tunable ECQCL was developed for the detection of gas-phase sulfur hexafluoride (SF6). For spectral signal recording, a cheap and miniaturized quartz crystal tuning fork (QCTF)-based light detector was integrated for laser signal detection. A preliminary evaluation of the ECQCL gas sensor system is discussed for the quantitative and qualitative analysis of gaseous sulfur hexafluoride. To realize spectroscopic detection of SF6, a theoretical simulation was first performed to select the optimal spectral window. The spectral characteristics of SF6 between 500 cm−1 and 4000 cm−1 were simulated according to the NIST (National Institute of Standards and Technology) database [6] and are provided in Figure 1. For clarity, the inset shows the spectral region within our ECQCL laser tuning range between 1150 cm−1 and 1450 cm−1. In this figure, the X-coordinate wavenumber (in units of cm−1) is defined as the reciprocal of the wavelength (in units of cm).
Theory of Laser Absorption Spectroscopy and Quartz Tuning Fork Detector
When detecting the concentration of chemical gases using optical spectroscopy sensors, the mutual absorption process between the incident laser light and the absorbing gas species satisfies the well-known Lambert-Beer law, which is usually used for theoretical analysis and data processing [7]. When an excited laser with an emitting wavelength λ passes through a uniform gas medium, the relationship between the incident light intensity I0 and the transmitted light intensity I can be described with the Lambert-Beer law; the corresponding mathematical expression is:

I(λ) = I0(λ) exp[−α(λ) C L], (1)

where α(λ) is the absorption coefficient of a specific substance at wavelength λ, C is the concentration of the chemical gas to be measured, and L is the effective light path for the interaction between the laser light and the chemical gas. For evenly distributed gas molecules, the absorption coefficient, the absorption line shape, the line strength and the number of molecules satisfy:

α(λ) = S(T) N(T, P) φ(λ − λ0), (2)

where S(T) is the gas molecular absorption line intensity at temperature T (K), N(T, P) is the number density of the gas molecules, and φ(λ − λ0) is the gas molecular absorption line shape. Generally, the gas molecule number density N(T, P) can be described as:

N(T, P) = N0 (P/P0)(Tref/T), (3)

where N0 = 2.678 × 10^19 mol/(cm3·atm) is the molecular number density at the reference conditions, Tref is the reference temperature, usually taken as 296 K, and T is the actual laboratory temperature. P0 is the reference pressure, usually taken as 1 atm, and P is the actual sample gas pressure. The molecular absorption line shape usually depends on the experimental conditions, especially the broadening effects. Typically, both Doppler and collisional broadening effects are significant and neither can be neglected, while natural broadening is much less significant than collisional broadening and can be completely neglected. Thus, a representative line shape given by the convolution of Doppler and collisional broadening, called the Voigt function, is widely used. Moreover, the gas molecular absorption line shape φ(λ − λ0) satisfies the normalization condition:

∫ φ(λ − λ0) dλ = 1. (4)

According to the Lambert-Beer law described above, the integrated absorbance area A of a single specific molecular transition can be described as:

A = ∫ ln[I0(λ)/I(λ)] dλ = S(T) N(T, P) C L. (5)

When the relevant experimental conditions (such as temperature T, pressure P, optical path L and spectral line parameters) are known and the molecular absorption spectrum is measured, the number or concentration of the absorbing gas molecules can be calculated by combining Equation (5) with a line shape-fitting algorithm.
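As a worked numerical example of Equations (1) and (5), the short Python sketch below simulates a transmission measurement and inverts it to recover the gas concentration; the absorption coefficient used is an assumed illustrative value, not one taken from this work.

```python
import numpy as np

# Lambert-Beer law, Eq. (1): I = I0 * exp(-alpha * C * L)
I0 = 1.0         # incident intensity (arbitrary units)
alpha = 5.0e-4   # assumed absorption coefficient per ppm per cm (illustrative)
L = 29.6         # optical path length of the gas cell, cm
C_true = 100.0   # gas concentration, ppm

I = I0 * np.exp(-alpha * C_true * L)

# Inverting the law: the absorbance ln(I0/I) equals alpha * C * L,
# so a measured transmission directly yields the concentration.
absorbance = np.log(I0 / I)
C_est = absorbance / (alpha * L)
print(f"transmitted fraction = {I:.3f}, recovered C = {C_est:.1f} ppm")
```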
Moreover, the quartz tuning fork is used as a light detector by employing its resonant effect and piezoelectric effect [8]. The high resonant frequency (typically 32.768 kHz) allows for a good noise suppression effect. To realize laser spectral signal detection, the exciting laser source should be pulsed (or modulated, in the case of a continuous-wave laser), and the pulse repetition rate or modulation frequency is set to match the resonant frequency of the quartz tuning fork detector. Under this frequency-matching condition, the transmitted light beam can excite the resonance of the quartz tuning fork, and the mechanical resonance process produces a piezoelectric current through the piezoelectric effect. The piezoelectric current can be measured and converted into a voltage signal using a low-noise preamplifier. Theoretically, the mechanical model of the quartz tuning fork can be simplified as a second-order damping-mass-spring system; its effective mass m can be expressed in terms of its density ρ, length l, width w and thickness h. The mathematical relationship between mass and geometric parameters is expressed as follows:

m ≈ 0.2427 ρ l w h. (6)

Moreover, the relationship between the resonance frequency of the quartz tuning fork and the mechanical parameters is:

f0 = (1/2π) √(k/m), (7)

where k represents the elastic coefficient of the quartz crystal, which, with Y the Young's modulus of the quartz crystal, is described as:

k = Y h w³ / (4 l³), (8)

where the prong is taken to bend along its width w. When the relevant parameters of the quartz tuning fork are constant, the corresponding resonance frequency of the quartz tuning fork can be calculated and analyzed using finite element simulation software. Moreover, the quality factor Q (i.e., Q-value) is another key parameter for judging the performance of the quartz tuning fork-based detector; it expresses the loss of vibration energy, or the amount of damping. The Q-value can be determined from the experimentally measured resonant profile according to the following equation:

Q = f0 / Δf, (9)

where Δf is the frequency bandwidth (full width at half maximum) at 1/√2 of the maximum signal amplitude, typically a few Hz.
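Assuming that Equations (6)-(8) take the standard cantilever forms written above, the resonance frequency can be estimated directly from the prong geometry and material constants quoted later in this paper; the lumped model is only approximate, and the linewidth Δf below is back-calculated from the reported Q-value purely for illustration.

```python
import math

# Prong geometry and material constants quoted in this work.
l = 3.7e-3    # prong length, m
w = 0.6e-3    # prong width (assumed in-plane bending direction), m
h = 0.3e-3    # fork thickness, m
Y = 70e9      # Young's modulus, Pa
rho = 2300.0  # density, kg/m^3

m_eff = 0.2427 * rho * l * w * h           # Eq. (6): effective prong mass
k = Y * h * w**3 / (4 * l**3)              # Eq. (8): cantilever spring constant
f0 = math.sqrt(k / m_eff) / (2 * math.pi)  # Eq. (7): roughly 39 kHz here, above
                                           # the ~32.4 kHz finite-element result

# Eq. (9): quality factor from a measured resonance profile.
f_meas = 32753.4   # measured resonance frequency, Hz
df = 5.06          # bandwidth, Hz (back-calculated from a Q of about 6469)
Q = f_meas / df
print(f"f0 = {f0:.0f} Hz, Q = {Q:.0f}")
```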
Experimental Details
Figure 2 shows the schematic diagram of the ECQCL and QCTF detector-based gas detection system. The laser source is a pulsed room-temperature (RT) ECQCL (Block Engineering, Southborough, MA, USA) with a tuning range of 1130-1437 cm−1 (6.96-8.85 µm) and an average output power between 0.5 and 20 mW. Since the ECQCL power shows significant dependence on both its pulse repetition rate and pulse width [9], the maximum pulse width of 350 ns was selected; the pulse repetition rate, however, was set to match the resonant frequency of the QCTF detector to realize laser signal detection. The ECQCL is automatically controlled by a flexible and user-friendly software interface, an internal trigger controls the pulses at regular intervals, and a sync-out signal can be utilized to trigger other external laboratory equipment. A homemade glass gas sample cell with an optical path length of ~29.6 cm was used for spectral measurements. The laser beam is directly coupled into the gas cell and then collimated and focused onto the detector using a CaF2 lens with a focal length of 50 mm. Unlike traditional spectroscopy systems, in which a standard mid-infrared MCT (mercury cadmium telluride) detector is commonly used, here a quartz crystal tuning fork (QCTF)-based photodetector was employed for recording the laser spectral signals, based mainly on its piezoelectric effect and resonant properties. Prior to the experiment, the characteristics of the QCTF detector were investigated theoretically and experimentally, as described in the next section. Finally, a home-made LabVIEW program integrating data acquisition (NI USB-6259, 1.25 MHz sampling rate) and signal-processing analysis was used for sensor system control and signal acquisition.
Results and Discussion
Before the gas detection experiments, the resonant frequency of the QCTF detector was first simulated by establishing a physical model combined with the COMSOL finite element analysis method. The establishment of the QCTF resonant model includes setting the tuning fork geometric parameters and material parameters. The QCTF fork arm simulated in this experiment is 3.7 mm long and 0.6 mm wide; the entire physical dimensions of the QCTF are 6 mm long, 1.5 mm wide and 0.3 mm thick. The QCTF physical diagram is shown in Figure 3. For the theoretical simulation using finite element analysis, the simple mechanical model of the QCTF was established as shown in Figure 4. The main component of the material is SiO2; an elastic modulus of 70 GPa, a Poisson's ratio of 0.17 and a density of 2300 kg·m−3 were used for the calculation. The established QCTF model has a good symmetrical structure and reasonable parameter settings, which makes it very close to the experimental conditions.
In the theoretical simulation, the COMSOL finite element method is used to decompose a series of continuous solution domains into multiple groups of discrete small regions, and an approximate function is used in each small region to represent the unknown field function over the solution domain. The approximate function is generally expressed using the numerical interpolation of the original unknown field function and its derivatives at each node in a small region. By changing a series of continuous, infinite-degrees-of-freedom problems into discrete, finite-degrees-of-freedom problems, the whole simulation process can be implemented quickly. When simulating the resonant frequency of the QCTF, we found that the vibration modes of the QCTF can be divided into symmetrical and asymmetric vibrations [10], and only the symmetrical vibration mode is effective for producing the piezoelectric effect; thus, the symmetrical vibration mode was investigated in detail. Through COMSOL finite element analysis of the first six resonant modes of the QCTF, it was found that the fourth-order resonant mode is the symmetrical vibration, in which the fork arms of the QCTF first move outward and then inward, as shown in Figure 5. For this resonant mode, the calculated resonant frequency is about 32,406 Hz.
After the theoretical simulation, the resonant frequency response characteristics of the QCTF were further investigated experimentally to better understand its photoelectric conversion efficiency. As described above, the QCTF-based photoelectric detector collects the laser beam by employing its resonant effect and piezoelectric effect. The laser signal of the QCTF was first recorded in the time domain, and then its frequency spectrum was calculated using a self-developed fast Fourier transform (FFT) algorithm; the peak value of the frequency spectrum was finally extracted as the signal amplitude. The QCTF signal-processing procedure is demonstrated in Figure 6. Following this procedure, the experimentally measured resonant profile of the QCTF detector is shown in Figure 7, and a Lorentzian line-shape model was used to fit the experimental data. The analysis indicated that the QCTF used in this work had a resonant frequency of 32,753.4 Hz in ambient air and a quality factor Q-value of 6468.65.
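The processing chain just described (time-domain record, FFT peak extraction, Lorentzian fit of the resonance profile) can be sketched in Python as follows; all signals here are synthesised stand-ins rather than measured data, and the scan values are invented.

```python
import numpy as np
from scipy.fft import rfft, rfftfreq
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
fs = 1.25e6                       # DAQ sampling rate used in this work, Hz
t = np.arange(0, 0.05, 1 / fs)    # 50 ms record

# Synthetic stand-in for the QCTF time-domain signal: a 32.75 kHz tone in noise.
f0 = 32753.4
x = np.sin(2 * np.pi * f0 * t) + 0.2 * rng.standard_normal(t.size)

# FFT and peak extraction, mirroring the described processing chain.
spec = np.abs(rfft(x)) / t.size
freq = rfftfreq(t.size, 1 / fs)
peak_amp = spec.max()
peak_freq = freq[np.argmax(spec)]

# Lorentzian fit of a resonance profile (amplitude vs. excitation frequency);
# the profile is synthesised here, standing in for a measured frequency sweep.
def lorentzian(f, a, fc, gamma, c):
    return a * gamma**2 / ((f - fc) ** 2 + gamma**2) + c

f_scan = np.linspace(f0 - 20, f0 + 20, 81)
profile = lorentzian(f_scan, 1.0, f0, 2.5, 0.01) + 0.01 * rng.standard_normal(81)
popt, _ = curve_fit(lorentzian, f_scan, profile, p0=[1.0, f0, 3.0, 0.0])
Q = popt[1] / (2 * popt[2])       # FWHM of this Lorentzian is 2 * gamma
print(f"peak at {peak_freq:.1f} Hz, fitted Q = {Q:.0f}")
```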
To further evaluate the gas sensor system, sulfur hexafluoride gas was used for the relevant experiments. Various sulfur hexafluoride samples with different concentrations were prepared for the experimental tests. The whole experiment was carried out at room temperature and standard atmospheric pressure. At the beginning of the experiment, the gas sample cell was evacuated to record the background signal for signal-normalization analysis. Then, the sulfur hexafluoride sample was filled and diluted for the signal spectra. In this study, the ECQCL laser was tuned from 1130 cm−1 to 1440 cm−1 at a scan rate of 1 cm−1/s. Experimental data were synchronously collected at an approximately 1 Hz sampling rate. To improve the spectral signal-to-noise ratio (SNR), the repetition rate of the ECQCL was set to match the QCTF resonance frequency (i.e., 32,753.4 Hz) as closely as possible, so as to achieve the maximum output power. With the parameters mentioned above, it takes about 310 s to finish a whole wavelength tuning scan. Figure 8 (upper panel) shows the absorption spectra of sulfur hexafluoride gas at different concentrations. By comparison with the reference spectrum from the NIST database shown in Figure 1, the experimental data confirm two distinct absorption bands of sulfur hexafluoride near 1257 cm−1 and 1390 cm−1, respectively. However, a weak absorption peak near 1355 cm−1 is not found in the experimental data; note that this weak peak is also not found in a previous publication reported by Chapados et al. [11]. Moreover, the concentration response of the sensor system was also evaluated by selecting the absorption peaks at 1257 cm−1 and 1390 cm−1, respectively, as shown in Figure 8 (bottom panel). A linear regression algorithm was used to analyze the experimental data. As the theory predicts, a good linear dependence of the absorption signal at 1257 cm−1 and 1390 cm−1 on SF6 concentration was obtained, with regression coefficients R2 of 0.9997 and 0.99264, respectively. The results indicate that the response of the ECQCL gas sensor system is proportional to the concentration of the absorbing molecule, and the calibration curve can be used for determining the gas concentration of an unknown sample.
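A minimal sketch of such a linear calibration and its inversion for an unknown sample is given below; the concentration-signal pairs are invented for illustration and do not reproduce the measured values.

```python
import numpy as np

# Hypothetical calibration data: normalised absorption at 1257 cm^-1 versus
# SF6 concentration (illustrative values, not this paper's measurements).
conc = np.array([50.0, 100.0, 200.0, 400.0, 800.0])   # ppm
signal = np.array([0.021, 0.043, 0.085, 0.171, 0.339])

slope, intercept = np.polyfit(conc, signal, 1)
pred = slope * conc + intercept
r2 = 1 - np.sum((signal - pred) ** 2) / np.sum((signal - signal.mean()) ** 2)

# Inverting the calibration curve yields the concentration of an unknown sample.
unknown_signal = 0.120
c_unknown = (unknown_signal - intercept) / slope
print(f"R^2 = {r2:.4f}, unknown sample = {c_unknown:.0f} ppm")
```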
To evaluate the stability and the sensitivity of the system, Allan-Werle deviation analysis was applied to the experimental data [12,13]. It was conducted on a continuous measurement of the spectral signal at 1257 cm−1, as shown in Figure 9a. The Allan-Werle deviation plot presented in Figure 9b indicates that the sensitivity of the developed SF6 gas sensor system is 1.89 ppm at a 1 s averaging time, and that the measurement sensitivity can be improved to 0.38 ppm with an averaging time of 131 s.
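The stability analysis can be outlined with a simple non-overlapping Allan deviation routine; the 1 Hz concentration series below is synthetic white noise scaled to the reported 1 s sensitivity, so it merely illustrates the expected averaging behaviour.

```python
import numpy as np

def allan_deviation(y, tau0, taus):
    """Non-overlapping Allan deviation of a time series y sampled every
    tau0 seconds, evaluated at the averaging times in 'taus'."""
    out = []
    for tau in taus:
        m = int(tau / tau0)               # samples per averaging bin
        n_bins = y.size // m
        if n_bins < 2:
            out.append(np.nan)
            continue
        means = y[: n_bins * m].reshape(n_bins, m).mean(axis=1)
        out.append(np.sqrt(0.5 * np.mean(np.diff(means) ** 2)))
    return np.array(out)

# Synthetic 1 Hz concentration record as a stand-in for the measured series:
# white noise with ~1.89 ppm standard deviation at 1 s averaging time.
rng = np.random.default_rng(1)
y = 100.0 + 1.89 * rng.standard_normal(3600)
taus = np.array([1, 2, 5, 10, 30, 60, 131])
adev = allan_deviation(y, tau0=1.0, taus=taus)  # white noise falls ~ 1/sqrt(tau)
```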
Conclusions
For this paper, a gas sensor system based on a mid-infrared ECQCL laser and a cheap, miniaturized QCTF detector has been developed for identifying an indicator of fault types in gas-insulated electrical equipment. To demonstrate this sensing technique, the sulfur hexafluoride (SF6) molecule was selected as the analyte. Preliminary theoretical analysis and experimental evaluation were performed to measure the sulfur hexafluoride absorption spectrum. As a result, good agreement was obtained between the experimentally observed data and the reference spectra from the NIST database and previous publications. As expected, a good linear dependence of the absorption signal at 1257 cm−1 and 1390 cm−1 on SF6 concentration was obtained, with regression coefficients R2 of 0.9997 and 0.9926, respectively. Finally, Allan-Werle deviation analysis was used for stability and sensitivity assessment; the result indicated that a detection limit of 1.89 ppm for SF6 was obtained with a 1 s integration time, which can be further improved to ~0.38 ppm by averaging up to 131 s. Optimization of the experimental scheme is ongoing; the sensitivity can be significantly improved by combining the system with a multi-pass absorption cell or a high-precision optical cavity [14,15], as well as with other high-sensitivity spectroscopy techniques, such as photoacoustic spectroscopy (PAS) [16]. Moreover, as can be seen from Figure 1, a stronger absorption spectral region would provide higher detection sensitivity; therefore, the external-cavity quantum cascade laser used here can be upgraded to other central wavenumbers (such as 950 cm−1) by replacing its internal emitting chip. The developed gas sensor system is expected to be an effective analysis tool for the non-destructive detection of fault characteristic gases with high sensitivity and selectivity.
Figure 1. Sulfur hexafluoride absorption spectra simulated using the NIST database.
Figure 2. The schematic diagram of the ECQCL and QCTF-based gas detection system.
Figure 3. Structural model diagram of the QCTF.
Figure 4. The main component of the material is SiO2; an elastic modulus of 70 GPa, a Poisson's ratio of 0.17 and a density of 2300 kg m−3 are used for the calculation. The established QCTF model has a good symmetrical structure and reasonable parameter settings, which makes the model very close to the experimental conditions.
Figure 7. The measured QCTF resonant profile at ambient air and the Lorentz fitting.
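Resonance profiles such as the one in Figure 7 are commonly fitted with a Lorentzian line shape to extract the resonant frequency f0, the full width at half maximum (FWHM), and hence the quality factor Q = f0/FWHM. The sketch below is a generic illustration with invented frequency-sweep data; the initial-guess values and the data are assumptions, not values from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, amp, f0, gamma, offset):
    """Lorentzian profile; gamma is the half width at half maximum (Hz)."""
    return amp * gamma**2 / ((f - f0) ** 2 + gamma**2) + offset

# Hypothetical frequency sweep around a ~32.7 kHz tuning-fork resonance.
freq = np.linspace(32700.0, 32800.0, 201)
clean = lorentzian(freq, amp=1.0, f0=32755.0, gamma=2.0, offset=0.05)
measured = clean + 0.01 * np.random.default_rng(1).standard_normal(freq.size)

# Fit; p0 holds rough initial guesses for (amp, f0, gamma, offset).
popt, pcov = curve_fit(lorentzian, freq, measured,
                       p0=[1.0, 32750.0, 5.0, 0.0])
amp, f0, gamma, offset = popt

q_factor = f0 / (2.0 * gamma)   # Q = f0 / FWHM, with FWHM = 2 * gamma
print(f"f0 = {f0:.2f} Hz, FWHM = {2*gamma:.2f} Hz, Q = {q_factor:.0f}")
```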
Figure 8. Experimentally measured SF6 spectra (upper panel) and signal amplitude at 1257 cm−1 and 1390 cm−1 as a function of sample concentration (bottom panel), with the corresponding linear fits.
Figure 9. (a) Time series of the SF6 concentration of a continuously measured standard gas sample; (b) Allan-Werle deviation as a function of signal averaging time. | 2023-01-01T16:05:39.491Z | 2022-12-30T00:00:00.000 | {
"year": 2022,
"sha1": "c28f3ae20faa343c259f15bce814f151aac2d23a",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2227-9040/11/1/30/pdf?version=1672382103",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "b78bfb5a2a1d524694c7e2480fe4ea532b0ff2ce",
"s2fieldsofstudy": [
"Chemistry",
"Engineering"
],
"extfieldsofstudy": []
} |
40514684 | pes2o/s2orc | v3-fos-license | Effect of partially replacing a barley-based concentrate with flaxseed-based products on the rumen bacterial population of lactating Holstein dairy cows
Aims: The effects of partial replacement of a barley-based concentrate with flaxseed-based products on the rumen bacterial population of lactating Holstein dairy cows were evaluated. Methods and Results: Treatments fed were CONT, a normal diet that included barley silage, alfalfa hay and a barley-based concentrate that contained no flaxseed or faba beans; FLAX, inclusion of a nonextruded flaxseed-based product containing 55.0% flaxseed, 37.8% field peas and 6.9% alfalfa; EXT, similar to FLAX, but the product was extruded; and EXTT, similar to FLAX, but the product was extruded and field peas were replaced by high-tannin faba beans.
Introduction
Enriched unsaturated fatty acid animal products may offer beneficial effects to human health (Lee et al. 2005). Increasing the concentration of these fatty acids in ruminant products, however, is difficult due to intensive biohydrogenation by ruminal micro-organisms (Palmquist et al. 1993; Bauman and Griinari 2001; Jenkins et al. 2008). Thus, there has been significant interest in developing feeding strategies to decrease ruminal biohydrogenation of unsaturated fatty acids while ensuring high availability in the small intestine (Beam et al. 2000). Supplementation with raw flaxseed or extruded flaxseed has been suggested to be an effective strategy to increase the availability of polyunsaturated fatty acids in the small intestine (Litton 2008; He et al. 2012). For example, extruded flaxseed increased the content of α-linolenic acid in blood serum or milk of dairy cattle (Kennelly 1996; Oeffner et al. 2013; Moats et al. 2015). Furthermore, the inclusion of dietary tannins in ruminant rations may be an effective approach for mitigating the biohydrogenation of polyunsaturated fatty acids (Vasta et al. 2009). However, dietary fat (Enjalbert et al. 2017) or tannins (Vasta et al. 2010) may have detrimental effects on bacterial taxa within the rumen or may negatively affect animal production performance (Vasta et al. 2010). Nonetheless, most studies evaluating these dietary strategies for mitigating fatty acid biohydrogenation have not determined effects on the broad bacterial community structure of the rumen of lactating Holstein cows. Consequently, the impact of dietary unsaturated fatty acids, feed extrusion and tannins on the broad bacterial population remains poorly understood in vivo. This research gap can now be addressed using molecular techniques in combination with bioinformatics, which have enabled researchers to evaluate the impact of diets on bacterial community structure (Krause et al. 2013; Chaucheyras-Durand and Ossa 2014; Castillo-Lopez et al. 2017). For example, high-throughput DNA sequencing provides new insights into the broad bacterial population of the rumen (Callaway et al. 2010; Aldai et al. 2012; Castillo-Lopez et al. 2014).
In addition, the bacterial population of the rumen influences production performance (Myer et al. 2015), ruminal fermentation (Fernando et al. 2010; Anderson et al. 2016), ruminal pH and fermentation efficiency (Callaway et al. 2010), metabolizable protein supply (Castillo-Lopez et al. 2013) and milk composition in dairy cattle (Jami et al. 2014). Consequently, investigating the effects of diet composition and ingredients being generated by the dairy feeding industry on the rumen bacterial community is essential not only for improving milk yield and composition, but also for preventing negative impacts on rumen function. Therefore, the objective of this study was to evaluate the effect of partial replacement of a barley-based concentrate with different flaxseed-based products on the rumen bacterial community structure of lactating Holstein dairy cows, assessed with high-throughput DNA sequencing. Our hypothesis is that inclusion of flaxseed-based products in dairy rations will shift the abundance of ruminal bacteria.
Animal care and housing, and experimental design
This experiment was conducted at the University of Saskatchewan Rayner Dairy Cattle Research and Teaching Facility (Saskatoon, Saskatchewan, Canada); it was performed in accordance with the guidelines published by the Canadian Council on Animal Care (1997). The protocols used in this study were preapproved by the University of Saskatchewan Animal Care and Use Committee (protocol number 20040048).
A total of eight multiparous, lactating Holstein cows from the University of Saskatchewan Greenbrae herd (mean and SD, 116.5 ± 17.5 DIM; 712.7 ± 92.3 kg BW) were used in a replicated 4 × 4 Latin square experimental design. Four of these cows were fitted with permanent ruminal cannulae to facilitate ruminal digesta sampling for microbial community analyses. Each experimental period comprised 28 days, which consisted of 26 days for dietary adaptation followed by 2 days for sample collection to provide enough time for animal adaptation to treatment change (Lillis et al. 2011; Boots et al. 2013). Cows were housed in individual tie stalls with continuous access to fresh water and feed except during milking. Animals were milked three times daily at 04:30, 12:30 and 19:00 h in a double six Herringbone parlour (DeLaval International, Peterborough, ON). The individual tie stalls were equipped with rubber mats. In addition, wood shavings were used for bedding and were replaced daily.
Experimental treatments, feed samples and feed chemical analysis
Rations were offered twice daily at 09:30 and 17:00 h for ad libitum access as total mixed rations. Each ration was mixed using a small-batch mixing cart (Data Ranger, American Calan, Northwood, NH). Treatments (DM basis; Table 1) were (i) CONT, a normal diet containing 28.1% barley silage, 20.0% alfalfa hay and 51.8% of a barley-based concentrate that contained no flaxseed or faba beans; (ii) FLAX, inclusion of 11.4% of a nonextruded flaxseed-based product which contained 55.0% flaxseed, 37.8% ground field peas and 6.9% dehydrated alfalfa; (iii) EXT, inclusion of 11.4% of an extruded flaxseed-based product which contained 55.0% flaxseed, 37.8% ground field peas and 6.9% dehydrated alfalfa and (iv) EXTT, inclusion of 11.4% of an extruded flaxseed-based product which contained 55.0% flaxseed, 37.8% ground high-tannin faba beans and 6.9% dehydrated alfalfa.
The barley-based concentrate was partially substituted with the inclusion of the corresponding flaxseed-based product in FLAX, EXT and EXTT, and these products included 0.4% of mould inhibitor plus vitamin E as antioxidant. High-tannin faba beans corresponded to variety Malik 9-4. Experimental diets were formulated based on two factors: (i) providing similar levels of net energy for lactation and (ii) achieving dietary ether extract levels approaching, but not exceeding, 6% (DM basis) for the three flaxseed-containing treatments. All flaxseed-based products were manufactured and supplied by a local company (Oleet Processing Ltd., a division of O&T Farms Ltd., Regina, SK, Canada). Extruded flaxseed-based products were manufactured using a dry extrusion method with a single screw extruder (Model 2500; Insta-Pro International, Urbandale, IA) with barrel temperature averaging 120°C.
Samples of rations, barley silage, alfalfa hay, barley-based concentrate and flaxseed-based products were collected daily from day 21 to 28 of each period and pooled by treatment within each period. Feed samples were stored at −20°C pending analysis for chemical composition. In addition, barley silage samples were collected twice a week during the experiment for microwave DM determination (Valkeners et al. 2008). Briefly, a sample of approximately 100 g was heated in a microwave oven for 4 min. During the second step, drying time was decreased to 30 s, and the second step was repeated until obtaining a constant weight in two consecutive measurements. To avoid burning of the sample, a glass of water was also placed in the microwave. These DM data were used for adjusting the diet DM to ensure proper inclusion of ingredients in each treatment.
Alfalfa hay, barley silage and concentrate samples were dried at 55°C in a forced air oven for 48 h, and orts were freeze dried. Dried feed ingredients and orts were then ground to pass through a 1-mm screen (Christy-Norris mill, Christy and Norris Ltd., Chelmsford, UK) and analysed for chemical composition by an external laboratory (Cumberland Valley Analytical Services, Hagerstown, MD), which included DM (method no. 930.15; AOAC 2000), N (method no. 990.03; Leco FP-528 Nitrogen Combustion Analyzer; Leco Corp., St. Joseph, MI), NDF (Van Soest et al. 1991), starch (Hall 2009), ether extract using diethyl ether (method no. 2003.05; AOAC 2006) and ash (method no. 942.05; AOAC 2000). The nutrient composition of each total mixed ration (Table 2) was calculated based on analysis of individual ingredients, barley-based concentrate and flaxseed-based products and the rate of inclusion in the respective treatment. This method of reporting chemical composition of dairy diets is highly recommended, because when sampling total mixed rations for analysis of chemical composition, results may be affected by sampling variation (Weiss et al. 2016). Feed fatty acid analysis was conducted at Lipid Analytical Services Ltd. (Guelph, ON, Canada); concentration of fatty acids was expressed as per cent of fatty acid methyl esters. The chemical analyses of individual feed ingredients were then used to calculate the chemical composition of experimental diets. In addition, feed samples were submitted to Lethbridge Research Centre (Lethbridge, AB) for determination of tannins using the acid-butanol assay (Porter et al. 1986).
Sampling of whole ruminal contents for bacterial community analysis
On days 27 and 28 of each experimental period, samples of intact, nonstrained ruminal contents (solid and liquid fractions) were taken using new palpation sleeves for each cow at each sampling time point. In order to obtain representative samples from the rumen of each cow, grab samples were taken from the caudal ventral sac, cranial ventral sac and two samples from the ruminal digesta mat in the dorsal rumen of each animal; samples were collected so that every 6-h interval in a 24-h period was represented. Specifically, these samples were collected at 10:00, 16:00 and 22:00 h on day 27, and 04:00 h on day 28. Within each time point, samples collected from the same cow were pooled, and a 10-ml subsample was placed in a sterile 15-ml vial and immediately snap frozen at −80°C. Thus, a total of 64 composited ruminal digesta content samples were collected during the trial (four cows × four time points × four experimental periods).
To obtain digesta samples representative of a 24-h period, at the end of the experiment, these samples were pooled to obtain one sample per cow within each of the four periods, as previously outlined and conducted by other researchers for samples collected from cattle for microbial community evaluations (Lillis et al. 2011; Boots et al. 2013); these samples were used for DNA extraction, sequencing and bacterial phylogenetic analysis (Paz et al. 2016; Xie et al. 2016). The quality of the amplified DNA was verified by resolving on a 1.5% agarose gel. Amplicons from each sample were pooled in equal amounts using the epMotion M5073 automated system (Eppendorf, Hauppauge, NY) and the resulting pooled library was purified using the Pippin Prep kit (Sage Science, Beverly, MA) according to the manufacturer's instructions, and analysed according to the Bio Analyzer High Sensitive DNA kit (Agilent Technologies, Santa Clara, CA); then, DNA concentration was measured with a Qubit 2.0 fluorometer (Invitrogen, Carlsbad, CA) and the library was stored at −20°C for later analyses.
High-throughput DNA sequencing and bacterial phylogenetic analysis
The amplicon library was subjected to high-throughput DNA sequencing at the Department of Animal Science of the University of Nebraska-Lincoln according to the protocol utilized by Paz et al. (2016) and Xie et al. (2016). Briefly, this method was conducted using the Ion Torrent Personal Genome Machine (PGM; Life Technologies, Carlsbad, CA), and applying the Sequencing Kit v2 on a 316 chip according to the manufacturer's instructions; then, the high-quality DNA sequences were binned into their respective samples based on their barcodes. Specifics for the methods used for emulsion PCR, bead deposition and sequencing on the PGM were as described by the manufacturer. The rationale behind the sequencing direction is to minimize sequencing errors. Because there is slightly more sequence variability towards the 518 end compared to the 341 end (Vasileiadis et al. 2012), and because the beginning of sequencing has fewer errors, sequencing started from the 518R end and moved towards the 341F end (which has less variability). Thus, there was less chance of sequencing errors in the region of the read which has more variability. Paired-end amplicons were not generated during this sequencing run; therefore, only single read sequences were generated and no contig assembly was performed. Sequenced data were deposited in the NCBI Sequence Read Archive under the accession no. SRR6023841. Sequence reads were analysed using the published bioinformatics pipelines UPARSE (drive5.com/uparse/; Edgar et al. 2011), QIIME (qiime.org; Caporaso et al. 2010) and MOTHUR (Schloss et al. 2009). Initial quality control of the generated sequences was performed using the Torrent Suite Software ver. 3.6.2 as outlined by Paz et al. (2016), which included trimming of the 3′ end of sequences that dropped below the average Q15 score over a 30-bp window and removing sequences with unidentified bases (N). Resulting sequences were downloaded from the Torrent Suite and demultiplexed using the QIIME software package (ver. 1.9.1) (Caporaso et al. 2010). During demultiplexing, sequences with an average quality score <25 were removed. Following demultiplexing, universal primers used for sequencing were removed, allowing one mismatch in the 5′ (518R) primer and two in the 3′ reverse primer (341F). Sequences shorter than 130 bp were removed and remaining sequences were trimmed to a fixed length of 130 bp (Paz et al. 2016). Quality trimmed sequences were then reverse complemented, screened for chimeric sequences using UCHIME (Edgar et al. 2011), and preclustered using the pseudo-single linkage-clustering algorithm to remove reads that resulted from sequencing errors (Huse et al. 2010). These sequences were then assigned to operational taxonomic units (OTUs) at 97% similarity using the UPARSE pipeline (drive5.com/uparse/; Edgar 2011). Sequences from each OTU were then subjected to taxonomic classification using the latest version of the Greengenes taxonomy database (gg_13_5) (Wang et al. 2007).
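The read-filtering thresholds described here (mean quality ≥ 25, minimum length 130 bp, trimming to 130 bp, removal of reads containing N, reverse complementing) can be expressed as a simple script. The sketch below is a minimal, generic reimplementation in plain Python for a single FASTQ file, not the authors' actual Torrent Suite/QIIME commands; the file name is a hypothetical placeholder.

```python
# Minimal FASTQ quality filter mirroring the thresholds described above.
# Illustrative only; assumes a simple four-line-per-record FASTQ file.
COMPLEMENT = str.maketrans("ACGTN", "TGCAN")

def phred_scores(quality_line):
    """Convert a Sanger/Phred+33 quality string to integer scores."""
    return [ord(ch) - 33 for ch in quality_line]

def filter_fastq(path, min_mean_q=25, target_len=130):
    kept = []
    with open(path) as fh:
        while True:
            header = fh.readline().rstrip()
            if not header:
                break
            seq = fh.readline().rstrip()
            fh.readline()                      # '+' separator line
            qual = fh.readline().rstrip()
            if len(seq) < target_len:
                continue                       # too short
            if "N" in seq:
                continue                       # unidentified bases
            scores = phred_scores(qual)
            if sum(scores) / len(scores) < min_mean_q:
                continue                       # low mean quality
            trimmed = seq[:target_len]
            revcomp = trimmed.translate(COMPLEMENT)[::-1]
            kept.append((header, revcomp))
    return kept

reads = filter_fastq("run316_demultiplexed.fastq")   # hypothetical file name
print(f"{len(reads)} reads passed filtering")
```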
Based on the taxonomy information, any sequences associated with chloroplasts (from plant origin, and thus, most likely from feed) were filtered and discarded. In addition, representative OTU sequences were aligned to the bacterial 16S rRNA gene using the RDP aligner tool available at Michigan State University (https://rdp.cme.msu.edu/tutorials/aligner/RDPtutorial_ALIGNER.html); and those sequences that did not align with the sequenced region were filtered, thus removing OTU sequences that did not align within the expected region.
The OTU table was rarefied across samples to the lowest sample depth (6295 reads) using QIIME. All statistical analyses were performed with samples at an even depth. Furthermore, beta-diversity plots were generated in QIIME to evaluate differences based on sequence similarities and these plots were visualized with the Emperor visualization programme (Vazquez-Baeza et al. 2013). Moreover, alpha diversity estimators (Chao1 and observed species) and diversity index (Shannon) were evaluated for the overall community using QIIME (Caporaso et al. 2010). Good's coverage test was performed to evaluate if adequate sampling depth was achieved. A Venn diagram was constructed to illustrate the relationship and OTU distribution among treatments. To do so, the venn function in the gplots package of R was used (Warnes et al. 2015).
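For readers who want to see what these diversity summaries compute, the following Python sketch implements rarefaction to an even depth plus the Shannon index, the Chao1 richness estimator and Good's coverage for a single sample's OTU counts. It is a generic illustration using a made-up count vector, not the QIIME code used in the study.

```python
import numpy as np

rng = np.random.default_rng(42)

def rarefy(counts, depth):
    """Randomly subsample an OTU count vector to a fixed depth without replacement."""
    pool = np.repeat(np.arange(counts.size), counts)   # one entry per read
    picked = rng.choice(pool, size=depth, replace=False)
    return np.bincount(picked, minlength=counts.size)

def shannon(counts):
    # Natural-log Shannon index; some tools report it in log base 2.
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

def chao1(counts):
    # Bias-corrected Chao1: S_obs + f1*(f1 - 1) / (2*(f2 + 1)).
    s_obs = np.count_nonzero(counts)
    f1 = np.count_nonzero(counts == 1)     # singletons
    f2 = np.count_nonzero(counts == 2)     # doubletons
    return s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))

def goods_coverage(counts):
    f1 = np.count_nonzero(counts == 1)
    return 1.0 - f1 / counts.sum()

# Hypothetical OTU count vector for one sample.
sample = rng.integers(0, 200, size=1500)
even = rarefy(sample, depth=6295)
print(shannon(even), chao1(even), goods_coverage(even))
```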
A core microbiome, which was defined as those OTUs present in all animals, was calculated. From the core microbiome, taxa were summarized and plots were generated at the taxonomic levels of phylum, class, order, family and genus, with emphasis on representative OTUs that represented at least 0.1% of the microbial community in each sample. To minimize animal to animal variation and to represent the shared OTUs within each diet, the core microbiome was analysed. This allows identification of the microbial community influenced by the treatment, sorting through animal to animal variation. The hypothesis is that, if the treatment affects the microbial community, this effect should be present across multiple animals on the same treatment. Therefore, the analysis of the core microbiome allows identification of effects that might otherwise be hidden in the data (Benson et al. 2010; Castillo-Lopez et al. 2014). Compared to other sequencing platforms such as 454 Roche pyrosequencing (Castillo-Lopez et al. 2014) or Illumina (Castillo-Lopez et al. 2017), the use of Ion Torrent may have some limitations due to sequencing errors (Frey et al. 2014; Salipante et al. 2014). However, the quality control steps outlined by previous researchers, such as initial quality control using the Ion Torrent Suite Software, screening for chimeric sequences and preclustering using the pseudo-single linkage-clustering algorithm, were specifically aimed at removing erroneous reads. Moreover, the core microbiota analysis, where only OTUs that were present in all animals were considered, should also filter out many OTUs generated due to random sequencing errors, as these would not be expected to occur in all animals.
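The core-microbiome definition used here (OTUs detected in every animal) reduces to a simple presence intersection over an OTU-by-sample table. The snippet below is an assumed, minimal pandas version; the table, column names and abundance threshold are hypothetical placeholders rather than the study's data.

```python
import pandas as pd

# Hypothetical OTU table: rows = OTUs, columns = samples (one per cow).
otu = pd.DataFrame(
    {"cow1": [120, 0, 15, 3], "cow2": [90, 2, 7, 0],
     "cow3": [300, 1, 22, 5], "cow4": [75, 4, 9, 1]},
    index=["OTU_1", "OTU_2", "OTU_3", "OTU_4"],
)

# Core microbiome: OTUs with nonzero counts in every sample.
core = otu[(otu > 0).all(axis=1)]

# Relative abundance, keeping only core OTUs that reach at least 0.1%
# of the community in each sample (mirroring the emphasis in the text).
rel = core / otu.sum(axis=0)
abundant_core = rel[(rel >= 0.001).all(axis=1)]
print(abundant_core)
```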
Statistical analysis
Data collected on the abundance of bacterial phyla, families and genera, and sample bacterial richness (Chao1 and observed species) as well as diversity index (Shannon) were analysed using the MIXED procedure of SAS (ver. 9.1; SAS Institute Inc., Cary, NC). Fixed effects included the treatment and period, with cow as the random effect. The statistical model for this experiment was as follows:

Y_ijk = μ + b_i + ρ_j + α_k + e_ijk,

where Y_ijk represents observation ijk, μ represents the overall mean, b_i represents the random effect of cow i, ρ_j represents the fixed effect of period j and α_k represents the fixed effect of treatment k. The residual term e_ijk was assumed to be normally, independently and identically distributed, with variance σ_e^2. The comparison of treatment means was conducted using the PDIFF option in the LSMEANS statement. In addition, using the CONTRAST statement, CONT was compared to FLAX+EXT+EXTT, and CONT was compared to each of the other treatments. Furthermore, FLAX was compared to EXT, and EXT was compared to EXTT. Treatment means are presented as least squares means. The largest standard error of the mean (SEM) is reported. Statistical significance was declared when P ≤ 0.05 and tendencies were discussed when P > 0.05 and ≤ 0.10. The Spearman's correlation analysis was conducted to evaluate associations between dietary composition (content of fat, unsaturated fatty acids and tannins) and bacterial families and genera.
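The SAS MIXED analysis has a close analogue in Python's statsmodels; the sketch below fits the same fixed effects (treatment and period) with a random intercept per cow, loosely mirroring Y_ijk = μ + b_i + ρ_j + α_k + e_ijk, and then computes a Spearman correlation as in the dietary-association analysis. The data frame, its values and the column names are invented for illustration, not the study's data.

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import spearmanr

# Hypothetical long-format data: one row per cow x period observation,
# with treatments arranged as a 4 x 4 Latin square.
df = pd.DataFrame({
    "cow":       ["c1", "c2", "c3", "c4"] * 4,
    "period":    sum([[p] * 4 for p in (1, 2, 3, 4)], []),
    "treatment": ["CONT", "FLAX", "EXT", "EXTT",
                  "FLAX", "EXT", "EXTT", "CONT",
                  "EXT", "EXTT", "CONT", "FLAX",
                  "EXTT", "CONT", "FLAX", "EXT"],
    "abundance": [2.1, 2.4, 2.2, 2.0, 2.3, 2.6, 2.1, 2.2,
                  2.0, 2.5, 2.4, 2.1, 2.2, 2.3, 2.0, 2.4],
})
# Assumed dietary fat content (% DM) per treatment, for the correlation step.
df["diet_fat"] = df["treatment"].map(
    {"CONT": 3.1, "FLAX": 5.8, "EXT": 5.9, "EXTT": 5.9})

# Mixed model: fixed effects for treatment and period, random intercept per cow.
model = smf.mixedlm("abundance ~ C(treatment) + C(period)",
                    data=df, groups=df["cow"])
fit = model.fit()
print(fit.summary())

# Spearman rank correlation between a dietary component and taxon abundance.
rho, p_value = spearmanr(df["diet_fat"], df["abundance"])
print(f"Spearman rho = {rho:.2f}, P = {p_value:.3f}")
```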
In addition, bacterial community composition differences were evaluated using the weighted UniFrac distance matrix as an input for a permutational multivariate analysis of variance (PERMANOVA) in R using the vegan package (adonis function) (Oksanen et al. 2015), where treatment was used as main effect.
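An equivalent PERMANOVA can be run in Python with scikit-bio instead of R's vegan adonis function; the example below assumes a precomputed weighted-UniFrac distance matrix and is a generic sketch in which the sample IDs, distances and treatment labels are all invented.

```python
import numpy as np
from skbio.stats.distance import DistanceMatrix, permanova

# Hypothetical symmetric weighted-UniFrac distances among four samples.
ids = ["c1_p1", "c2_p1", "c3_p1", "c4_p1"]
dm = DistanceMatrix(np.array([[0.00, 0.35, 0.40, 0.38],
                              [0.35, 0.00, 0.32, 0.41],
                              [0.40, 0.32, 0.00, 0.36],
                              [0.38, 0.41, 0.36, 0.00]]), ids)

grouping = ["CONT", "FLAX", "CONT", "FLAX"]   # treatment label per sample
result = permanova(dm, grouping, permutations=999)
print(result)   # pseudo-F statistic and permutation P value
```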
Diets, milk yield and composition
The inclusion of the flaxseed-based products in diets resulted in significant changes (P < 0.05) in chemical composition; the per cent of unsaturated fatty acids increased by approximately 4%; replacement of field peas with high-tannin faba beans resulted in a content of condensed tannins of 1.17 mg g−1 in EXTT. It is important to note that the level of condensed tannins decreased by 83% after extrusion (from 6.87 to 1.17 mg g−1).
Condensed tannins were not detected in CONT, FLAX or EXT, as was anticipated based on the ingredient composition of the flaxseed supplements. Production performance data and milk fatty acid composition have been reported (Moats et al. 2015). Briefly, there was a significant decrease (P < 0.05) in DMI when EXT was fed compared to CONT (25.9 and 23.4 kg, respectively); fat corrected milk, however, was not affected and averaged 40.5 kg. In addition, significant changes (P < 0.05) were observed in milk fatty acid profile; for example, when the flaxseed-based products were fed, the proportion of total saturated fatty acids decreased by 10.8%, the proportion of total polyunsaturated fatty acids increased by 0.60% and the concentration of Omega-3 increased by 0.51%. Nonetheless, no significant effects of treatment were observed on ruminal pH (6.03), ruminal digestibility of dry matter (38.4%), organic matter (40.7%) and neutral detergent fibre (35.7%).
Number of sequences, sample richness, diversity index and OTU distribution
Collectively, a total of 356 709 high-quality DNA sequences were obtained after initial quality control and filtering, and were used for downstream analysis. Diversity metric Chao1 was not affected by the inclusion of the flaxseed-based products (P ≥ 0.38); observed species (P ≥ 0.34) and the Shannon diversity index (P ≥ 0.23) remained unaffected as well (Table S1). The Good's coverage test showed that sequencing depth was able to characterize >98% of the bacterial community.
The Venn diagram for OTU distribution revealed that each treatment showed a number of unique OTUs. However, there were 1285 OTUs shared by the four diets, which represented 69.16% of total OTUs detected. In addition, according to the beta diversity for sequence similarities (Fig. S1) based on principal coordinate analysis, there appear to be two clusters and two sample outliers; however, no apparent clustering of microbial communities by treatment was found, indicating a similar spatial sample heterogeneity among the diets. The bacterial community analysis using PERMANOVA did not display a significant (P = 0.20) effect of treatment on bacterial community composition.
Correlation coefficients
Correlation analysis between dietary components (content of fat, unsaturated fatty acids and tannins) and bacterial families and genera (Table 6) revealed that the content of dietary fat tended (P ≤ 0.09) to be negatively correlated with the abundance of the families BS11, Christensenellaceae and Clostridiaceae, and was negatively correlated (P < 0.05) with the genus Coprococcus, but was positively correlated with unclassified Veillonellaceae (P = 0.02).
Dietary unsaturated fatty acid content tended (P = 0.07) to be negatively correlated with the family Clostridiaceae, but was positively correlated (P ≤ 0.04) with the genera unclassified Veillonellaceae and Schwartzia. Dietary tannin content tended (P ≤ 0.08) to be negatively correlated with the families BS11, Paraprevotellaceae and Christensenellaceae, but was positively correlated (P < 0.01) with WCHB1-25; in addition, tannin content tended to be negatively correlated (P = 0.09) with the genus Prevotella, but was positively correlated (P ≤ 0.02) with Oscillospira and Bulleidia.
Discussion
The gut microbial population influences physiology, metabolism, nutrition and immune function, with disruption of this community being linked to gastrointestinal conditions (Guinane and Cotter 2013; Ridaura et al. 2013). In ruminants, gut microbes represent a source of metabolizable protein (Spicer et al. 1986; NRC 2000), they play an essential role in volatile fatty acid
production and feed digestion (McAllister et al. 1994) and milk composition (Jami et al. 2014). Moreover, the bacterial community is responsible for fatty acid biohydrogenation (Jenkins et al. 2008). Therefore, to effectively develop feeding strategies to enhance production performance or quality of dairy products, researchers must understand the effects of diet composition or biohydrogenation-mitigating strategies on the broad ruminal microbiome in vivo. Given the laborious nature of studies involving the evaluation of ruminal fermentation and the rumen microbial community, some of the experiments have been conducted using small numbers of animals (Lillis et al. 2011; Boots et al. 2013; Mohammed et al. 2014; Denman et al. 2015) through the Latin square design, which is commonly used in cattle nutrition studies (Lillis et al. 2011; Boots et al. 2013), mostly because it is efficient as it generates replication with limited experimental units. However, it should be noted that the potential for carryover effects is one limitation of the design, especially when evaluating the effects of plant secondary compounds on the microbial community because of their effects on animal physiology and metabolism (Dearing et al. 2005). In addition, once ruminal micro-organisms
have been treated with adverse plant dietary products, they may no longer be naïve and their reaction may be dampened on successive treatments (Kohl and Dearing 2016). In this study, the experiment was designed with 28-day periods in an attempt to minimize potential carryover effects, with the first 26 days serving as a 'washout' period and the final 2 days serving for collection of ruminal digesta for bacterial community analysis.
Increasing the size of the study would improve the experimental precision (Stroup 1999; Kononoff and Hanford 2006). However, despite being relatively small, the use of the Latin square experimental design in this study allowed detection of important and statistically significant differences in rumen fermentation and microbial analysis, as in previous reports using the same design (Lillis et al. 2011; Boots et al. 2013; Ramirez Ramirez et al. 2016a,b).
Feeding flaxseed or flaxseed-based products to dairy cows and effects on the overall rumen bacterial community

Feeding flaxseed has been shown to improve milk fatty acid profile without affecting milk production (Oeffner et al. 2013). Current advances in the dairy feeding industry are spurring development of new flaxseed-based feed ingredients to enhance milk fatty acid profile, and the effects of those products on the broad rumen bacterial community of dairy cows must be clearly elucidated. Regardless of treatment, predominant ruminal bacteria agree with previous reports (Petri et al. 2012) showing that the major bacterial phyla in the rumen of cattle are Bacteroidetes, Firmicutes and Proteobacteria. Collectively, these phyla represented approximately 96% of the rumen bacterial community in the current study. The influence of diet on the diversity and community composition of ruminal contents has long been recognized (Tajima et al. 2001; Fernando et al. 2010). In this experiment, when compared to the normal diet, the inclusion of raw flaxseed or extruded flaxseed with or without high-tannin faba beans did not cause drastic shifts in the abundance of major ruminal bacterial phyla. Contrasting these observations, Kong et al. (2010) used quantitative fluorescence in situ hybridization and found that inclusion of flaxseed reduced the total abundance of the phyla Bacteroidetes, Firmicutes and Proteobacteria in the rumen of cows fed silage-based diets. The discrepancies between these observations may be related to the chemical composition of the diets and the available substrates for microbial growth. In our study, the main dietary changes involved the increase in ether extract and polyunsaturated fatty acids in diets with flaxseed inclusion; in addition, there was a change in the physical processing of flaxseed among diets, with a constant forage base. Thus, substrate availability for bacterial fermentation was relatively similar across treatments, suggesting that the distribution of bacterial phyla and other major taxa may be resilient to changes in the physical form of feeds when dietary fat does not exceed 6% in the diet.
Taxonomic analyses at the family and genus levels agree with previous findings using DNA pyrosequencing indicating that the ruminal microbiome is largely composed of the bacterial families Prevotellaceae, Lachnospiraceae and Ruminococcaceae and the genera Prevotella, Succiniclasticum and Ruminococcus (Castillo-Lopez et al. 2014). Prevotella, the largest bacterial genus detected, is composed of versatile organisms that can utilize a variety of nutrients including protein, starch, pectins and hemicellulose (Russell 2002), and has also been reported to predominate in the rumen of cattle being fed forage-based diets supplemented with corn distillers grains (Ramirez Ramirez et al. 2016a,b). In agreement with Kong et al. (2010), this experiment indicated that the fibre digesting genus Fibrobacter accounted for a minor fraction of the bacterial communities across diets. Overall, inclusion of flaxseed-based products did not affect predominant bacterial families and genera, and only affected taxa found in lower proportions, which could partially explain the lack of negative effects on
ruminal digestibility of DM, organic matter and neutral detergent fibre.
Effect of unsaturated fatty acids, extrusion and dietary tannins on rumen bacterial community structure and function
Dietary strategies to improve the fatty acid profile of ruminant products have included feeding unsaturated fatty acids, feed extrusion or supplementation with tannins. Although effects on some bacterial taxa (Vasta et al. 2010; Enjalbert et al. 2017) or production performance have been acknowledged, the impact of such strategies on the broad ruminal bacterial communities of dairy cows in vivo is yet to be clearly elucidated.
Ruminal bacteria, especially fibrolytic bacteria, may be negatively affected by dietary fat (Maia et al. 2006; Enjalbert et al. 2017). For example, Maia et al. (2010) reported that unsaturated fatty acids decreased the abundance of Butyrivibrio fibrisolvens in vitro. In addition, negative effects of dietary linseed oil have been reported on the genera Fibrobacter, Prevotella and Ruminococcus (Huws et al. 2014; Enjalbert et al. 2017). In the present experiment, the 4% increment in dietary unsaturated fatty acids was not accompanied by a decrease in the abundance of these genera. This may indicate that the increase in dietary fat and unsaturated fatty acids was not severe enough to exert negative impacts on those taxa; in agreement with this observation, no negative effects were detected on ruminal fibre digestion across treatments (Moats et al. 2015).
Feed extrusion has been applied to decrease fatty acid saturation because heat denatures the protein matrix surrounding the fat droplets, consequently reducing the access of ruminal bacteria to dietary fat (Kennelly 1996). Moreover, Vasta et al. (2007) and Schofield et al. (2001) suggested that tannins may inhibit the activity of biohydrogenating bacteria because tannins can interfere with bacterial growth. Within the bacterial population residing in the rumen, a number of bacteria that participate in fatty acid saturation have been identified, which include bacteria belonging to the genera Pseudobutyrivibrio et al. 1993) and Lactobacillus (Jenkins et al. 2008; Sakurama et al. 2014). In the present experiment, the abundances of the genera Selenomonas and Butyrivibrio were similar across treatments. However, Buccioni et al. (2014) and Vasta et al. (2010) reported an increase in B. fibrisolvens and a decrease in Butyrivibrio proteoclasticus in the rumen of sheep supplemented with quebracho tannins. This suggests that biohydrogenating bacterial species within the same genus show different degrees of sensitivity to dietary tannins (Nelson et al. 1997; Schofield et al. 2001). In this study, we did not evaluate bacterial species; however, it is possible that the lack of an effect of EXTT on most bacterial taxa may have been due to the lower tannin concentration compared to Vasta et al. (2010), who utilized quebracho-supplemented diets containing 6.4% tannins. It is important to note that the content of tannins in the extruded product containing high-tannin faba beans was lower than expected; the 83% decrease in tannin content of the extruded product may have been caused by high temperature during the extrusion process (Iram et al. 2014). Thus, it may be beneficial to evaluate extrusion techniques to minimize the loss of tannins in supplements designed for ruminant diets.
A negative correlation does not necessarily indicate a direct cause-effect relationship; however, the negative association found between some bacterial taxa and the content of dietary fat, unsaturated fatty acids or tannins may indicate high sensitivity to these dietary components. For example, the negative correlation between the abundance of BS11 and dietary fat may be due to toxic effects of fat on members of this bacterial family (van Lingen et al. 2017). Likewise, high sensitivity to tannins has been reported for Prevotella belonging to Paraprevotellaceae (Li et al. 2015).
Interestingly, when feeding extruded flaxseed to dairy cattle, the content of α-linolenic acid in blood serum and milk tended to increase (Kennelly 1996; Oeffner et al. 2013), and when cows consumed treatments containing the flaxseed-based products utilized in this experiment there was an increase in the concentration of omega-3 and total polyunsaturated fatty acids in omasal digesta and in milk (Moats et al. 2015). Bacterial species were not evaluated in this study; thus, we are unsure whether biohydrogenating micro-organisms were negatively affected. The family Christensenellaceae, which decreased with flaxseed inclusion, has been recently associated with low body mass index and reduced adiposity gain in nonruminants (Goodrich et al. 2014). When cows consumed the extruded flaxseed-based product, body weight was not affected (Moats et al. 2015). Further investigation elucidating the activity and role of members of this bacterial family in the rumen and how they may impact production performance or fatty tissue accretion in dairy cattle is warranted.
In this experiment, the microbial profile of the diets fed was not determined. Reports have shown that bacteria found in the diet could potentially affect ruminal micro-organisms (Ghorbani et al. 2002; Lettat et al. 2010), while others have reported that the survival of some of these bacteria in the rumen is variable (Jeyanathan et al. 2016). More recently, Philippeau et al. (2017) reported that direct-fed microbials did not affect ruminal micro-organisms or volatile fatty acid production. Therefore, it was not possible to determine associations between micro-organisms found in the diets, if any, and changes in the rumen microbial profile.
Overall, findings from this study indicate that the flaxseed-based products tested were effective for replacing barley-based concentrate in lactating dairy rations without negative effects on predominant rumen bacterial taxa. However, the contents of unsaturated fatty acids and tannins in the diets were negatively associated with some bacterial taxa found in lower proportions in the rumen, such as Clostridiaceae, BS11, Paraprevotellaceae and Christensenellaceae; nonetheless, production performance and ruminal nutrient digestion were unaffected. The use of high-throughput DNA sequencing helps to unravel the impact of diet composition on ruminal micro-organisms, strengthening our knowledge not only of dietary intervention methods to mitigate fatty acid biohydrogenation and improve milk quality, but also of how to prevent negative consequences on the ruminal bacterial population, feed digestion and rumen function.
Supporting Information
Additional Supporting Information may be found in the online version of this article: Figure S1 Beta diversity for bacterial communities in ruminal digesta samples for treatments CONT, a normal diet including barley silage, alfalfa hay and a barley-based concentrate with no flaxseed or faba beans; FLAX, inclusion of 11.4% of a nonextruded flaxseed-based product containing flaxseed, field peas and alfalfa; EXT, similar to FLAX, but the product was extruded; EXTT, similar to FLAX, but product was extruded and field peas were replaced by high-tannin faba beans.
Table S1 Effect of partially replacing a barley-based concentrate with different flaxseed-based products on bacterial richness estimates and diversity index for ruminal digesta samples from lactating Holstein dairy cows. | 2018-04-03T04:48:27.433Z | 2018-01-01T00:00:00.000 | {
"year": 2018,
"sha1": "52fb8998f6180721dc62b0927004185dfb5a7a5e",
"oa_license": "CCBY",
"oa_url": "https://sfamjournals.onlinelibrary.wiley.com/doi/pdfdirect/10.1111/jam.13630",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "f54b7e102e9e05433c7067b939d54075270f475c",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
248484507 | pes2o/s2orc | v3-fos-license | Traumatic Bilateral Brachial Plexus Injury
Traumatic brachial plexus injuries are serious, life-changing injuries that are becoming more common worldwide. A thorough physical examination, as well as radiologic and electrodiagnostic tests, are all part of the initial evaluation. Parameters such as injury patterns, the timing of intervention, patients' expectations, and pre-injury functional level should always be considered. A bilateral brachial plexus injury is a very uncommon occurrence. To our knowledge, only one case of a bilateral brachial plexus injury associated with trauma has been published in recent literature. We present a rare case of a 19-year-old man who sustained a bilateral brachial plexus injury after a motorbike accident. The patient underwent exploration of the left brachial plexus and a modified Oberlin procedure on his left arm. The right plexus injury was managed conservatively. After a follow-up period of 12 months, the patient completely returned to his previous functional level.
Introduction
The brachial plexus is a complex neural network that innervates the arm, shoulder, and upper chest via motor and somatosensory nerves. The brachial plexus is typically made up of the ventral rami of the C5-T1 spinal nerves. These ventral rami represent the roots of the plexus that combine to generate three trunks, six divisions, three cords, and five terminal motor/sensory branches to the upper extremities. In around half of the cases, however, the brachial plexus anatomy varies from this typical pattern [1]. As a result, different injury patterns can lead to diverse neurological abnormalities. A bilateral brachial plexus injury is a rare entity, with cases reported in the literature associating this condition with prolonged use of crutches, intraoperative shoulder bracing, and complications during childbirth. However, only one case of bilateral traumatic plexopathy has been reported [2]. In an attempt to elucidate the potential mechanisms and demonstrate our therapeutic approach, we present an unusual case of a 19-year-old man who incurred a bilateral brachial plexus injury after a motorcycle accident.
Case Presentation
The patient, a 19-year-old man, was transferred to the Accident and Emergency department of our hospital after a motorbike accident. He was a non-smoker, with no medical comorbidities. He was initially managed according to the Advanced Trauma Life Support (ATLS) guidelines. At initial evaluation, the patient was unable to move his shoulders, elbows, and wrists. Contraction was present on the elbow and wrist flexors and extensors on both upper limbs, with a Medical Research Council scale (MRC scale) score of 1/5 on both arms. Finger flexion and extension were partially limited on both arms with an MRC scale score of 3/5 bilaterally. The patient was maintaining sensation in both upper extremities and the vascular system was intact. Swelling was present on the right forearm, and a 10 cm laceration on the medial side of the patient's left cubital fossa was identified. Initial X-rays showed a right forearm fracture classified as 22C1 according to the Müller AO classification (Figure 1A) [3]. There were no other concomitant injuries, and the patient was admitted to the orthopedic ward. The next day, open reduction and internal fixation of the right forearm fracture were performed (Figures 1B-1C), with no nerve injury identified intraoperatively. The ulnar incision was partially closed using a fasciotomy equivalent because of excessive edema. The left cubital fossa trauma was irrigated, debrided, and closed at the same time, without detecting any neurovascular damage. Postoperatively, no improvement in the MRC scale in both upper limbs was identified. On the fourth postoperative day, the patient was rescheduled for ulnar trauma closure and a thorough surgical exploration was performed. No signs of nerve injury were observed. A cervical spine and bilateral brachial plexus MRI with intravenous contrast injection revealed swelling of the C6, C7, and C8 roots of the left brachial plexus with possible disruption of their route and hematoma in the surrounding tissues (Figure 2A). Thickening of the C6, C7, and C8 nerves was identified on the right brachial plexus (Figure 2B). The right plexus injury was managed conservatively. The left brachial plexus deficit was surgically treated one month after the initial injury. The patient was placed in a supine position with the arm in lateral abduction and a supraclavicular incision was performed (Figure 3). A nerve stimulator (Vari Stim® III, nerve locator, Medtronic Xomed, Inc., Jacksonville, FL) was used during the procedure to assess proper nerve identification and function. Exploration at the supraclavicular area revealed only swelling of the brachial plexus roots with no signs of rupture. External neurolysis was performed in the upper and middle trunks (Figure 4A). A modified Oberlin procedure was performed for restoration of elbow flexion [4]. A longitudinal incision was made 4 cm distal to the humeral insertion of the pectoralis major tendon, along the medial aspect of the left arm. Between the biceps and coracobrachialis muscles, the musculocutaneous nerve was reached and the nerve branches to the biceps and brachialis were recognized. Some contraction of the biceps was identified using the nerve stimulator but contraction of the brachialis muscle was absent. The decision was to neurotize only the branch to the brachialis muscle. The ulnar nerve was dissected at the same level and two ulnar nerve fascicles that innervated the flexor carpi ulnaris were dissected.
The brachialis motor branch was dissected, and its distal end was twisted medially toward the ulnar nerve. Two ulnar nerve fascicles were chosen and isolated from the rest of the ulnar nerve and then rotated laterally and sutured to the brachialis motor branch with nylon 9-0 suture (Figures 4B-4C). There were no postoperative complications and the patient was discharged four days after surgery. An intensive rehabilitation program began from the first postoperative week, including range-of-motion exercises of the fingers along with transcutaneous electrical nerve stimulation (TENS). After a period of three months, muscle movement against gravity without any resistance (3/5 on the MRC scale) was present on both upper limbs. At the four-month follow-up, the patient's MRC scale score was 4/5. Finally, at the 12-month follow-up, unrestricted range of motion was present (5/5 on the MRC scale) on both upper limbs. The patient was able to return to his pre-injury level of function (Figure 5) with a Disabilities of Arm, Shoulder, and Hand (DASH) score of 2.5 at that time.
Discussion
Brachial plexus injuries are uncommon, with a prevalence of 1.2% in multi-trauma patients [5]. The Swiss study by Narakas [6], who described the 'rule of 7x70%', has traditionally been the standard reference for the etiopathogenesis of brachial plexus injuries. Seventy percent (70%) of brachial plexus lesions were associated with motorcycle or bicycle accidents. Seventy percent (70%) of the cases were polytrauma patients presenting with supraclavicular lesions. At least one root avulsion was identified, with avulsions mostly affecting the lower plexus. Chronic pain was present in 70% of the patients. From an initial dominance of open injuries to the current majority of closed lesions, the etiology of brachial plexus injury has changed. Direct pressure, traction, compression, recurrent microtrauma, and compression- or stretch-induced ischemia are some of the potential mechanisms. The main cause of closed injuries is motorcycle accidents, while Jain et al. mentioned that the dominant limb is affected more frequently [7]. When the limb is along the chest and the force acts from above, supraclavicular upper brachial plexus injuries are more likely. The elements of the lower plexus are at risk if the arm is aggressively dragged into abduction and extension. Complete lesions are caused by significant forces or a combination of numerous forces. The upper roots have a significantly higher proclivity for rupture since they are protected from avulsion by dural sleeves and fibrous connections. On the contrary, lower plexus injuries are generally caused by the avulsion of C8-T1 roots [8]. Motor roots are more prone to rupture because they are thinner than sensory roots [9]. Although brachial plexus injuries have been well-described, traumatic bilateral brachial plexus lesions remain a rare and elusive condition. Ramdass et al. reported a case of bilateral traumatic brachial plexus injury in a 33-year-old patient after a motorbike accident [2]. In their case, the possible mechanism of injury was traction of the head and forced lateral flexion of the neck. The patient was treated conservatively. Although there was some gradual improvement, the patient retained a characteristic posture with both arms hanging and internally rotated. Bilateral brachial plexus neuropathy has been associated with non-traumatic conditions such as prolonged use of crutches [10], direct plexus pressure caused by immobilization during surgical interventions [11][12][13], and complications during childbirth [14]. In our case, traction, compression, and direct forces applied to the patient's shoulder and neck at the time of injury were potential causes of traumatic bilateral plexus lesions. Meanwhile, the patient's position after the accident and until the time of extrication may have predisposed him to bilateral plexus neuropathy caused by direct pressure.
In our case, there was high clinical suspicion of bilateral plexus injury based on the patient's clinical presentation. The MRI revealed multiple root swellings on both sides, with possible disruption in the left brachial plexus. Edema in the right plexus roots was less significant and a gradual improvement in the patient's right upper limb muscle strength was observed (from 1/5 to 2/5 on the MRC scale) one month after the injury. Thus, we decided to manage the right plexus injury conservatively. No signs of recovery were identified in the left upper limb one month after the initial injury, with an MRC scale score for elbow and wrist flexors and extensors of 1/5. Patients suffering from stretch and blunt injuries of the brachial plexus are initially treated conservatively. A three-month period of observation can reveal evidence of muscle regeneration. Surgery is recommended in the absence of spontaneous healing after three months [15]. Although our patient was fully informed about the possibility of spontaneous healing after a three-month period, he was unwilling to wait. Based on the patient's age and expectations for earlier results, along with no signs of recovery at one month, we decided to operate earlier than indicated. External neurolysis and nerve transfer at the left brachial plexus offered the patient an excellent result, and he was able to return to his previous level of activity.
Conclusions
A bilateral brachial plexus injury is a very uncommon but extremely significant medical condition. A high level of suspicion should arise when managing polytrauma patients, especially after motor vehicle accidents. Meticulous clinical examination, MRI, and electrodiagnostic tests are the main diagnostic tools. Understanding the etiopathogenesis and mechanisms behind this complex entity can help in choosing the most appropriate treatment.
Additional Information
Disclosures
Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work. | 2022-05-02T15:02:59.395Z | 2022-04-01T00:00:00.000 | {
"year": 2022,
"sha1": "94d6eeeca69aa660bb750154ff5000c5042b0acc",
"oa_license": "CCBY",
"oa_url": "https://www.cureus.com/articles/93971-traumatic-bilateral-brachial-plexus-injury.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "253f5ec929b25355dc1975137d44a77b6a337da7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
227178764 | pes2o/s2orc | v3-fos-license | Vulnerable learners in the age of COVID-19: A scoping review
This scoping review provides an overview of COVID-19 approaches to managing unanticipated school closures and available literature related to young people learning outside-of-school. A range of material has been drawn upon to highlight educational issues of this learning context, including psychosocial and emotional repercussions. Globally, while some countries opted for a mass school shut-down, many schools remained open for students from disadvantaged backgrounds. This partial closure not only enabled learning in smaller targeted groups but also offered a safe sanctuary for those who needed a regulated and secure environment. In Australia, if full school closures were to be enforced over a long period, a significant proportion of students from more vulnerable backgrounds would likely experience persistent disadvantage through a range of barriers: long-term educational disengagement, digital exclusion, poor technology management, and increased psychosocial challenges. This scoping review combines research on technology availability and learning, with analysis of the long-term educational impacts of navigating the COVID-19 disruption.
Introduction
The coronavirus pandemic has created huge worldwide repercussions, impacting economies, health sectors, and education systems, as noted by the United Nations Educational, Scientific and Cultural Organization (UNESCO 2020a,b,c). To address the significant disruption to learning and educational environments, UNESCO (2020b) swiftly developed ten key recommendations to ensure that learning across the globe remained relatively uninterrupted during the COVID-19 crisis. These recommendations span the whole learning sphere, including the well-being and educational needs of learners, as well as the emotional health of educators and the need for common directions/guidelines for educational institutions. Nationally, the Australian federal government responded to calls for mass school closures by commissioning research to inform the community of the effects on vulnerable young people. The way in which various countries and educational systems responded to this global pandemic provided the impetus for this scoping review, with particular reference to how mass school closures could impact our most vulnerable learners. The responses to the pandemic have been evolving and fluid; however, what has become clear is that the COVID-19 pandemic has exacerbated the multiple and profound educational divisions that already exist globally (Beaunoyer et al. 2020; Chandra et al. 2020). This article will consider these divisions and also reflect upon strategies that were employed during school closures to counter the long-term and detrimental consequences arising from this health and economic crisis (Beaunoyer et al. 2020; Chandra et al. 2020).
In an attempt to contain the COVID-19 virus and reduce its spread globally, 191 countries instigated nationwide school closures (UNESCO 2020c), affecting approximately 91.3% of students, around 1.5 billion students worldwide. In response to these school closures, many countries adopted online modalities to ensure continuity in learning; however, this adjustment heralded growth in concerns over how students from socially and financially disadvantaged backgrounds would be educationally, emotionally, and socially impacted by online learning. Questions arose regarding the best ways to support vulnerable students to continue their education. This scoping review serves as an informed basis to understand the educational impacts of the disruption caused by COVID-19, specifically for young people who may be living in disadvantaged contexts within Australia. In a situation characterised by constant change, this scoping review provides a baseline of the literature and research moving forward, synthesising currently available information from a range of sources. This article commences a dialogue about how unforeseen and global crises might be managed moving forward. In saying that, the authors are mindful that drawing recommendations from a transnational organisation such as UNESCO can be problematic. Any policy borrowing or lending that results in an unproblematic application of policy to differing educational contexts needs to be challenged, as policy should always be informed by the 'local' rather than the global, drawing upon place-based knowledges rather than general assumptions (Portnoi 2016). However, at the time of writing this paper, the UNESCO report provided the most accessible macro-level analysis of how to address the educational challenges engendered by COVID-19. This review marks a starting point from which informed conversations can evolve, a means to consider both what is already known about educational differences across student populations as well as the ways in which policy decisions to manage COVID-19 disruption influenced teaching and learning contexts. The review highlights the major implications of being educationally at risk in Australia and details the issues impacting our more vulnerable student populations within the existing national school system. It explores possible consequences for vulnerable students of a mass school closure across Australia. The strategies and approaches adopted by other countries to manage these issues will also be outlined, including emerging good practice in what became the first global mass school closure of its time.
In Australia, certain individuals or groups experience vulnerability; this varies according to personal context and depends on the circumstances surrounding the individual. Notably, the social and economic conditions in which people live, learn, work, and play are the social determinants of health and educational disadvantage (Reay 2017). According to research, the more individuals are exposed to different forms of disadvantage, both social and material, the poorer the health and developmental outcomes, especially related to education (Goldfeld et al. 2018). Young people who have been exposed to more disadvantage than their peers have been described in terms of vulnerability (Arora et al. 2015). Vulnerability can result from forms of stratification and can be further contextualised in terms of the unfulfilment of basic rights (Skinner et al. 2006). Notably, receiving an education is one of the fundamental human rights for all children (UNGA 1959). For this paper, the term vulnerability has been deliberately chosen over disadvantage to recognise the multiple external factors that can impact on the life course of an individual learner.
Vulnerability is a multidimensional construct that is "embedded in complex social relations and processes" (Hilhorst and Bankoff 2006, p. 5). Therefore, vulnerability positions individuals in relation to each other, within broader systems of social disadvantage. For example, social vulnerability refers to the resilience of communities when confronted by external stressors such as the complex and cascading effects from COVID-19 disruption. It involves varying levels of access to resources such as information, knowledge, and technology, in order to prepare for, cope with, and recover from external stressors. A large contributor to social vulnerability is social stratification, particularly evident in the Australian education system where students from more materially wealthy backgrounds tend to go to high fee-paying schools (Perry and McConney 2013). Conversely, low-fee-paying schools are often less well resourced and may have student populations that encounter varying levels of economic vulnerability with low or limited household incomes (Perry 2018). Such economic vulnerability can also have detrimental impacts on educational outcomes, leading to criticisms that broad-based testing, such as PISA, is not an educational measure but an economic one (Niyozov and Hughes 2019).
Being mindful of context is key to understanding the specific needs of communities that are materially disadvantaged. Factors such as overcrowding, poor health, difficulties with community safety, and unemployment may impact more significantly on learners within low socioeconomic status (SES) communities (Griggs et al. 2008;Pinoncely 2016). Indicators of vulnerability must then consider a diversity of issues including the difficulties associated with coping with change, being unfamiliar with the cultural or social capitals valued in mainstream educational settings or experiencing lower levels of community interconnection, trust, and resource sharing (Alwang et al. 2001;Larsen 2013). Such risk factors can entrench communities in poverty and social disadvantage (Vinson et al. 2015). As a result, vulnerable young people can be exposed to an increased range of social, emotional and behavioural issues (Edward and Baxter 2013) which negatively impact on the ways in which these learners navigate and engage with the learning environment, when compared to their higher SES peers (Edward and Baxter 2013). The challenges wrought by COVID-19 have only added to the already identified risks for vulnerable young people.
Importantly though, when considering social contexts and different communities, it is vital not to slip unintentionally into discourses of deficit. As Oikonomidoy (2015) explains, if we focus only on 'existing macro-level categories' (p. 110) such as race, gender, or class, in explorations of human behaviour, then we run the risk of assuming a level of powerlessness or interdependence in individual actors. While vulnerable young people may be more at risk educationally, it is important not to problematise the individual or assume that this is collectively a group in need of assistance. Instead, theorists such as Yosso (2005) and Lareau (2011) identify how diverse forms of capitals and strengths both inform and underpin actions in different community settings. For example, Yosso refines and expands Bourdieuian notions of cultural capital to propose a strengths-based model (Community Cultural Wealth) that identifies forms of capital that are often unrecognised within the educational landscape, yet arguably provide rich foundations that students can build upon to enact success. Unfortunately, such cultural strengths may be undervalued in most formal educational settings, which creates perceptions of learners as deficient in requisite skills and needing to be 'filled up' with appropriate knowledges, a process that essentially ignores or undervalues the existing 'experiential capitals' (O'Shea 2018) held by individuals, regardless of background or wealth.
Research across the Australian educational sector has indicated that a "cycle of intergenerational disadvantage" can be seen repeating itself in the lives of many young people from low socioeconomic backgrounds (Mission Australia 2017, p. 1). Family characteristics of each student within individual schools are recorded on enrolment and contribute to the Index of Community Socio-Educational Advantage (ICSEA) for each school. Illustratively, the school ICSEA indicates how socially segregated Australian schooling has become (Kenway 2013;Perry 2018;Perry and McConney 2013). The ICSEA is calculated using Australian Bureau of Statistics (ABS) data and draws on education, occupation, income, ethnicity, and location of student household (Australian Curriculum, Assessment and Reporting Authority 2015). Nationally, the mean ICSEA score is 1000, and one standard deviation from the mean is equal to 100. Schools with extreme disadvantage are scored around 700, with elite schools scoring up to 1300. When learning outcomes are measured between schools above and below the mean ICSEA, a highly differentiated system emerges whereby students in schools situated in low socioeconomic areas experience barriers to access academic curriculum, learning resources (especially technology-related), and quality pedagogy to support and encourage high academic expectations (Lamb et al. 2001;Naylor and James 2015;Vernon et al. 2019). Educational attainment levels (measured through PISA and other national assessment schemes) and school completion rates are consistently lower in high schools situated in low socioeconomic areas, resulting in fewer students transitioning to university, and higher levels of youth unemployment (Hérault and Kalb 2009;Mission Australia 2017;Vernon et al. 2019). Similarly, this situation is also reported in other countries, including the UK (Reay 2016, 2017). Data from the ABS show that as of 2019 there were 3,948,811 students enrolled in 9503 schools, with 2,263,207 primary students and 1,680,504 secondary students. Mass school closures thus have the potential to impact nearly four million students (ABS 2019; Drane et al. 2020). The potential effects are varied, contingent upon the social and economic capacities of schools themselves and their student characteristics. Certainly, the gap for students from more socially disadvantaged backgrounds may be exacerbated by the very speedy global mass movement to online and off-site education (Chandra et al. 2020). As schools (and universities) increasingly closed their physical locations, these differences in resources, and student characteristics, resulted in differences in the learning opportunities afforded to, or accessed by, young people, with increased risk for students in households experiencing economic disadvantage (Chandra et al. 2020). Within Australia, the lowest quintile for family income comprises 20% of the total student population, which equates to approximately 800,000 school students (ABS 2019). Therefore, a significant proportion of Australian students from more vulnerable backgrounds were impacted when mass school closures occurred at the height of the COVID-19 lockdown (Drane et al. 2020).
For schools providing for students experiencing socio-educational disadvantage, the division of resources and capacity to cope with change is further exacerbated when considering situational characteristics, whether students are in government or non-government schools, metropolitan or country schools, as well as each state's education governance. As a case study, a closer examination of schools in the Department of Education-Western Australia (DOE-WA) revealed schools to be spatially diverse and very heterogeneous, with significant differences in context and socioeconomic status dependent upon the characteristics of individual regions (see Table 1; DOE-WA 2020). For example, the proportion of students in the lowest quintile in the northern metropolitan region is 1.6%. In contrast, for students in schools in the south metropolitan region, 6.9% of students are in the lowest quintile. Additionally, most of these students are in government schools with ICSEAs well below the average of 1000 (DOE-WA 2020).
Although this case study highlights the Western Australian context, other states and countries have significant differences between regions with clusters containing high proportions of students from low SES backgrounds (Chandra et al. 2020;Perry 2018). As one school principal summed up after just three weeks of school closure in Irish schools, the potential impacts on vulnerable school students were both diverse and profound:

"Our school is in an area of severe disadvantage. Many of our children's homes would suffer from food poverty. The children come to school hungry. Many attend the after-school project where they get their dinner, this too has closed. Never mind the effect on their education these children could be starving and spending more time in homes with addiction and violence. School is their safe place." (Burke and Dempsey 2020, p. 42)

Undoubtedly, those students already experiencing economic or other forms of vulnerability must be carefully considered for any future school closure within Australia. Fortunately, Australian educators were in a position to consider common learnings derived from countries that had school closures for extended periods and had already introduced a united national shift to learning in the home (e.g. Burke and Dempsey 2020). Many different factors impact on the transition to remote or off-site learning for all learners, especially given the existing contexts and differences noted for students from more vulnerable backgrounds, with the risk of broad educational disadvantage being particularly influential. The risks and barriers for students who could not attend school due to the pandemic are considered in the next section, along with details of how countries addressed the impacts of school closure to mitigate the contexts and complexities of students' environments.
Risk of long-term educational disengagement
Central to effective learning, student success, and well-being is student engagement, including behavioural, emotional and cognitive engagement (Fredericks et al. 2004).
Within the school community, the teacher-student relationship most often forms the basis for student engagement (Wang and Eccles 2012). As face-to-face teaching fosters (or impedes) the teacher-student relationship, all three components of student engagement, namely behavioural, emotional and cognitive, are impacted upon during mass school closures. Positive face-to-face interactions with teachers and peers at school increase the subjective valuing of students' learning (Wang and Eccles 2012). This is why immediate and constructive teacher feedback in an online environment is vital to stabilise a student's engagement with their education. This necessary feedback may be absent for those learners who have limited or no access to computers, or the Internet, and are isolated in their homes, restricting their ability to connect online and "see" their teachers (Chandra et al. 2020). Therefore, disasters (such as COVID-19), which displace children from their schools, will likely impede the teacher's ability to provide the requisite social support needed for all their students (Motti-Stefanidi 2019). Teacher social support includes interactions that convey appreciation, respect, and caring within the teacher-student interactions, leading to engagement (Wang and Holcombe 2010). When students work collaboratively in the school environment with a well-developed sense of their student voice, they can expand their knowledge base through positive cognitive and emotional interactions (Cunninghame et al. 2020;Järvelä et al. 2016). Again, for students learning in an isolated environment, and unable to access video conferencing resources, the absence of these positive interactions and feedback from their school community may influence both their engagement and emotional well-being. The extent to which schools maintained positive teacher-student relationships despite the fluctuating danger from COVID-19 impacted upon student engagement within schools. To ensure more students could get the most out of their education, many schools adapted to the challenges of COVID-19 by creatively engaging with their school communities to maintain important ceremonies during the pandemic (Masten and Motti-Stefanidi 2020). For many students, engagement includes developing the desire for further study and the expectation among students to transition from high school to higher education (Cunninghame et al. 2020).
Disengaged students can experience adverse academic and social outcomes, such as lower achievement and disruptive behaviour (Simpkins et al. 2015). Students from disadvantaged backgrounds are reported as being more likely to experience markers of disengagement, such as daily absence, disruptive behaviour, and poor school connectedness (Hancock and Zubrick 2015). School connection is a protective factor for many students and is associated with a reduction in risk-taking behaviours, as well as increases in school attendance and academic achievement (Simpkins et al. 2015). Many young people in more vulnerable contexts already have a precarious relationship with education (Harwood et al. 2017), so there was a strong possibility that these cohorts may further disengage from learning if the curriculum content was only provided online (Burke and Dempsey 2020). In summary, a loss of school connectedness, due to school closures, may exacerbate the risk of educational disengagement, especially for vulnerable young people. This is compounded for those children in care, or those moving between households or locations, as often school is the only constant in their lives. Without the presence of routine or essential pastoral care-due to school closures-these young people may permanently disengage from learning (Baker 2020).
One response to this risk of educational disengagement is to sustain meaningful communication between schools and families. One recent report on the social and relational impacts of national school closures in Ireland, based upon survey responses derived from over 2800 school leaders and principals, found that a number of school leaders identified how proactively seeking feedback from parents concerning the educational and emotional needs of the student within the family was one crucial way to sustain connection between schools and their communities (Burke and Dempsey 2020). In Ireland, many schools outlined the need for a more collective approach to learning, including involving family members in the co-design of learning tasks and activities. This echoes the students as partners approach employed within the Australian university sector (Matthews 2016;O'Shea et al. 2020), to work productively with parents and children to design and develop curriculum that is manageable within the home environment and responsive to the learner's needs. The students as partners approach identifies the expertise or cultural strengths of the learners themselves and focusses on learning as collaboration both within and outside the classroom. This is a relational approach to learning ascertained by a joint ownership of the learning process, negotiating the act of teaching as doing with rather than doing to. In the case of schools, involving parents in this process allows a more comprehensive and holistic partnership to evolve.
To facilitate this co-operation, the Spanish educational system made several communication platforms and apps available (e.g. Edugestio) which enabled all parties (teachers, parents/caregivers, and students) to co-create the learning process (UNESCO 2020d). Such initiatives echo two of UNESCO's ten recommendations to ensure learning remains uninterrupted during the pandemic (UNESCO 2020b, p. 1), namely "prioritise solutions to address psychosocial challenges before teaching; create communities and enhance connection". In the face of continued uncertainty, strengthening the partnerships which were created during the COVID-19 school disruptions and leveraging the capacities within the communities will go a long way to ensuring increased flexibility and adaptability in our schools, so they are ready for future unanticipated changes or disruptions.
Emotional well-being and anxiety
As schools made the shift to online learning, supporting students' social and emotional well-being became imperative (Brown et al. 2020). Although some students were faced with online learning issues attributed to technology use, there were also emotional challenges associated with change, including a shift to off-site learning. The emotional consequences related to school closures cannot be underestimated. Anxiety disorders are the most common mental, emotional and behavioural problems among young Australians (Australian Institute of Health and Welfare 2016). Specifically, 13.9% of children and adolescents experience a mental health disorder, including 6.9% experiencing anxiety (Australian Institute of Health and Welfare 2016). As students lost school connectedness due to being physically distanced from school, or having to maintain a social distance from teachers and peers, there was a sense that adults and/or peers in their school were no longer concerned about them as an individual or concerned about their learning. As such, psychological distress such as anxiety and depression has increased during the COVID-19 disruption (Holmes et al. 2020;Pikulski et al. 2020).
Further implications of learning off-site relate to children's emotional safety, as schools may provide a safe and nurturing haven for many students; physical and social isolation may deny them this emotional refuge. Many parents have also experienced psychological distress from personal disruptions due to COVID-19, such as unemployment and financial strain, and for some parents this may also be coupled with ineffective coping mechanisms, further exacerbating psychological distress (Caplan and Schooler 2007;Puterman et al. 2009). Adding to the burden of this fraught situation is an expectation that parents assume the role of educator within the household, a role that many parents may not be equipped for physically or emotionally (Burke and Dempsey 2020; UNESCO 2020b). Proactive emotional support for the families most impacted by this situation involves managing emotional, financial and logistical challenges in a multisystem approach across the community to support vulnerable families (Masten and Motti-Stefanidi 2020).
Children already impacted by poverty and facing the challenge of navigating a pandemic required communities to step up to supply support for families and this has been demonstrated across Australia (e.g. Western Australian police, prisoners, and local businesses supplied furniture and computers to Mt Barker Community College students so they could study at home; Makse 2020). The multisystem mobilisation that occurred between schools and their communities during the COVID-19 pandemic was able to support vulnerable families. Leveraging these systems and processes in the future will build resilience and capacity to cope with future disruption (Chandra et al. 2020).
UNESCO (2020b, p. 1) highlights the importance of addressing the psychosocial challenges associated with the pandemic and recommends that this take priority over teaching, describing the necessity to "ensure regular human interactions, enable social caring measures, and address possible psychosocial challenges that students may face when they are isolated". Strategies could include utilising existing student mentoring programs already established across the Australian university sector. For example, the Australian Indigenous Mentoring Experience (AIME) program has successfully mentored young people in both primary and high school settings for over a decade (O'Shea et al. 2016); and e-mentoring programs, established in many universities, have supported prospective or commencing students (Jardine et al. 2016). Through existing university and community mentors, additional support and advice may be offered to school students online or via telephone. Another strategy may involve undergraduate (student) teachers who may have their intern practicums on hold due to COVID-19, and who might assist schools in an online mentoring capacity, with reciprocal benefits possible for both school students and university undergraduate teachers, as suggested by Sonnemann and Goss (2020).
Digital inclusion
Digital inclusion is based on the premise that all individuals and communities, including those most disadvantaged, have access to, and use of, communication technologies (Thomas et al. 2019). A lack of digital skills and digital access can have a negative impact on learning (Chandra et al. 2020). As evident in the Australian Digital Inclusion Index (Thomas et al. 2019) which measures digital inclusion in three discrete ways (access, affordability, and digital ability), a digital divide exists between students from low and high socio-educational backgrounds. Notably, inadequate technology access negatively impacts students with different levels of access to financial resources (Thomas et al. 2019). This index indicates gradual growth across the three dimensions in Australia; however, digital inclusion remains consistently low in households of lower income (Thomas et al. 2019).
According to the ABS (2018), on average, 13.2% of Australian households do not have access to the Internet. More than 90% of households in advantaged areas have an Internet connection, and less than 40% of households in disadvantaged geographical areas are connected (See Fig. 1; Drane et al. 2020). Approximately 471,600 households from the lowest quintile of household income (i.e. the lowest 20% of the population) have no access to the Internet, and approximately 621,800 households in the lowest quintile do not have access to a laptop or desktop computer. Insufficient access and connectivity make it difficult for students to continue their learning online (See Fig. 2; Chandra et al. 2020;Drane et al. 2020). The digital inclusion report also highlights that the proportion of income required for Internet expenditure has increased faster than actual increases in income (Thomas et al. 2019). This difference has profound negative implications for those on lower or fixed incomes (Thomas et al. 2019), and this is particularly concerning in a time when an estimated 10% of the population has lost income since the pandemic onset. This report also indicates that the amount of household income spent on Internet services has also increased from 1.00% in 2014 to 1.18% in 2019 (Thomas et al. 2019) further impacting the ability of households to maintain Internet costs if there has been a loss of income due to the pandemic.
Equally important is the fact that households experiencing financial hardship may be restricted to accessing the Internet solely via mobile-only plans (rather than fixedline). These mobile-only plans typically have lower download limits, and once these limits are exceeded, additional costs are accrued, resulting in students having limited Internet download for educational purposes. The use of such mobile-only plans is reported in 30.7% of households in the lowest income quartile (Thomas et al. 2019). The digital divide is a multifaceted concept characterised not only by differences in hardware ownership (i.e. laptops, computers) but also by differences in access and connectivity to the Internet as well as the cost, which places an additional burden on households already experiencing financial strain. Moreover, the lack of economic capital within the households puts students at higher risk of experiencing technology-related issues impacting their online learning from home.
To redress the digital divide during the COVID-19 pandemic, many countries recognised the necessity of a variety of media supporting student learning during school closure (Chandra et al. 2020;NZ Government 2020). UNESCO (2020c) has reported that countries impacted by the Ebola crisis 2014 facilitated learning environments via a range of mediums, including online avenues, as well as radio and television. Since March 2020, countries have adopted different strategies to support learning. For example, in Portugal, the government endorsed a partnership involving schools and post office services to ensure the timely delivery of hard copy teaching resources to homes (UNESCO 2020d). The New Zealand (NZ) government provided educational content via two television channels, combined with learning resources available in both hard and soft copy (NZ Government 2020). The NZ government also provided NZ$87.7 million in funding towards this endeavour (NZ Government 2020). Similarly, the Queensland government announced on 12 April 2020, in response to poor Internet connectivity, that curriculum would be taught via television, especially in rural and remote regions (Moore 2020). The programming included content to engage with parents to assist them in home-schooling their children. It is the combined interconnected processes and systems that are notable in these examples.
Clearly, rather than an exclusive Internet reliance, creative use of alternative teaching mediums is required. Such a variety of approaches may also allow for different forms of learning engagement by students. Such initiatives are also in line with another of UNESCO's recommendations: "examine the (technology) readiness and choose the most relevant tools" (UNESCO 2020b, p. 1).
In addition to differing teaching mediums, several countries have implemented loans of electronic equipment, such as laptops or tablets (Chandra et al. 2020). To assist with Internet access, pre-paid wireless Internet was also supplied in some locations (Chang and Yano 2020). The United Arab Emirates, in an attempt to assist the practical application of online teaching, opened a technical support hotline for teachers and students devised to offer free support for individuals facing difficulties with technology (UNESCO 2020d). In Italy, family members in isolation have been offered online courses aimed at relationship management (UNESCO 2020d). These practices reflect the UNESCO recommendations of "ensure inclusion of distance learning programs" and "provide support to teachers and parents on the use of digital tools" (UNESCO 2020b, p. 1).
Technology use
Technology is rapidly evolving and therefore requires continual learning and skill development. Australian students have been varyingly exposed to technology integration. A number of misconceptions exist around the extent of technology competencies of students more broadly. While young people are often assumed to be digitally savvy, their technology use at home is typically for personal use and not for learning purposes (Margaryan et al. 2011;Wang et al. 2014). As technological skills vary among young people, many may not meet the required level of proficiency for learning online. Notwithstanding the discourse around the expertise of the net generation (Tapscott 1998) or young digital natives (Prensky 2001a), who have grown up with technology enmeshed within their daily lives, researchers (Bennett et al. 2008;Margaryan et al. 2011) have argued that this notion of a digital native is not commonplace. Instead, young people employ technology in ways impacted by a range of resources or capacities, including their financial, social and cultural capacity to meet their needs (Bennett et al. 2008).
Despite the conjecture that young people know how to use technology, many school-age learners may not have high levels of self-confidence to use a digital platform for learning or equally may not have acquired the necessary skills to use technology in critical ways (Thompson 2013;Wang et al. 2014;Waycott et al. 2010). Indeed, those students who can selectively access and assess technology content, that is to use technology critically, are also more likely to be students from more materially resourced backgrounds (Perotta 2013; Warschauer and Matuchniak 2010). Further, with the increased use of technology for online learning during the pandemic, there may be greater exposure to inappropriate material as well as an increased risk of cyberbullying (See Fig. 3; Drane et al. 2020). For example, 47,100 families in the lower quintile for household income have reported that their children have been previously exposed to inappropriate online material (ABS 2018;See Fig. 3;Drane et al. 2020).
A survey conducted with principals/leaders after a three-week school closure reported that flexibility in both application and structure of content varied according to different circumstances. Participants recommended avoiding a one-stop-shop approach to teaching content but rather recognised that bespoke strategies may be necessary to combine modalities and also reflect school priorities (Burke and Dempsey 2020). For example, in this period of the pandemic, explicitly articulating learning goals and objectives in teaching remotely is imperative for schools-for example, is the intention to teach new material or simply revise existing content? Similarly, alternative approaches to providing learning experiences should be considered as these may often be the most appropriate for those households with limited connectivity, for example, instead of using a computer, a smartphone can be used to read an email, or an instructional task can be adapted through the repurposing of resources commonly found in the home.
Several teachers and principals identified that national guidelines are required that clearly explain the expectations of schools and families over the time of closure (Burke and Dempsey 2020). These centrally developed guidelines (government or peak teaching body) must aim to address any underlying fears that schools or teachers are somehow failing their students during this crisis (Burke and Dempsey 2020). These approaches are reflected in UNESCO recommendations: "blend appropriate approaches and limit the number of applications and platforms and; develop distance learning rules and monitor students' learning process" (UNESCO 2020b, p. 1).
Conclusion
Globally, we have entered a highly complex and evolving time in the provision of quality teaching and learning. As governments directed students to stay at home, schools were required to change their practices to cater for students that were now learning solely online. A range of strategies had to be quickly implemented by schools to ensure all students were safe and supported in their studies (DESE 2020). Although schools have been faced with a level of disruption not seen in generations, unlike the past, many (but not all) students have had access to technology to continue their education online. The pandemic has shown us that online education is possible. However, for students who are unable to access, or sustain the necessary engagement in online learning, the support of other learning options is essential to ensure equity for all students. Until a vaccine for COVID-19 is found, disruptions to systems that support development and well-being will remain, including uncertainties within our education system. If future disruption calls for mass school closures, then we must learn from the impact of this initial phase of COVID-19 and have systems in place to operate effectively despite being in crisis. Failure to learn from this pandemic risks exacerbating existing educational inequities and subjecting students in Australia, particularly those in vulnerable settings, to an increased risk of adverse social, emotional and behavioural outcomes. Indeed, in Australia, businesses, governments, communities, and parents have realised the intrinsic value of childcare and schools to the operation of all facets of society.
In response to mass school closures, UNESCO provided a number of recommendations to limit disruptions to education (UNESCO 2020b), and there was global evidence that countries were adopting some of these recommendations. These recommendations underpinned approaches to mass school closures that underline inclusivity, appropriate use of technology with varying modalities, the provision of support for both teachers and families, as well as the importance of creating communities that facilitate learning. However, these are recommendations only and need to be further contextualised by place-based and local knowledges of different settings. Therefore, future directions must plan for the education systems, schools, teachers, parents, and students to be prepared at multiple levels for collaborative disaster response. This must include an alternative to being physically present on the school campus, and policymakers and practitioners must ensure equity in the provision of education for all students.
Young people require a sense of stability amid rapid change to help them process, adjust, and develop new strategies for coping with emerging and fluid contexts. Attending school provides such a level of stability for many children. As we gradually move forward out of the pandemic, there is a clear need to nurture our future generations to build capacity for the disasters that will likely come again, but which we cannot anticipate. One approach that could commence immediately is governments and policymakers enlisting the talents and advocacy of youth to enable young people to set their own vision for disaster preparedness. This will be a first step to building resilience and adaptability skills to equip young people to be prepared for future crises. Proactive and multifaceted responses can best address the educational needs of our diverse student populations and also avoid widening existing educational disparities. As governments plan and prepare for future disaster responses, they must re-examine resource allocations to schools to ensure all students have equality of access to resources especially related to technology. In the short term, however, it may be prudent for schools at the local level to both recognise and tap into their communities' capacities, forming partnerships to seek solutions to issues that unfairly impact on the more vulnerable members of society.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

Lynette Vernon is a senior research fellow in the School of Education at Edith Cowan University and an adjunct researcher with the National Centre for Student Equity in Higher Education at Curtin University. She was a secondary science teacher for over 20 years, retraining to complete her Ph.D. in psychology at Murdoch University. She directed the Murdoch Aspirations and Pathways for University project (MAP4U) working with high schools to support students' aspirations. Her research interests are in developmental psychology, especially related to technology use, sleep, and their impact on academic attainment and well-being.
Sarah O'Shea is a Professor of Higher Education and Director of the National Centre for Student Equity in Higher Education (NCSEHE) which is located at Curtin University. She has spent over twenty-five years working to effect change within the higher education (HE) sector through research that focuses on the access and participation of students from identified equity groups. Her institutional and nationally funded research studies advance understanding of how under-represented student cohorts enact success within university, navigate transition into this environment, manage competing identities, and negotiate aspirations for self and others. | 2020-11-28T05:04:45.224Z | 2020-11-27T00:00:00.000 | {
"year": 2020,
"sha1": "44f4286d960290d65eea067842bc1ea564ec9544",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s13384-020-00409-5.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "44f4286d960290d65eea067842bc1ea564ec9544",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
119276904 | pes2o/s2orc | v3-fos-license | Geothermal Casimir phenomena for the sphere-plate and cylinder-plate configurations
We investigate the nontrivial interplay between geometry and temperature in the Casimir effect for the sphere-plate and cylinder-plate configurations. At low temperature, the thermal contribution to the Casimir force is dominated by this interplay, implying that standard approximation techniques such as the PFA are inapplicable even in the limit of small surface separation. Thermal fluctuations on scales of the thermal wavelength lead to a delocalization of the thermal force density at low temperatures. As a consequence, the temperature dependence strongly differs from naive expectations. Most prominently, thermal forces can develop non-monotonic behavior below a critical temperature. We perform a comprehensive study of such geothermal phenomena in these Casimir geometries, using analytical and numerical worldline techniques for Dirichlet scalar fluctuations.
I. INTRODUCTION
The Casimir effect [1], inspiring many branches of physics [2,3], features a decisive geometry dependence: the fluctuation-induced interaction between test bodies or surfaces depends on their shape and orientation. This is because the Casimir effect arises from the fluctuation spectrum in presence of the surfaces relative to the vacuum fluctuations. The spectral properties in turn are a direct consequence of the geometry.
This geometry dependence becomes even more pronounced at finite temperature T: thermal fluctuations can predominantly be associated with a characteristic length scale, the thermal wavelength λ_T ∼ ħc/(k_B T). Thermal fluctuations contribute to the Casimir force whenever the scale set by the thermal wavelength is commensurate with a mode of the fluctuation spectrum as defined by the geometry. Therefore, thermal corrections to the zero-temperature Casimir effect generally cannot be described by universal additive terms or other simple recipes but require a careful analysis of the interplay between geometry and temperature, as first anticipated in [4].
This "geothermal" interplay has first been verified in paradigmatic perpendicular-plates [5] or general inclinedplates configurations [6]. Further evidence for the experimentally relevant sphere-plate configuration has been provided recently in [7][8][9][10]. Typical low-temperature dependencies in these open geometries obey power laws with characteristic exponents that are particular for the geometry. Most importantly, these power laws disagree with predictions from standard local approximation techniques such as the proximity force approximation (PFA) [11] -even in the limit of vanishing surface separation. This is in contrast to zero-temperature forces which are often well described by the PFA in this limit [12].
In this work, we perform a comprehensive study of the geometry-temperature interplay for the sphere-plate and cylinder-plate configuration. We study the Casimir forces induced by fluctuations of a scalar field obeying Dirichlet boundary conditions on the surfaces in order to explore the geothermal interplay in a most transparent fashion. Moreover, we use the worldline approach to the Casimir effect [13] which on the one hand provides for a highly intuitive picture of the fluctuations, and on the other hand facilitates analytical as well as numerical computations from first principles [14][15][16][17].
For instance, the failure of local or additive approximation techniques can directly be inferred from the temperature dependence of the force density: the latter tends to delocalize for decreasing temperatures on scales of the thermal wavelength [8]. Local approximation techniques may only be useful at finite temperature if the strict weak-coupling limit is taken [18], or in the high-temperature limit.
In the present work, we analyze the thermal force density distributions, compute thermal forces for a wide nonperturbative range of parameters, and determine asymptotic limits. This facilitates a careful comparison with local approximation techniques, and, most importantly, yields new and unexpected results for the geometry dependence of thermal forces. For instance, the pure thermal force, i.e., the thermal contribution to the Casimir force, reveals a non-monotonic behavior below a critical temperature for the sphere-plate and cylinder-plate case [19]: the attractive thermal force can increase for increasing distances. This anomalous feature is triggered by a reweighting of relevant fluctuations on the scale of the thermal wavelength -a phenomenon which becomes transparent within the worldline picture of the Casimir effect. Whereas these non-monotonic features already occur for a simple Dirichlet scalar model, nonmonotonicities can also arise from a competition between TE and TM modes of electromagnetic fluctuations in configurations with side walls [20,21].
While there are a number of impressive verifications of the zero-temperature Casimir force [22], a comparison between theory and thermal force measurements suffers from the interplay between dielectric material properties and finite temperature [23], still being a subject of intense theoretical investigations [24][25][26][27][28][29]. In view of the geothermal interplay, we expect that the full resolution of this issue requires the comprehensive treatment of geometry, temperature and material properties, possibly also including edge effects [6,[30][31][32][33]. First results on the sphere-plate configuration using scattering theory and specific dielectric models demonstrate this nontrivial interplay [7,10,34].
As a crucial ingredient for such an analysis, field-theoretical methods for Casimir phenomena have to be used that can deal with arbitrary Casimir geometries. In addition to the worldline methods [13,[35][36][37][38] used in this work, a variety of approaches has been developed in recent years, such as a functional integral approach [39][40][41] and scattering theory [42][43][44][45][46][47][48][49][50]. An extension of these methods to finite temperature is usually straightforward and highly worthwhile in view of the geometry-temperature interplay.
Our paper is organized as follows: after a brief account of the worldline approach to the Casimir effect in Sect. II, the sphere-plate and cylinder-plate configurations are studied at zero temperature in Sect. III. In addition to making contact with the literature, we perform the worldline computation directly for the force instead of the interaction energy. Section IV contains all our main results on the finite-temperature case. Our conclusions are summarized in Sect. V. For reasons of comparison, the proximity-force approximation for the sphere-plate and cylinder-plate case is worked out in detail in appendix A. In addition to explicit formulas which have not comprehensively appeared in the literature so far, we relate the PFA to an approximate treatment of the worldline path integral which helps to understand the differences between the exact and approximate treatments.
II. WORLDLINE APPROACH TO THE CASIMIR EFFECT
We start with a short reminder of the worldline approach to the Casimir effect for a massless Dirichlet scalar in 4 dimensions; for details, see [6,13,37]. For a configuration Σ consisting of two rigid objects with surfaces Σ_1 and Σ_2, the worldline representation of the Casimir interaction energy in D = 4 dimensional spacetime is given by Eq. (1). The worldline functional Θ_Σ[x(τ)] is 1 if the worldline x(τ) intersects both objects Σ = Σ_1 ∪ Σ_2, and is zero otherwise. The expectation value in Eq. (1) is taken with respect to an ensemble of 3-dimensional closed worldlines with a common center of mass x_CM and obeying a Gaußian velocity distribution. For static Casimir configurations, the time component cancels out at zero temperature. Equation (1) has an intuitive interpretation: all worldlines intersecting both surfaces violate the Dirichlet boundary conditions and are removed from the ensemble of allowed fluctuations, contributing to the negative Casimir interaction energy. During the T integration, the extent of a worldline is scaled by √T. Large propertimes T correspond to IR fluctuations, small T to UV fluctuations.
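The displayed form of Eq. (1) did not survive the text extraction. As a rough guide only, with an overall normalization that should be checked against Refs. [6,13,37], the worldline representation of the interaction energy of a massless Dirichlet scalar in D = 4 has the structure

E_{c} = -\frac{1}{2}\,\frac{1}{(4\pi)^{2}} \int_{0}^{\infty} \frac{dT}{T^{3}} \int d^{3}x_{\mathrm{CM}}\; \big\langle\, \Theta_{\Sigma}[x(\tau)] \,\big\rangle ,

where T denotes the propertime and the expectation value is taken over the Gaußian worldline ensemble described in the text.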
Introducing finite temperature T = 1/β by the Matsubara formalism is equivalent to compactifying Euclidean time on the interval [0, β]. The Casimir free energy corresponding to Eq. (1) follows from the same worldline representation with the compactified time direction. For numerical purposes, it is convenient to remove the propertime dependence from the velocity distribution by rescaling the worldlines onto unit loops γ(t), where γ̇ = dγ(t)/dt denotes the unit-loop velocity. The Θ function then reads more explicitly in terms of the rescaled worldlines x_CM + √T γ(t). The worldline integrals are evaluated numerically by Monte Carlo methods, i.e., the path integral is approximated by a sum over a finite ensemble of n_L worldlines. Each worldline γ(t) is furthermore discretized by a finite set of N points per loop (ppl). To generate discretized worldlines with a Gaußian velocity distribution, the v-loop algorithm was used in this work [13,51]. In the remainder, we apply the worldline method to the sphere-plate and cylinder-plate Casimir configurations.
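As an illustration of the numerical setup just described, the following minimal Python sketch generates an ensemble of discretized closed unit loops with an approximately Gaußian velocity distribution. It uses a simple Brownian-bridge construction instead of the v-loop algorithm of Refs. [13,51]; the function name, the ensemble sizes, and the variance normalization are illustrative assumptions and must be matched to the conventions of the rescaled worldline integral.

import numpy as np

def generate_unit_loops(n_loops, n_ppl, dim=3, rng=None):
    # Gaussian increments conditioned to sum to zero yield closed loops
    # (a discretized Brownian bridge); shifting to zero mean places the
    # center of mass at the origin.  The variance 1/n_ppl per increment
    # is a placeholder and must be adapted to the worldline conventions.
    rng = np.random.default_rng() if rng is None else rng
    incr = rng.normal(scale=np.sqrt(1.0 / n_ppl), size=(n_loops, n_ppl, dim))
    incr -= incr.mean(axis=1, keepdims=True)        # enforce closed loops
    loops = np.cumsum(incr, axis=1)                 # loop points gamma(t_k)
    loops -= loops.mean(axis=1, keepdims=True)      # center of mass at zero
    return loops

# Example: 1000 loops with 4000 points per loop in three dimensions.
unit_loops = generate_unit_loops(1000, 4000)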
III. CASIMIR EFFECT AT ZERO TEMPERATURE
Let us first study the sphere-plate and cylinder-plate geometries at zero temperature. Here, we make contact with earlier results for the Casimir effect of a cylinder and sphere above a plate [17,36,37,[42][43][44]. Moreover, we generalize the worldline method to directly compute the Casimir force instead of the energy which leads again to significant simplifications compared to previous energy calculations. The method will be generalized to finite temperature in the next section.
A. Sphere above a plate
We start with the configuration of a sphere above a plate. The sphere of radius R is centered around the origin x = 0. The infinitely extended plate lies in the z = −(a + R) plane, where a is the minimal distance between both objects, see Fig. 1. Since the configuration has a rotational symmetry with respect to the z axis, the three-dimensional x_CM integration reduces to a two-dimensional one. The Casimir energy (1) is then written in cylindrical coordinates (r, z_CM), with the intersection functional factorizing according to Eq. (6). Here, Θ_S and Θ_P account for the intersection of a worldline x_CM + √T γ with the sphere and the plate, respectively. Notice that Θ_S is independent of a, whereas Θ_P, given in Eq. (7), involves γ_zmin, the worldline's extremal extent into the negative z direction.
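The explicit form of Θ_P referred to as Eq. (7) did not survive extraction. From the geometry described above it should read, up to notational differences,

\Theta_{\mathrm{P}} = \theta\!\left(\sqrt{T}\,\left|\gamma_{z\,\mathrm{min}}\right| - \left(z_{\mathrm{CM}} + a + R\right)\right),

i.e., the worldline x_CM + √T γ reaches the plate at z = −(a + R) once its maximal downward extent √T |γ_zmin| exceeds the distance of the center of mass from the plate.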
As we are interested in calculating the Casimir force, F_c = −dE_c/da, the derivative acts only on Θ_P and produces a δ function which eliminates the z_CM integral. The Casimir force thus simplifies to the form of Eq. (8), together with the shorthand notation introduced there. The transition from worldline calculations of the energy to a direct calculation of the force does not only lead to technical simplifications. Also, the classification of relevant worldlines changes slightly: for the Casimir energy in Eq. (1), the worldlines are scaled by the propertime factor √T with respect to their center of mass, which is finally integrated over. For a given center of mass, all points on a worldline x_CM + √T γ(t_i) lie on rays originating from the center of mass. These rays are traced out by the T integral running from T = 0 to T = ∞.
By contrast, the Casimir force in Eq. (8) results from worldlines which are attached to the point x on the plate. For a given point x, all points on a worldline x + √T γ(t_i) lie on rays which now originate from x. Again, these rays are traced out by the T integral. Now, the plate is always touched by construction for all values of T, the remaining problem being the detection of intersection events with the sphere. Adapting methods from [37], it is clear that only those points of a worldline lying on the rays intersecting the sphere eventually pass through the sphere for some values of T. Let {γ(t_k)} denote the set of points on such rays intersecting the sphere, with k labeling these rays for a discretized worldline. Those values of propertime T for which such a point lies exactly on the sphere can be obtained from Eq. (10). During the propertime integration, the worldline always touches the plate, while all its points move on rays passing through (r, 0, −a − R); only points lying inside the cone will pass through the sphere.
Equation (10) has two solutions, T_k^- and T_k^+, given in Eq. (11).
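Since the displayed forms of Eqs. (10) and (11) were lost in extraction, the following reconstruction from the ray picture above is given as a guide only and may differ from the original in notation. The intersection condition and its solutions read

\left| x + \sqrt{T}\,\gamma(t_k) \right|^{2} = R^{2},
\qquad
\sqrt{T_k^{\pm}} = \frac{-\,x \cdot \hat{\gamma}(t_k) \pm \sqrt{R^{2} - |x|^{2} + \left(x \cdot \hat{\gamma}(t_k)\right)^{2}}}{|\gamma(t_k)|},
\qquad
\hat{\gamma}(t_k) = \frac{\gamma(t_k)}{|\gamma(t_k)|},

with real and positive solutions only for rays inside the cone, i.e., for x · γ̂(t_k) < 0 and a non-negative radicand.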
For T ∈ (T_k^-, T_k^+), the point x + √T γ(t_k) lies inside the sphere. The point x can be viewed as the tip of a cone that wraps around the sphere with the opening angle 2α, with sin(α) = R/|x|. The value of the square root in Eq. (11) varies between zero and R. The square root is zero if the ray merely touches the sphere, and R if the ray lies on the cone's axis, i.e., if it coincides with the direction spanned by x.
For a given r, the worldline intersects the sphere if the propertime T lies in one of the intervals bounded by Eq. (11) for all possible values of k. Collecting these intervals for all k, the total support S(r) of the propertime integral is given by their union. The r dependence of this support arises from the fact that the set of k rays lying inside the cone depends on the position r where the worldline is attached to the plate. The Casimir force (6) can now be expressed as an integral restricted to the support S(r). The most time-consuming part of the algorithm is the determination of S(r) if the distance between the sphere and plate is small. To reduce the computational time, it is advisable to reduce the N points per worldline to the subset of k < N points on the above-mentioned rays intersecting the sphere. For a given r, all points on rays outside the cone can immediately be dropped. Furthermore, in the process of taking the r integral from zero to infinity, the opening angle of the cone shrinks. All points on rays which leave the cone through its upper half can then be dropped completely from the calculation, as they will never enter the cone again. Only rays below the cone, i.e., between the cone and the plate, can enter the cone for larger values of r. With these optimizations and with one integral less, the computational time for Casimir force calculations is significantly reduced compared with those of the Casimir energies studied in previous worldline investigations. These simplifications facilitate extending the previously studied parameter range to even larger a/R ratios with higher statistics.
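A minimal numerical sketch of this construction is given below, assuming a single discretized worldline attached to the plate point x and a sphere of radius R centered at the origin; the function name and interface are illustrative and not taken from the original code. It returns the merged propertime intervals whose union constitutes S(r); the remaining propertime integral can then be carried out analytically interval by interval.

import numpy as np

def propertime_support(x, loop_points, R):
    # x           : (3,) attachment point on the plate (outside the sphere)
    # loop_points : (N, 3) unit-loop points gamma(t_k)
    # R           : sphere radius
    # Returns the merged intervals (T_minus, T_plus) in the propertime T
    # during which the worldline x + sqrt(T)*gamma intersects the sphere.
    intervals = []
    x2 = np.dot(x, x)
    for gamma in loop_points:
        g2 = np.dot(gamma, gamma)
        if g2 == 0.0:
            continue
        b = np.dot(x, gamma)
        disc = b * b - g2 * (x2 - R * R)
        # The ray only hits the sphere if it points towards it (b < 0)
        # and the discriminant is positive.
        if b >= 0.0 or disc <= 0.0:
            continue
        root = np.sqrt(disc)
        s_in = (-b - root) / g2      # sqrt(T) at entry into the sphere
        s_out = (-b + root) / g2     # sqrt(T) at exit
        intervals.append((s_in ** 2, s_out ** 2))
    intervals.sort()
    merged = []
    for lo, hi in intervals:
        if merged and lo <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], hi))
        else:
            merged.append((lo, hi))
    return merged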
B. Cylinder above a plate
In many respects, the cylinder-plate configuration is "in between" the sphere-plate configuration and the classic parallel-plates case. This also holds for the experimental realization: the effort of keeping the cylinder parallel to the plate is less than is the case for two parallel plates [52]; for the sphere-plate case, this issue is simply absent. As a clear benefit, the force can, in principle, be made arbitrarily large, since it is proportional to the length of the cylinder.
The geometry of the cylinder-plate configuration can be parameterized analogously to the preceding sphereplate case: we consider the symmetry axis of a cylinder of an (infinite) length L y and radius R to coincide with the y axis. The infinite plate lies in the z = −(R + a) plane, with a being the distance between the cylinder and the plate.
The Casimir force can be obtained directly from Eq. (1), where we use the fact that the Θ_Σ[x(τ)] functional factorizes (cf. Eq. (6)). Here, Θ_Cyl and Θ_P account for the intersection of a worldline x_CM + √T γ with the cylinder and the plate, respectively. Again, only Θ_P depends on a and is given in Eq. (7).
The y integral in the Casimir energy (1) is now trivial due to translational symmetry. The Casimir force can then be obtained directly from Eq. (8), with r = |x_CM| and the propertime support determined by the intersections with the cylinder, Eq. (17).

[Figure 2 caption: Casimir force for the sphere-plate configuration, normalized to the zeroth-order PFA formula. We observe excellent agreement with the exact asymptotic solutions for small a [44] and for large a [42] up to a = 100. For larger a, the number N of points per loop has to be increased far beyond the N = 2·10⁷ used for this plot; otherwise, the sphere falls through the rough mesh provided by the insufficiently discretized worldline, leading to a systematically underestimated force, as is visible here for a > 100 (pink triangles).]
As in the case of the sphere, the worldlines x + √T γ are attached to the plate at the point x. The only difference is that the worldlines are now two-dimensional, a fact which reduces the computational cost. Only those points of a worldline lying on rays intersecting the cylinder pass through the latter for some values of T. The construction of the support of the T integral is identical to that of the sphere-plate case, such that the total Casimir force on the cylinder can be written as in Eq. (13).

C. Zero-temperature results for the Casimir force

It is instructive to compare our results not only with analytic estimates, but also with the much simpler proximity force approximation (PFA). The latter is used by default for the data analysis of geometry corrections in most experiments. It derives from a classical reasoning generalizing the parallel-plate case; thus, deviations of the exact result from the PFA estimate also parameterize genuine geometry-induced quantum behavior.
Roughly speaking, the PFA subdivides the surfaces into small surface elements, applies the parallel-plate force or energy law to pairs of surface elements and integrates the resulting force density. The PFA is inherently ambiguous as the measure for this final integration is not unique: possible alternatives are the surface measures of one of the involved surfaces or any intermediate auxiliary surface. Later on, we will refer to the "sphere-based" or "plate-based" PFA as two generic options for the integration measure. The PFA for the present configuration is discussed in detail in Appendix A.
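As an illustration of this construction, the following minimal sketch evaluates the plate-based scheme for the sphere-plate geometry numerically. The Dirichlet-scalar coefficient c_PP = π²/1440 is the one quoted in Appendix A; the height function h(r) = a + R − √(R² − r²) follows from elementary geometry and is stated here as an assumption.

```python
import numpy as np
from scipy.integrate import quad

C_PP = np.pi ** 2 / 1440.0               # Dirichlet-scalar parallel-plate coefficient

def eps_pp(h):
    return -C_PP / h ** 3                # T = 0 energy per unit area of parallel plates

def energy_plate_based(a, R):
    """Plate-based PFA: integrate eps_PP over the plate area below the sphere,
    with local plate-to-sphere height h(r) = a + R - sqrt(R^2 - r^2)."""
    density = lambda r: 2.0 * np.pi * r * eps_pp(a + R - np.sqrt(R * R - r * r))
    val, _ = quad(density, 0.0, R)
    return val

def force_plate_based(a, R, da=1e-6):
    # force as the negative numerical a-derivative of the PFA energy
    return -(energy_plate_based(a + da, R) - energy_plate_based(a - da, R)) / (2 * da)

# leading-order PFA for comparison, F_LO = 2*pi*R*eps_PP(a):
for a in (0.01, 0.1, 1.0):
    print(a, force_plate_based(a, 1.0), 2.0 * np.pi * 1.0 * eps_pp(a))
```

As expected, the two estimates agree for a → 0 and start to differ at separations of the order of R.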
The concept of the PFA can also be translated into the worldline picture: as an approximation to the ensemble of complicated multidimensional worldlines, we may reduce the worldlines to one-dimensional straight lines. The length of a line then corresponds to the average extent of a worldline in the relevant direction of a given geometry. This picture also explains the occurrence of deviations from the PFA, as well as the sign of these deviations in the Dirichlet case: due to their spatial extent, the worldlines generically intersect both boundaries at smaller values of T than simple straight lines do. As small propertimes yield quantitatively larger contributions, this property results in a greater force. More precisely, the squared size of a worldline is proportional to the propertime parameter, which appears in the denominator of the worldline formula, see Eq. (1). This explains why worldline results are typically underestimated by the PFA at small separations of the objects. For very small separations, the upper bound of the propertime integration can effectively be set to infinity, whereas the lower bound is a measure of the first (proper)time at which a worldline intersects both objects.
The Casimir force for the sphere and the cylinder is compared to the PFA estimates in Figs. 2 and 3, respectively. For similar comparisons for the Casimir energy, see [37]. We have normalized the force to the leading-order PFA, which is exact in the limit of vanishing separation a. The systematic study of the PFA, also from the worldline point of view, is summarized in Appendix A. We observe that the normalized force obtained with worldline numerics does not lie inside the range spanned by the ambiguity of the PFA estimates. Most prominently, the sign of the deviations from the a → 0 limit is different in the Dirichlet scalar case, as can be understood in the worldline picture described above. These observations have frequently been made in the literature before [13,36,42,43,53].
In order to obtain Fig. 2, we have used ensembles with up to n_L = 1.6·10⁶ worldlines and N = 2·10⁷ points per loop. At very small distances a, the number of points per loop is not very important, since part of the systematic error is reduced by normalizing to the leading-order result; even N = 5000 is sufficient, for example, for a = 0.0333 at a precision level of 0.1%. At a = 100, on the other hand, the number of points per loop used was 1.5·10⁷. For larger distances, the number of points per loop has to be increased far beyond 2·10⁷ since, as shown in Fig. 2, even such a high resolution is not sufficient to resolve the small sphere.
Already anticipating our results for finite temperature, this observation gives us a rough estimate of the validity limits at small temperatures. Below, we observe that for a/R ≪ 1 the maximum of the thermal contribution to the force density at low temperatures T < 1/R lies outside the sphere, on scales r ∼ 1/T. From the fact that ensembles with N = 1.5·10⁷ are reliable for those cases where the dominant contribution to the force density lies within r ≲ 100, we conclude that temperatures above T ≳ 0.01/R are accessible also in the limit a → 0.
From an algorithmic point of view, the sphere-plate and cylinder-plate configurations differ with respect to computational efficiency also beyond the trivial dimensional factors: for a sphere at large separations, a large fraction of the points of a worldline can be dropped right from the beginning, as they never see the sphere, i.e., they never lie on a ray inside the cone. The situation is different for a cylinder. Dealing with a two-dimensional problem, we use two-dimensional worldlines, and the number of points per worldline which now have to lie in a wedge is higher than the number lying in a cone in the sphere-plate case. Using comparable worldlines with a large number of points per loop, we thus expect the worldline numerics to break down only at far larger distances a than in the case of a sphere. This is indeed the case, as is visible in Fig. 3.
For Fig. 3, we have also used ensembles with up to n_L = 1.1·10⁶ and N = 2·10⁷. At a = 100, the number of points per loop used was 3·10⁶, increased to N = 1·10⁷ for a = 333 and up to N = 2·10⁷ for a = 1000. As expected, the required number of points per loop for a given a is smaller here than in the case of a sphere. Even at separations as large as a = 1000, we observe excellent agreement with [43]. The corresponding estimate for the validity limit at small temperatures then is T ≳ 0.001/R.
A. General considerations
At finite temperature T = 1/β, the free energy can be decomposed into its zero-temperature part E_c(0) and a finite-temperature correction ∆E_c(T),

E_c(T) = E_c(0) + ∆E_c(T).   (18)

The same relation holds for the Casimir force. Within the worldline representation of the free energy (2), the finite-temperature correction is purely driven by the worldlines with nonzero winding number n. Most importantly, the complicated geometry-dependent part of the calculation remains the same at zero and finite temperature.
Let us first perform a general analysis of the thermal correction for a generic Casimir configuration, following an argument given in [8]. We start from the assumption that the Casimir free energy can be expanded in terms of the dimensionless product aT,

∆E_c(T) = E_c(0) [ c_1 (aT) + c_2 (aT)² + c_3 (aT)³ + … ].   (19)

No negative exponents should be present in Eq. (19), since the thermal part of the energy disappears as T → 0. Generically, the T = 0 Casimir energy E_c(0) diverges for surfaces approaching contact, a → 0. From Eq. (19), we would naively expect the same for the thermal correction. If, however, sufficiently many of the first c_i in Eq. (19) vanish, then the thermal part of the Casimir energy is well behaved and free of any divergence for a → 0. This turns out to be the case for two parallel plates (c_1 = c_2 = 0, and E_c(0) ∼ 1/a³) and for inclined plates (c_1 = 0, and E_c(0) ∼ 1/a²) [31]. Consequently, an extreme simplification arises: the low-temperature limit of the thermal correction can be obtained by first taking the formal limit a = 0. This was first observed in [8] and then successfully applied in [9].
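In the worldline representation, the split (18) is simply the split of the winding sum into n = 0 and n ≠ 0; the following minimal sketch uses the winding weight exp(−n²β²/(4T)) read off from the argument in the next paragraph, with T denoting the propertime here.

```python
import numpy as np

def thermal_winding_weight(T_prop, beta, n_max=200):
    """Sum over nonzero winding numbers multiplying the propertime integrand.
    This is the only place where the temperature 1/beta enters; the
    geometry-dependent worldline factors are unchanged."""
    n = np.arange(1, n_max + 1)
    return 2.0 * np.exp(-n ** 2 * beta ** 2 / (4.0 * T_prop)).sum()

# the weight vanishes both for low temperature (large beta) and for small
# propertime, which is what removes the a -> 0 contact divergence:
for beta in (1.0, 5.0, 20.0):
    print(beta, thermal_winding_weight(1.0, beta), thermal_winding_weight(0.01, beta))
```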
In the following, we argue that there is no divergence of the local thermal force density in the limit a → 0 for general geometries. For a generic geometry, the a-divergent part can only arise from the regions of contact as a → 0. The divergence from these regions at T = 0 is due to the propertime integral over 1/T^(1+D/2), which is bounded from below by ∼ a²: for worldlines smaller than a, the worldline functional is always zero. At finite temperature, the divergence in the thermal correction for a → 0 is removed, since one now integrates over exp(−n²β²/(4T))/T^(1+D/2), which vanishes for every n > 0 in the limit of small propertime T. The only nonanalyticity could arise from the infinite winding sum. That this is not the case can be verified directly: instead of integrating over the support S, we integrate over T from zero to infinity, yielding the bound of Eq. (20).

[Figure 4 caption: Thermal force density, Eq. (22), for perpendicular plates (dashed blue line), a cylinder above a plate (dot-dashed line) and a sphere above a plate (solid red line), both of radius R = 1, in the zero-distance limit a → 0 for T = 1. The sphere-plate curve represents the radial density including the radial measure factor ∼ 2πr. The thermal force densities of the cylinder and the perpendicular plates at r = 0 are equal to the force density of two parallel plates, π²/90 ≈ 0.1097. The thermal force density in the sphere-plate case has a maximum of ≈ 2π × π²/90, where the factor 2π arises from the cylindrical measure. Note that a considerable fraction of the force density lies outside the sphere, which only extends to r = 1. As the temperature drops, the maximum moves monotonically to the right.]
For finite temperature T > 0, Eq. (20) is a finite upper bound for the original local thermal force density. This procedure corresponds to substituting the critical regions of contact by broader (and infinitely extended) parallel plates, see [8]: the thermal contribution is estimated from above by flattening the surfaces in the contact region. The local thermal contribution to the Casimir force of the original configuration is clearly smaller than the finite thermal contribution of parallel plates. As the latter does not lead to divergences for a → 0, there can also be no divergence arising from the contact regions in the general curved case. Of course, infinite geometries may still experience an infinite thermal force, as is the case for two infinitely extended parallel plates, but the local thermal contribution to the force density will be finite. From a practical viewpoint, taking the limit a → 0 first simplifies the calculations considerably.
Another important feature of low-temperature contributions to the Casimir effect is the spread of the thermal force density over regions of size ∼ 1/T, even for very small separations a. This phenomenon was first demonstrated for the configuration of two perpendicular plates at a distance a [8]. (In this configuration, the sphere in Fig. 1 is replaced by a vertical semi-infinite plate extending along the positive z axis, with an edge at z = 0.) The thermal force density ∆f_c(r, T) = f_c(r, T) − f_c(r, 0) for this case, as a function of the coordinate r on the infinite surface measuring the distance from the edge (i.e., from the contact point at a = 0), can indeed be obtained analytically on the worldline from the thermal force, Eq. (21). Here, λ_1 is a worldline parameter measuring the extent of half a unit worldline, i.e., the distance measured in x direction from the left end to the center of mass. It is clear from Fig. 1 that the lower bound of the T integral in Eq. (21) is given by r²/λ_1²: this is the minimal scaling value for which the worldline intersects the semi-infinite vertical plate. From Eq. (21), we read off the force density of Eq. (22). Analytic results for the thermal force can be obtained by rescaling the radial coordinate r → λ_1 r per worldline and using λ_1 = π/2. The thermal force between the perpendicular plates in the limit a → 0 upon integration then yields ∆F_c(T) = −ζ(3) L_y T³/(4π), in agreement with [5].
The perpendicular-plates configuration is special, as it features a scale invariance in the a → 0 limit: Eq. (22) remains invariant under T → αT, r → r/α and ∆f_c → ∆f_c/α⁴ for arbitrary α. As a consequence, knowing (22) for a single temperature value, say T = 1/a, is sufficient to infer its form for all other T. Equation (22) is shown for T = 1/R, R = 1 in Fig. 4. For r < 1/T, the force density stays nearly constant, corresponding to the first term in (22); it rapidly approaches zero for r > 1/T. From this, we draw the important conclusion that the region of constant force density in r direction can be made arbitrarily large by choosing sufficiently low T. Similar consequences arise for temperature effects in other geometries. We plot the thermal force densities for the sphere-plate and cylinder-plate configurations in Fig. 4. The thermal force density for a cylinder above a plate at a = 0 has a shape similar to that of the two perpendicular plates, whereas the radial force density of a sphere above a plate exhibits a maximum due to the cylindrical measure factor r, see Fig. 4. Although these force densities are not scale invariant, owing to the additional dimensionful scale R (the sphere radius), the maximum nevertheless moves away from the sphere as the temperature drops. We conclude that no local approximation tools such as the PFA will be able to predict the correct thermal force, in particular at low temperatures. The fact that the force densities for sphere and cylinder are not scale invariant leads to different temperature behaviors for T < 1/R and T > 1/R, even in the limit a → 0.
B. Sphere above a plate
Let us start with the expansion of the thermal force for a ≪ R and small temperatures T ≪ 1/R. Following our general argument given above, no singularities in a appear in the limit a → 0. Also, we expect the thermal force to decrease with decreasing R. This motivates an expansion of the thermal force with only positive exponents of a and R. Assuming integer exponents, dimensional analysis permits

∆F_c(a, T) = c_0 RT³ + c_1 aT³ + c_2 R²T⁴ + c_3 aRT⁴ + … .   (23)

From our numerical results in the limit a → 0, we observe a T⁴ behavior of the thermal force, see Fig. 5. We conclude that c_0 is negligible with respect to c_2 in the regime T > 0.01 where numerical data is available.
In fact, we conjecture that c_0 vanishes identically, c_0 = 0; if so, also c_1 vanishes, since the configuration would otherwise be more sensitive to temperature at small a than at a = 0.

[Figure 5 caption: Low-temperature behavior of the thermal force at a = 0. For 0.03 < T < 0.1, we observe a linear behavior which can be fitted to ∆F_c(a = 0, T) ≈ −3.96 R²T⁴ + 11.66 R³T⁵. We have used 40 000 loops with 2·10⁶ ppl each. For T < 0.03, the number of points per loop is not sufficient to resolve the sphere properly, inducing systematic errors (black triangles).]

Our conjecture is supported by the following argument based on scaling properties: the dimensionless ratio of the thermal correction and the zero-temperature force has to be invariant under the rescaling

a → αa,  R → αR,  T → T/α.   (24)

The same holds for the ratio of ∆F_c(a, T) at a = 0 and the zero-temperature force at a = 0. For a ≪ R, we can use the PFA for the zero-temperature force, which to leading order yields ∼ R/a³. If c_0 ≠ 0, this leading ratio would be ∼ c_0 (aT)³, which is invariant under the rescaling (24); in addition, this ratio would be invariant under (24) with R held fixed. If c_0 = 0, then this ratio is ∼ c_2 Ra³T⁴, which is invariant only under the full transformation (24). The result that for c_0 ≠ 0 the thermal correction would exhibit the same R dependence as the zero-temperature force for small distances a ≪ R is counterintuitive: whereas the radial force density in the small-distance limit at T = 0 is peaked right under the sphere near r ≃ 0, the thermal correction arises from contributions at much larger r, cf. Fig. 4. As a simple estimate, we expect the thermal correction to be proportional to an effective area of the sphere, ≈ (a + R)·2R + πR²/2, as seen by the worldlines. This estimate is compatible with c_0 = c_1 = 0 and c_2/c_3 ≈ 1.8, as the short check below illustrates.
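Matching the effective-area estimate to the expansion (23) term by term is a one-line computation; the sketch below only restates the arithmetic of the estimate quoted above.

```python
from math import pi

# effective area seen by the distant worldlines:
#   A_eff(a, R) ~ (a + R)*2R + pi*R^2/2 = (2 + pi/2)*R^2 + 2*a*R
# identifying Delta F_c ~ -A_eff*T^4 with c_2*R^2*T^4 + c_3*a*R*T^4 gives
c2_over_c3 = (2.0 + pi / 2.0) / 2.0
print(c2_over_c3)   # ~1.79, compatible with the quoted c_2/c_3 ~ 1.8
```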
The question arises why the PFA approximation yields a T³ behavior despite the additional scale R. The reason is that R appears only in the combination r²/R in the force density, such that T → αT, r → r/α^(1/2) leaves the force density invariant up to a multiplicative constant.
Let us return to Eq. (23). For c_2 and c_3, we obtain numerically c_2 ≈ −3.96 and a nonvanishing c_3 of the same sign (see Figs. 6 and 7). These numbers can be confirmed by the exact T-matrix representation [54]. The fact that both coefficients have the same sign implies that the absolute value of the thermal correction to the Casimir force increases with increasing a for sufficiently small a and T. This apparently anomalous behavior can be understood in geometric terms within the worldline picture [19].
The system has a critical temperature T_cr ≃ 0.34(1)/R: for T > T_cr, the thermal force decreases monotonically with increasing sphere-plate separation a, in accordance with standard expectations. For lower temperatures T < T_cr, the thermal force first increases with increasing separation, develops a maximum, and then approaches zero as a → ∞. The peak position is shifted to larger a values for increasing thermal wavelength, i.e., decreasing temperature. In all cases, the force remains attractive, see Fig. 8. As an example, room temperature T = 300 K corresponds to the critical temperature for spheres of radius R ≃ 2.6 µm. For larger spheres, room temperature is above the critical temperature, such that the thermal force is monotonic; for smaller spheres, the thermal force is non-monotonic at room temperature. If, for instance, T = 70 K and R = 1.6 µm, the thermal force increases up to a ≃ 9 µm.
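These micron-scale estimates follow from restoring SI units in T_cr ≃ 0.34/R (the text works in natural units with ħ = c = k_B = 1); a quick check using only standard constants:

```python
# T_cr = 0.34 * hbar*c / (k_B * R) in SI units
HBAR_C = 3.1615e-26   # J*m
K_B = 1.3807e-23      # J/K

def critical_radius_micron(T_kelvin, coeff=0.34):
    """Sphere radius for which T equals the critical temperature."""
    return coeff * HBAR_C / (K_B * T_kelvin) * 1e6

def critical_temperature_kelvin(R_micron, coeff=0.34):
    return coeff * HBAR_C / (K_B * R_micron * 1e-6)

print(critical_radius_micron(300.0))       # ~2.6 micron at room temperature
print(critical_temperature_kelvin(1.6))    # ~490 K, so T = 70 K is well below T_cr
```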
The high-temperature limit T ≫ 1/R agrees with the PFA prediction for a → 0 and is given by Eq. (26). In the limit a → 0, the PFA yields Eq. (26) for all T. This is because, geometrically, the leading-order PFA corresponds to approximating the sphere by a paraboloid, which is a scale-invariant configuration at a = 0. At finite a, the scale invariance is broken, and a term ∼ +π³ aRT⁴/45 appears on the right-hand side of Eq. (26) at low temperature in the leading-order PFA. By contrast, we observe that the true a → 0 limit is characterized by a T⁴ behavior for small T and a T³ behavior for large T. Also, the sign of the correction at finite a is different: the full worldline result predicts an increase, whereas the PFA correction reduces the absolute value of the force, see Fig. 7.
It is interesting to compare our results to another PFA scheme beyond the leading-order PFA: the plate-based PFA. This scheme is not scale invariant at a = 0, and its low-temperature limit for a ≪ R is also quartic,

∆F_PB^PFA(a = 0, T) = −(π³/90) R²T⁴.   (27)

Equation (27), in fact, corresponds to the thermal force density of two parallel plates integrated over the area of the region below the sphere, πR². Numerically, the corresponding worldline coefficient is more than ten times larger than the PFA prefactor π³/90 ≈ 0.345. In Eq. (27), the low-T behavior at finite a is exponentially suppressed, implying that the plate-based PFA prediction for c_3 is zero, which is again in contradiction with our worldline analysis. The formulae (26) and (27) are derived in the Appendix. The thermal force at a = 0 is shown together with the PFA predictions in Fig. 5.
We now turn to the high-temperature limit, in the strict sense corresponding to T ≫ 1/a and T ≫ 1/R. The second requirement is automatically fulfilled in the small-distance limit a ≪ R. Quantitatively, it turns out that the high-temperature regime is already approached for T ≫ 1/a even when T ≪ 1/R.
A special case arises for a → 0, where the high-temperature limit agrees with the PFA prediction Eq. (26) in leading order. For a > 0, the high-temperature limit is linear in T, and the total force becomes classical, i.e., independent of ħ. This behavior is rather universal, being a simple consequence of dimensional reduction in high-temperature field theories or, equivalently, of the linear high-temperature asymptotics of bosonic thermal fluctuations [6,55-59]. In order to find the high-temperature limit, we perform a Poisson summation of the winding sum. For an appropriate function f, the Poisson summation formula reads

Σ_{n=−∞}^{∞} f(n) = √(2π) Σ_{k=−∞}^{∞} f̂(2πk),   (28)

where f̂ is the Fourier transform (including a 1/√(2π) prefactor) of f. Applying Eq. (28) to the winding sum yields Eq. (29). For finite a, the propertime integral is bounded from below, and the last term vanishes exponentially as T → ∞. Evaluating the worldline integrals for the first two terms, we obtain the high-temperature force, Eq. (30), consisting of a T-independent part and a classical part linear in T with coefficient F̄_c(a). The evaluation of F̄_c(a) is analogous to Eq. (12), where the support S(r) is the same as in the T = 0 case, see Eqs. (11) and (12). The Casimir force remains attractive also at high temperatures.

[Figure 9 caption: High-temperature coefficient F̄_c(a) for the sphere-plate configuration, normalized to the corresponding PFA coefficient. At large a, the normalized coefficient is conjectured to approach a constant. For small a, the behavior is well described by 1 + (0.14 ± 0.015)a. For a > 100, systematic errors similar to those of Fig. 2 set in (black triangles); at finite temperature, these errors are even more pronounced due to a softer propertime exponent 5/2.]

The function F̄_c(a), normalized to the PFA prediction, is shown in Fig. 9. The function a²F̄_c(a) is monotonically increasing on 0 < a < 100 (similar to a³F_c(a)); at small a, we obtain the behavior 1 + (0.14 ± 0.015)a quoted in Fig. 9. In analogy to the zero-temperature force, we conjecture that a²F̄_c(a) remains monotonically increasing and finally approaches a constant for a → ∞. A consequence of this conjecture is that the high-temperature limit then has a simple form for T ≫ 1/a, without any relation to R: for fixed T, the limit a → ∞ immediately corresponds to T ≫ 1/a, and a³F_c(a) itself approaches a constant. Our numerical data shown in Fig. 9 is indeed compatible with this conjecture; however, the large-a limit is difficult to assess due to the onset of systematic errors for a > 100. Comparing Figs. 2 and 9, we notice that the zero-temperature force F_c(a) and the high-temperature coefficient F̄_c(a) behave similarly. This is not surprising, since F̄_c(a) in D = 4 Minkowski space corresponds to F_c(a) in D = 3 Euclidean space due to dimensional reduction in the high-temperature limit. For finite a, the high-temperature limit is already well reached for T ≳ 1/(2a). In the PFA approximation, the weaker thermal force at not too small temperatures is normalized by the weaker zero-temperature force, leading to an accidental cancellation, such that the ratio of thermal to zero-temperature force follows the PFA estimate, Eq. (35), for T ≳ 1/(2a), independently of R.

[Figure 10 caption: Ratio of the thermal to the zero-temperature Casimir force (symbols), to be compared with the PFA estimate (lines). We observe that this ratio is surprisingly well described by the PFA for a wide parameter range, especially in the high-temperature regime. This happens because both F_c(a) and F̄_c(a) increase with respect to the PFA at roughly the same rate, see Figs. 2 and 9.]

A comparison between the full worldline result and the PFA for the normalized force is shown in Fig. 10 for various a and T. Since for small separations a < R the PFA is a reasonable approximation already at medium temperatures 1/(2a) > T > 1/(2R), see Fig. 5, we observe that the ratio between the thermal Casimir force and the zero-temperature result is surprisingly well described by the PFA over quite a wide parameter range. We stress that the PFA is inapplicable for each quantity alone.
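The Poisson resummation (28) applied to the Gaussian winding weight can be checked numerically; the sketch below verifies the identity and makes the origin of the classical linear-in-T behavior explicit (the k = 0 term grows as 1/β = T).

```python
import numpy as np

def winding_sum(T_prop, beta, n_max=200):
    n = np.arange(-n_max, n_max + 1)
    return np.exp(-n ** 2 * beta ** 2 / (4.0 * T_prop)).sum()

def poisson_resummed(T_prop, beta, k_max=200):
    # Fourier transform of exp(-x^2 beta^2/(4 T_prop)) evaluated at q = 2*pi*k
    k = np.arange(-k_max, k_max + 1)
    return (np.sqrt(4.0 * np.pi * T_prop) / beta
            * np.exp(-4.0 * np.pi ** 2 * k ** 2 * T_prop / beta ** 2).sum())

for beta in (0.1, 1.0, 10.0):
    print(beta, winding_sum(1.0, beta), poisson_resummed(1.0, beta))
```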
C. Cylinder above a plate
Analogous to the sphere-plate case, we start with the expansion of the thermal force at low temperature T and for a ≪ R, cf. Eq. (23). Again, we allow only positive exponents of a and R. Even though √R terms appear in an a/R ≪ 1 expansion at zero temperature, our numerical results at small finite temperatures are, somewhat surprisingly, consistent with an expansion of the type

∆F_c(a, T)/L_y = c_0 √R T^(7/2) + c_1 √a T^(7/2) + c_2 RT⁴ + c_3 aT⁴ + … .

[Figure 11 caption: Thermal force for the cylinder-plate configuration. The worldline result and the PFA predictions are normalized to the leading-order PFA, Eq. (38). The leading-order PFA predicts a T^(7/2) behavior of the thermal force for all T. On the other hand, the worldline numerical data is compatible with a T⁴ behavior for small T ≪ 1/R and a T³ behavior for large T > 1/R. This is also observed in the plate-based PFA, whereas the cylinder-based PFA goes as T⁴ ln(T) for small T. All predictions agree in the high-temperature limit. For small T, the worldline result enters the blue area, which is the region spanned by the different PFA approximations.]

The potential leading-order terms c_0 √R T^(7/2) and c_1 √a T^(7/2) are expected to vanish, similar to the sphere-plate case discussed above, since the configuration of a cylinder above a plate is not invariant under a → a/α and T → αT. We have no evidence for a term ∼ √(aR) T⁴, which would lead to a nonanalytic increase of the force. Thus, at small temperatures, the powers of T are found to be integers in leading order. Similar to the sphere-plate case, we expect the low-temperature contributions to the thermal force to be proportional to the effective area ≈ L_y(2R + a) seen by the distant worldlines. This results in the rough estimate c_2/c_3 ≈ 2, which also implies that both coefficients have the same sign.
In the limit a → 0, our data in the regime T > 0.01 is compatible with a T⁴ behavior of the thermal force. For c_2 and c_3, we obtain (see Figs. 12 and 13)

c_2 ≈ −1.007(7),  c_3 ≈ −0.41(4).
As in the case of the sphere, both coefficients have the same sign, i.e., the absolute value of the thermal Casimir force increases with increasing a for sufficiently small a and T < T_cr. For the critical temperature, we obtain T_cr ≈ 0.31(1)/R. As in the case of a sphere, the thermal force decreases monotonically with increasing a for T > T_cr; below the critical temperature, the thermal force first increases up to a maximum and then decreases again, approaching zero for a → ∞. The position of the maximum depends on T and increases with inverse temperature, see Fig. 14. In both cases, however, the thermal force remains attractive.

[Figure 12 caption: Low-temperature behavior of −∆F_c(T)/T⁴ for a cylinder above a plate for R = 1 in the limit a → 0. For 0.02 < T < 0.06, we observe a linear behavior. In the range 0.03 < T < 0.05, our data can be fitted to the form ∆F_c(a = 0, T)/L_y ≈ −1.0065 RT⁴ + 3.163 R²T⁵. For T < 0.02, systematic errors due to worldline discretization artifacts lead to a fast decrease of the data (black triangles). We have used 44 000 loops with 2·10⁶ ppl each.]
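The quoted coefficients follow from a straight-line fit of −∆F_c/(L_y T⁴) against T in the stated window; the sketch below shows the procedure with placeholder values standing in for the actual worldline data.

```python
import numpy as np

# placeholder samples of -Delta F_c/(L_y R T^4) at R = 1 (NOT actual data):
T = np.array([0.030, 0.035, 0.040, 0.045, 0.050])
y = 1.0065 - 3.163 * T

slope, intercept = np.polyfit(T, y, 1)   # linear fit in the window 0.03 < T < 0.05
print(intercept, slope)   # intercept ~ 1.0065 (= -c_2 at R = 1), slope ~ -3.163
```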
The high-temperature limit T ≫ 1/R agrees with the PFA prediction in the limit a → 0, Eq. (38), as expected. As for the sphere, the PFA predicts the same force law (38) in the limit a → 0 for all T. At finite a, the scale invariance is broken, and a term ∼ +0.185 a√R T^(9/2) appears on the right-hand side of Eq. (38) in the leading-order PFA at low temperature. By contrast, we observe different power laws in the limit a → 0 for different temperature regimes: a T⁴ behavior for small T and a T^(7/2) behavior for large T. Also, the sign of the finite-a correction of the full result is opposite to that of the PFA, see Fig. 13, all of which is reminiscent of the sphere-plate case.
Incidentally, the beyond-leading-order PFA schemes reflect the correct behavior much better. We observe that the cylinder-based PFA turns out to be the better approximation here (as did the sphere-based PFA in the preceding section), which is opposite to the zero-temperature case. For the plate-based and cylinder-based PFA, we obtain Eqs. (39) and (40), respectively. The plate-based result is equal to the thermal force of two parallel plates integrated over an area 2RL_y. The plate-based coefficient is more than four times smaller than the worldline coefficient, whereas the leading coefficient of the cylinder-based formula becomes arbitrarily large as T → 0. The formulae (39) and (40) are derived in the appendix. The thermal contribution to the force in the limit a → 0 is shown together with the PFA predictions in Fig. 11.
Let us now investigate the high-temperature limit, which can be obtained by Poisson summation of the winding-number sum as in Eq. (29). A special case arises in the limit a → 0, where the high-temperature limit corresponds to the PFA prediction Eq. (38) in leading order. For a > 0, the high-temperature limit is again linear in T, and the total force becomes "classical", i.e., independent of ħ. Analogous to Eq. (30), we obtain the high-temperature force, where the support S(r) is the same as in the T = 0 case, see Eq. (17). The Casimir force remains attractive also at high temperatures. The function F̄_c(a), normalized to the PFA prediction, is shown in Fig. 15. The function a^(5/2) F̄_c(a) is monotonically increasing for 0 < a < 1000 and is reminiscent of a^(7/2) F_c(a). At small a, we obtain a behavior analogous to the sphere case; at large a, we find the asymptotics implied by Eq. (34) and the analytical zero-temperature law of [43]. We can compare our results with an analytical result [43] in the limit R ≪ H = R + a. The leading-order thermal contribution to the Casimir force in this computation, based on scattering theory, is given by Eq. (46), where the integrand has been approximated to leading order in ln⁻¹(qR). Here, st(q/T) is a 2πT-periodic sawtooth function which, in the range from 0 to 2πT, is given by −q/(2πT) + 1/2. The authors of [43] have given a simple estimate of the integral in the limit R ≪ 1/(2πT) by replacing ln(qR) with ln(2πRT) and carrying out the resulting integral. We compare our worldline results with Eq. (46) as well as with this simple estimate in Fig. 16. Here, we propose another estimate, which is valid for arbitrary T > 1/(R+a). In this case, the sawtooth function is approximately constant for q < 1/(R + a). We approximate the logarithm by inserting the value q_0 for which q exp(−2q(R + a)) is maximal: q_0 = 1/(2(R + a)). In turn, for T < 1/(R+a), the logarithm can be approximated by inserting the value q_0 where q exp(−2q(R+a)) st(q) has its first maximum: q_0 = πT/2. We choose the first maximum, as the integrand oscillates for q > q_0, such that cancellations can be expected to occur. However, choosing q_0 ∼ T always leads to a regular T⁴/ln(T) behavior for small T, whereas Eq. (46) changes sign at very small T, see Figs. 16 and 17. We thus conclude that Eq. (46) is valid for not too small T. The thermal contribution to the Casimir force then takes the form of Eq. (47), where q_0 = 2πT (i.e., ln(q_0 R) = ln(2πRT)) in the Emig et al. approximation [43], whereas q_0 = 1/(2(a + R)) for T > 1/(R + a) and q_0 = πT/2 for T < 1/(R + a) in the approximation proposed here. See Figs. 16 and 17 for the results at a = 10R and a = 100R, respectively. In the small-T limit, Eq. (47) takes the form c(T, a, R) T⁴, where the coefficient c always disappears for q_0 ∼ T as T → 0. In our numerical worldline analysis, the systematic discretization errors lead to a vanishing of the corresponding coefficient as well, since the number of points per worldline becomes insufficient to resolve the cylinder at very small T.

[Figure 16 caption: Thermal contribution to the Casimir force at a = 10R, compared with the analytic result (46) of Ref. [43] and various approximations as discussed in the text. Here, we use the abbreviation H = R + a. Remarkably, our proposed estimates using q_0 = πT/2 and q_0 = 1/(2H), cf. Eq. (47), describe the actual behavior far better than the analytic result (46), which changes sign as T → 0. Also, the T > 1/(R+a) approximation using q_0 = 1/(2H) remains a reasonable estimate even for T < 1/(R + a). For the worldline data, we have used 5000 loops with 2·10⁷ ppl and 7000 loops with 2·10⁶ ppl.]

For an increasing number of points per worldline, however, our data actually appears to point to a nonvanishing coefficient, see Figs. 16 and 17. In any case, we expect the leading-order multipole expansion behind the asymptotic result (46) to break down at low temperatures due to the geothermal interplay.
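The sawtooth function and the choice of q_0 are easy to make concrete; the small sketch below only restates the definitions above and checks the stated maximum.

```python
import numpy as np

def st(q, T):
    """2*pi*T-periodic sawtooth, equal to 1/2 - q/(2*pi*T) on [0, 2*pi*T)."""
    return 0.5 - np.mod(q, 2.0 * np.pi * T) / (2.0 * np.pi * T)

def q0(T, a, R):
    """Logarithm insertion point proposed in the text, with H = R + a."""
    H = R + a
    return 0.5 / H if T > 1.0 / H else np.pi * T / 2.0

# check that q*exp(-2*q*H) indeed peaks at q = 1/(2H), e.g. for a = 10R:
H = 11.0
q = np.linspace(1e-4, 2.0, 20001)
print(q[np.argmax(q * np.exp(-2.0 * q * H))], 0.5 / H)
```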
For large T, on the other hand, Eq. (47) turns into its high-temperature form. We observe that the negative of the T-independent part approaches the zero-temperature limit of the Casimir force for large a/R faster if we choose q_0 = 1/(R + a) rather than q_0 = 1/(2(R+a)). This choice, q_0 = 1/(R+a), then constitutes our second estimate for T > 1/(R + a).
For not too small T, the analytic result and the various q_0 approximations agree nicely with our worldline data, see Figs. 16 and 17. At higher temperatures, the behavior becomes ∼ T, and the different results acquire different slopes, which partly disagree for T → ∞. For a = 10, the analytic result becomes ∼ 0.000424 L_y T, the q_0 = 1/(R + a) approximation yields ∼ 0.000431 L_y T, and the q_0 = 1/(2(R + a)) approximation ∼ 0.000334 L_y T. The numerical worldline result is ∼ 0.00041(3) L_y T.
For large a + R, the temperature coefficient approaches its analytic large-distance form.

[Figure 17 caption: Thermal contribution to the Casimir force at a = 100R, compared with the analytic result (46) of Ref. [43] and various approximations as discussed in the text. Even at such large separations, we observe that the analytic result (46) becomes invalid and even changes sign as the temperature approaches zero. Incidentally, our simple T > 1/(R + a) approximation using q_0 = 1/(2H) describes the actual behavior rather well also for smaller temperatures. For the worldline data, we have used 2500 loops with 6·10⁷ ppl, 5000 loops with 3·10⁷ ppl, and 14 000 loops with 3·10⁶ ppl.]

Let us return to the high-temperature discussion and finally remark that also in the case of a cylinder, the thermal Casimir force normalized to the zero-temperature result is well described by the PFA for T ≳ 1/(2a). Analogously to Eq. (35), we conclude from the dimensional-reduction argument that the ratio of thermal to zero-temperature force in the high-temperature limit T ≳ 1/(2a) is approximately given by the corresponding PFA ratio. Also at medium temperatures, this ratio is surprisingly well described by the PFA, even better than in the case of a sphere, see Fig. 10. The normalized thermal force is shown in Fig. 18 for various a and T.
V. CONCLUSIONS
In this work, we have analyzed the geometry-temperature interplay in the Casimir effect for the case of a sphere or a cylinder above a plate. Since finite-temperature contributions to the Casimir effect are induced by a thermal population of the fluctuation modes, the geometry has a decisive influence on the thermal corrections, as the mode spectrum follows directly from the geometry. A strong geometry-temperature interplay can generically be expected whenever the length scale set by the thermal wavelength is comparable to typical geometry scales. Within our comprehensive study of the Casimir effect induced by Dirichlet scalar fluctuations for the sphere-plate and cylinder-plate geometries, we observe several signatures of this geometry-temperature interplay: the thermal force density is delocalized at low temperatures. This is natural, as only low-lying long-wavelength modes in the spectrum can be thermally excited at low T. As a consequence, the force density is spread over length scales set not only by the geometry scales but also by the thermal wavelength. This implies that local approximation techniques such as the PFA are generically inapplicable at low temperatures. Quantitatively, the low-temperature force follows a T⁴ power law, whereas the leading-order PFA predicts a T³ behavior, a result which has often been used in the analysis of experimental data. Only for ratios of thermal to zero-temperature forces do we observe a potentially accidental agreement with the PFA prediction at larger temperatures. Here, the errors introduced by the PFA for the geometry aspect appear to cancel, whereas the thermal aspects might be included sufficiently accurately.
Another signature of this geometry-temperature interplay is the occurrence of a non-monotonic behavior of the thermal contribution to the Casimir force. Below a critical temperature, this thermal force first grows with increasing distance and approaches zero only at larger distances. This phenomenon is not related to a competition of polarization modes as in [20,21], but exists already in the Dirichlet scalar case. The phenomenon can be understood within the worldline picture of the Casimir effect [19] as being triggered by a reweighting of the relevant fluctuations on the scale of the thermal wavelength. From this picture, it is clear that the phenomenon is not restricted to spheres or cylinders above a plate; we expect it to occur for general compact or semi-compact objects in front of surfaces, as long as the lateral surface extension is significantly larger than the thermal wavelength. In fact, another consequence of the delocalized force density is that edge effects due to finite plates or surfaces will be larger for the thermal part than for the zero-temperature force.
Our results have been derived for the case of a fluctuating scalar field obeying Dirichlet boundary conditions on the surfaces. For different fields or boundary conditions, the temperature dependence can differ significantly from the quantitative results found in this work. This is only natural, as different boundary conditions can strongly modify the fluctuation spectrum. For instance, the thermal part of the free energy in the sphere-plate case exhibits different power laws for Dirichlet or Neumann boundary conditions in the low-temperature and small-distance limit [9]. For future realistic studies of thermal corrections, all aspects of geometry, temperature, material properties, boundary conditions and edge effects will have to be taken into account simultaneously, as their mutual interplay inhibits a naive factorization of these phenomena.

Appendix A: The proximity force approximation

The proximity force approximation is a scheme for estimating Casimir energies between two objects. In this approach, the surfaces of the bodies are treated as a superposition of infinitesimal parallel plates, and the Casimir energy is approximated by

E_PFA = ∫_Σ dσ ε_PP(h).   (A1)

Here, one integrates over an auxiliary surface Σ, which should be chosen appropriately. The quantity ε_PP(h) denotes the energy per unit area of two parallel plates at a distance h apart, which at zero temperature reads

ε_PP(h) = −c_PP/h³,   (A2)

where c_PP = π²/1440 for the Dirichlet scalar case.
As the PFA does not make any reference to boundary conditions, all formulas in this appendix are analogously valid for the electromagnetic case; all force formulas then have to be multiplied by a factor of two for the two polarization modes.

[Figure 19 caption: The effective heights predicted by the PFA for a = 0, R = 1 and T = 0, compared with those of worldline numerics for a sphere and a cylinder, respectively, obtained from the T = 0 force density. The PFA predictions lie in the blue area, which is bounded by Eq. (A4) from below and by Eq. (A8) from above. The effective heights seen by worldlines are well approximated by the leading-order PFA for r not too large. We conclude that for small a/R the force is best described by the leading-order PFA, since for small a/R the force density is concentrated around r = 0.]
At finite temperature, the corresponding expression, Eq. (A3), is obtained by replacing ε_PP(h) with its finite-temperature counterpart. The distance h is conventionally measured along the normal to Σ. The two extreme cases, in which Σ coincides with one of the two bodies, provide us with the region spanning the inherently ambiguous estimates of the PFA. For a sphere at a distance a above a plate, we thus integrate either over the plate ('plate-based' PFA) or over the sphere ('sphere-based' PFA). The Casimir force is then obtained by taking the derivative of (A1) with respect to a. However, for the 'sphere-based' PFA, dh/da ≠ 1 (see below). This implies that deriving the force estimate from the PFA of the energy is, in general, not the same as setting up the PFA directly for the force (the latter would correspond to a surface integral over the parallel-plate force per unit area). In this work, we use the derivation via the energy (A1).
The dependence of the PFA prediction on the choice of Σ disappears to leading order in the limit a → 0 at zero temperature. This result shall be called the 'leading-order' PFA. It can also be obtained by expanding the surface of the sphere/cylinder to second order around the point of minimal distance to the plate and then using the 'plate-based' PFA for this expansion.

[Figure 20 caption: The effective heights predicted by the PFA for a = 10, R = 1 and T = 0, compared with those of worldline numerics for a sphere and a cylinder, respectively, obtained from the T = 0 force density. Note that the effective height for the cylinder has a local maximum at r = 0. At greater separations, the heights seen by worldlines are on average lower than the PFA predictions, resulting in a greater Casimir force. Also at larger separations, the leading-order PFA best reflects the actual situation, while the 'sphere/cylinder-based' PFA turns out to be the worst.]
The corresponding expressions for the height h are given in Eqs. (A4)-(A8). For h_PB, we integrate over r ∈ [−R, R], for h_LO over all r, and for h_SB, h_CB over θ ∈ [−π/2, π/2] with an appropriate measure. Note that right underneath the sphere/cylinder, all h are equal to a. Demanding dθ = dr for θ → 0, we can transform the integration over θ into an integration over r in a simple way by substituting sin(θ) → r/R; the integral then runs from −R to R, with a correspondingly transformed h. Also, a measure factor resulting from R dθ = R dr/√(R² − r²), as well as dh/da, has to be taken into account. At zero temperature, we can absorb these factors into a new effective height, Eq. (A8). With or without the prefactors, h_SB/CB is always greater than h_PB and h_LO and diverges for r → R. Since the factor approaches 1 for small r, all functions h coincide in this limit. A sketch of candidate height functions is given below.
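The following sketch spells out elementary geometric height functions that are consistent with all the properties quoted above (all heights equal a beneath the sphere; the sphere-based height diverges as r → R); they should be read as plausible reconstructions rather than verbatim copies of Eqs. (A4)-(A8).

```python
import numpy as np

def h_plate_based(r, a, R):
    """Height above the plate point r, defined for |r| <= R."""
    return a + R - np.sqrt(R * R - r * r)

def h_leading_order(r, a, R):
    """Parabolic (paraboloid) expansion around the point of closest approach."""
    return a + r * r / (2.0 * R)

def h_sphere_based(theta, a, R):
    """Distance along the sphere normal at polar angle theta, |theta| < pi/2;
    with sin(theta) = r/R this becomes (a + R)/sqrt(1 - (r/R)**2) - R."""
    return (a + R) / np.cos(theta) - R

# all heights coincide underneath the sphere:
print(h_plate_based(0.0, 0.5, 1.0), h_leading_order(0.0, 0.5, 1.0),
      h_sphere_based(0.0, 0.5, 1.0))   # -> 0.5 each
```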
The PFA can also be developed within the worldline formalism. Calculating the Casimir force density for two parallel plates, we have to determine the value of the propertime T at which one-dimensional worldlines, attached to one of the plates, touch the other plate for the first time. This event is encoded in the lower bound of the propertime integral, whereas the upper bound is set to infinity; we thus obtain Eq. (A9). The representation (A9) is suitable for zero and low temperatures, whereas at high temperatures one should use in (A9) the Poisson-resummed winding sum (29). In the low- and high-temperature limits, we encounter cumulants of the worldline extent λ, which can be determined via the analytic expression of [37], Eq. (A10). Let us now point out the difference between the PFA and the worldline approach. In the PFA, we always use one-dimensional worldlines to determine the distance, whereas the worldline dimension in the full formalism corresponds to the dimension of the geometry. To obtain the Casimir force for configurations containing one infinite plate in the worldline formalism, we integrate over this infinite plate, as in the plate-based approach. However, the integration does not stop at the edge of the second body, which in the present case is a sphere or cylinder: at arbitrarily large distances, there are still worldlines which see the sphere/cylinder, i.e., we have to integrate to infinity. We therefore expect the leading-order PFA to reflect the exact force laws best. However, the propertime support is not the same, and thus the worldlines see an effective height different from that of the leading-order PFA, see Figs. 19 and 20.
The shape of the effective worldline height is roughly the same at zero and high temperatures. At low temperature, however, the worldlines are reweighted: only worldlines with large propertimes contribute considerably, and thus worldlines at larger distances from the sphere become increasingly important; also their inner structure comes into play. Using the 'plate-based' PFA, we ignore these effects and take into account only the region below the sphere/cylinder with the same function h_PB; hence, the result is expected to be too small.
In the following, we apply Eq. (A9) (multiplied by dh/da where necessary) to find the PFA expressions for the sphere and the cylinder above an infinite plate, respectively. For the sphere, the evaluation of the leading-order PFA results in an especially simple expression,

F_LO^PFA(a) = 2πR ε_PP(a).

Obviously, this relation remains valid also at finite temperature; we thus obtain Eq. (A12). At finite T and small aT (aT ≲ 1/2), Eq. (A9) yields Eq. (A13); for large aT (aT ≳ 1/2), the expression (A3) leads directly to Eq. (A14). Note that at a = 0 the leading-order PFA predicts a T³ behavior of the thermal force for all T. At finite a, the validity of the low-temperature limit is independent of R. With increasing a, the absolute value of the PFA thermal force is always reduced, irrespective of T, quite the contrary to the full worldline results discussed in the main text.
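At zero temperature, the leading-order relation above is explicit enough to evaluate directly; combining it with Eq. (A2) gives F = −π³R/(720 a³) for the Dirichlet scalar, as the one-liner below illustrates.

```python
from math import pi

C_PP = pi ** 2 / 1440.0          # Eq. (A2), Dirichlet scalar

def F_pfa_lo(a, R):
    """Leading-order PFA force for the sphere-plate case at T = 0:
    F = 2*pi*R*eps_PP(a) = -pi^3*R/(720*a^3)."""
    return 2.0 * pi * R * (-C_PP / a ** 3)

print(F_pfa_lo(0.1, 1.0), -pi ** 3 * 1.0 / (720.0 * 0.1 ** 3))   # identical values
```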
At low temperatures, we have a T⁴ behavior given by the first term in Eq. (A17). For higher T, this T⁴ term is canceled by the T⁴ term involving the exponential function, such that the leading behavior is given by the T³ term. Then, expanding Eq. (A17), we get a T² contribution, Eq. (A18). Subtracting Eq. (A18) from Eq. (A17) and performing the Poisson resummation, we obtain the full behavior at a = 0 for T ≳ 1/(2R), Eq. (A19). Thus, the leading large-T behavior at a = 0 is ∼ T³.
Let us now consider the case a ≠ 0. For a ≪ R and low temperatures T ≲ 1/(2(R + a)) ≈ 1/(2R), we have a T⁴ behavior given by the first term in Eq. (A16). The dependence on a is exponentially suppressed. This corresponds to the case of two parallel plates with an area of πR², where the dependence on a is suppressed exponentially as well.
At medium temperatures, 2a ≲ 1/T ≲ 2(R + a) ≈ 2R, only the second and third terms in Eq. (A16) are exponentially suppressed and can be neglected. The leading order can be found by expanding the remainder and considering only the converging sums.
To find the subleading terms, we again perform the Poisson resummation. For medium temperatures, 2a ≲ 1/T ≲ 2R, and a ≪ R, we then obtain Eq. (A20). The high-temperature limit for 1/T ≲ 2a can be performed irrespective of the actual value of a/R by summing up the whole of Eq. (A16); the result, Eq. (A21), reduces to Eq. (A14) for a → 0, as it should.
For a larger than a ≈ R, the temperature behavior is as follows. At low temperatures, 1/T ≳ 2(R + a), a T⁴ behavior arises from the first term in Eq. (A16). At higher temperatures, the behavior rapidly becomes linear, as given by Eq. (A20), which is valid for 1/T ≲ 2a.
The plate-based force can be obtained in closed form from Eq. (A3) also without using the worldline language, Eq. (A22). For the sphere-based PFA, the thermal Casimir force is given by Eq. (A24), where h_SB(a) is given by (A5). For a = 0, we obtain the PFA approximation in the worldline language, ∆F_SB^PFA(a = 0, T), Eq. (A25), where γ is Euler's constant. The expansion in T does not terminate after a few terms, so we concentrate on the two leading coefficients. The coefficient in front of T⁴ contains the worldline average ⟨ln λ⟩. For an analytical expression, we note that ⟨ln λ⟩ = lim_{m→∞} m ln⟨λ^(1/m)⟩. For large m, we have λ^(1/m) → 1, such that we can expand the logarithm, using Eq. (A10). Thus, the small-T limit of Eq. (A25) is given by Eq. (A27). At a = 0, the PFA estimate |∆F_SB^PFA(a = 0, T)| lies above |∆F_PB^PFA(a = 0, T)|, see Fig. 5. For not too small T, the worldline result lies above both of these PFA predictions; but due to the logarithm in the T⁴ coefficient, the sphere-based PFA becomes larger at smaller T, such that the worldline force enters the area spanned by the PFA predictions, see Fig. 5.
The high-temperature limit can be obtained by expanding Eq. (A25) about T = ∞. The converging terms give the leading-order behavior; for the subleading orders, one has to perform the Poisson summation, but the integral involved is rather complicated and may still be afflicted with artificial convergence problems. The leading-order behavior for a = 0 and large T, Eq. (A28), corresponds to the leading behavior of the plate-based limit (A19). Let us turn to the case of finite a. Expanding Eq. (A24), we obtain the a-dependent part of the thermal force, Eq. (A29). The series has the form of (R/a) ln(1 + a/R), which we have verified explicitly to 10th order. Assuming that this form holds to all orders, we get Eq. (A30). Note that the first two terms agree with the T⁴ coefficient of the plate-based formula (A20); we also see that the absolute value of the thermal force decreases with increasing a. As Eq. (A30) was obtained by interchanging summation and integration, we cannot expect it to describe the full a dependence for all a and T. Indeed, at fixed a, the thermal correction ∆F_SB^PFA(a, T) becomes ∼ T as T → ∞, which is clearly not the case for Eq. (A30).
We can estimate the range of applicability of Eq. (A30) as follows. At high temperature and a ≈ 0, all PFA estimates agree; for large T, the leading behavior is ∼ T³, see, e.g., Eq. (A28). With increasing a, the force should still be attractive. Demanding ∆F_SB^PFA(a, T) < 0, we see that Eq. (A30) leads to a positive thermal force for a ≳ 1/T. On the other hand, in the low-temperature regime, the a = 0 contribution is given by Eq. (A27). Taking only the leading contribution into account and demanding ∆F_SB^PFA(a, T) < 0, we again find that the force becomes positive at a ≳ 1/T. These rather rough estimates demonstrate that the validity range in a becomes narrower with increasing temperature. For very small a, however, the thermal correction is linear in a irrespective of T, whereas the a dependence in the plate-based PFA is exponentially suppressed for small T ≪ 1/(R + a).
At large temperatures T > 1/a, we have the familiar situation of a classical force linear in T. Unfortunately, a simple relation similar to F_LO^PFA = 2πR ε_PP(a) no longer holds for the cylinder, such that the resulting formulas are not related to the known parallel-plate results and are rather complicated.

a. Leading-order PFA

For arbitrary a and T, we obtain Eq. (A34), where ₂F₂ is the hypergeometric function in standard notation. Equation (A34) does not distinguish between a < R and a > R, since the relevant parameter for the different temperature regimes is aT. For small aT (aT ≲ 1/2), we obtain Eq. (A35); for large aT, the Poisson resummation of Eq. (A34) leads to an expression which is, of course, −F_LO^PFA(a, 0) + T F̄_LO^PFA(a). Note that at a = 0 the leading-order PFA predicts a T^(7/2) behavior of the thermal force for all T. At finite a, the validity of the low-temperature limit is independent of R. With increasing a, the absolute value of the thermal force is always reduced, irrespective of T, quite the contrary to the full worldline results.
b. Plate-based PFA
Here, we give only analytic expressions for special limits, since no general expression could be found in closed form. At a = 0 and T ≪ 1/R, the thermal force can be found from the result for two parallel plates with an area of A = 2RL_y,

∆F_PB^PFA(a = 0, T) = −2RL_y (π²/90) T⁴.
"year": 2010,
"sha1": "ba654c60791ff0a67e319e942d026f364b1acfff",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1003.3420",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "ba654c60791ff0a67e319e942d026f364b1acfff",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Ferroelectrically induced weak-ferromagnetism in a single-phase multiferroic by design

We present a strategy to design structures for which a polar lattice distortion induces weak ferromagnetism. We identify a large class of multiferroic oxides as potential realizations and use density-functional theory to screen several promising candidates. By elucidating the interplay between the polarization and the Dzyaloshinskii-Moriya vector, we show how the direction of the magnetization can be switched between 180$^{\circ}$ symmetry equivalent states with an applied electric field.
We present a strategy to design structures for which a polar lattice distortion induces weak ferromagnetism. We identify a large class of multiferroic oxides as potential realizations and use density-functional theory to screen several promising candidates. By elucidating the interplay between the polarization and the Dzyaloshinskii-Moriya vector, we show how the direction of the magnetization can be switched between 180$^{\circ}$ symmetry equivalent states with an applied electric field.
The rational design of new materials with emergent properties is a riveting challenge today in materials physics. It begins with understanding a mechanism to control the interplay between diverse microscopic degrees of freedom in order to create targeted macroscopic phenomena and ends with the discovery or design of new material realizations. When combined with first-principles density-functional theory, this approach provides an efficient strategy to survey the vast space of possible materials to target for synthesis. For example, new multiferroics in which magnetism coexists with ferroelectricity have been discovered where magnetic order itself induces ferroelectricity. Through this specific spin-lattice interaction, it is readily possible to control the direction of the electrical polarization with a magnetic-field [1,2]. An equally fundamental but technologically more relevant problem that has received far less study is the electricfield control of magnetism [3,4,5,6]. In particular, the electric-field switching of a magnetization between 180 • symmetry equivalent states has yet to be demonstrated. The most promising direction to achieve this in single-phase materials involves a ferroelectric distortion inducing weak-ferromagnetism [7,8,9,10]. Discovering a prototypical structure for which this approach might be realized, however, has remained elusive.
Weak-ferromagnetism (wFM) is the phenomenon whereby the predominantly collinear spins of an antiferromagnet cant in such a way as to produce a residual magnetization (M). It can arise as a relativistic correction to Anderson's superexchange, i.e., the Dzyaloshinskii-Moriya (DM) interaction [11,12], A ferroelectric (FE) distortion can induce wFM when the phenomenological invariant E PLM ∼P·(L×M) is allowed in the energy of the antiferromagnetic-paraelectric (AFM-PE) phase, where P and L are the polarization and AFM vector respectively. Due to this term a coupling between the sign of P and M, for fixed L, is evident. It is important to realize that a FE can still exhibit wFM without the FE distortion per se causing the wFM, i.e., without E PLM in the corresponding PE phase. For example, although BiFeO 3 (the most widely studied multiferroic) can display wFM, such an invariant does not exist as previously shown from explicit first-principles calculations [13] and as we argue below from symmetry. The challenge then is understanding how to start with the microscopic properties of E DM and build a material capturing the macroscopic physics of E PLM . In this Letter we present for the first time design criteria that facilitate this process. In general the criteria target a structure for which wFM is symmetry-forbidden in the PE phase but symmetry-allowed in the FE phase [4,13]. Below we use this to identify a class of multiferroic oxides as potential realizations and use density-functional theory to screen several promising candidates. One complication arises in magnetic systems with broken spatial inversion such as a FE: a symmetry allowed interaction [14] may in some cases generate a long-wavelength magnetic spiral canceling the net M, e.g., as in (bulk) BiFeO 3 . Here we consider the situation where this inhomogeneous state is suppressed, e.g., when single-ion anisotropy is large [15].
Our strategy is to formulate the problem in terms of a structural-chemical criterion and a magnetic criterion. To illustrate the idea we consider a two-sublattice antiferromagnet such as BiFeO 3 . A synopsis of the structuralchemical criterion is as follows: start with a PE structure, decorate the lattice with spins such that the midpoint between two spins coincides with a site having inversion (I) symmetry (so that D = 0 by symmetry, i.e., Moriya's first rule [12]), place a FE-active ion at an I-site. Compounds which satisfy this criterion are quite intriguing because if there are no other symmetry elements that would forbid wFM -the magnetic criterion -all that is required to induce a non-zero D and wFM is to remove the I center by controlling the off-centering of the FE-active ion either by temperature, pressure, or electric-field. The magnetic criterion is primarily a question of how the spins order and of their direction with respect to the crystallographic axes. Notice our microscopic based criteria implies that L is odd under I, which is precisely what is required considering macroscopic phenomenology, i.e., E PLM . These criteria facilitate evaluating known compounds or when combined with crystal chemistry principles designing new prototypes. For example, using PE BiFeO 3 as a starting point, we apply them to design a novel structure.
In PE BiFeO 3 , crystallographic space group R3c, the Bi-ions occupy the A-site, Wyckoff position 2a, with local site symmetry 32, while the magnetic Fe-ion occupies a site with inversion symmetry, the B-site, position 2b, site symmetry3. The Fe-spins order ferromagnetically within antiferromagnetically coupled (111) planes with the magnetic easy axis perpendicular to [111]. Although PE BiFeO 3 satisfies the magnetic criterion, I transforms each Fe sublattice onto itself, IL = I(S 1 -S 2 ) = L, as can be seen in Fig. 1(a). In this case of B-site magnetism the structural-chemical criterion is not met and the invariant E PLM is forbidden by symmetry (the PE point group is 2'/m' or 2/m for which wFM is allowed). In contrast, consider the case where the magnetic ion is on the A-site and similarly ordered so that the magnetic criterion is still satisfied, Fig. 1(b). Now, in this case of A-site magnetism, IL = -L. Placing a FE-active ion such as Ti 4+ on the B-site would then satisfy the structural-chemical criterion. The PE magnetic point group is now 2/m' (2'/m) in which wFM is forbidden and by design a FE distortion, via E PLM , would lower the symmetry to m' (m) thereby inducing wFM. The question remaining is whether compounds in our rationalized structure can be synthesized? As we discuss next, several already exist. The mineral Ilmenite FeTiO 3 is one member of a family of compounds [16] that include the titanantes A 2+ Ti 4+ O 3 with A = Mn-Ni. They are all AFM insulators with ordering temperatures T N ∼40K-100K. At atmospheric pressure they form in the Ilmenite structure, space group R3. Ilmenite can be thought of as an ordered corundum structure. At high-pressure both MnTiO 3 and FeTiO 3 have been found to form a quenchable metastable LiNbO 3 LBO-phase, space group R3c [17,18]. Note this LBO-phase is structurally isomorphic to BiFeO 3 except the magnetic and FE atom positions are reversed, for example: MnTiO 3 →BiFeO 3 implies Mn→Bi and Ti→Fe. This is precisely the structural-chemical criterion outlined above. The remaining question is to identify the magnetic ground state in the FE phase to determine if the magnetic criterion is satisfied. In the remainder of this Letter we present a first-principles study of the FE and magnetic properties of the LBO-phase of MnTiO 3 , FeTiO 3 , and NiTiO 3 . We demonstrate that these are realizations of the design criteria and provide a novel simple picture of how the interplay of D and P leads to electricfield control of wFM.
Method.− We performed density-functional calculations using PAW potentials within LSDA+U [19] as implemented in VASP [20,21]. The wavefunctions were expanded in plane waves up to a cutoff of 500 eV. Integrals over the Brillouin zone were approximated by sums on a 6 × 6 × 6 Γ-centered k-point mesh. Phonons were calculated using the direct method. Where noted, noncollinear calculations with L-S coupling were performed. To find appropriate values of the on-site Coulomb U and exchange J_H parameters, we performed a series of calculations to estimate the Curie-Weiss temperature Θ_CW as a function of U for MnTiO3, FeTiO3, and NiTiO3 in the ground-state ilmenite structure, space group R-3 [22]. For all compounds, values of U = 4.5 eV and J_H = 0.9 eV were found to give a reasonable account of the measured values. It should be noted that the presented results do not change qualitatively for reasonable variations of U.
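To make the calibration step concrete, the following minimal Python sketch shows how a mean-field Curie-Weiss temperature can be obtained from a set of near-neighbor exchange integrals. The pair-counted-once convention for E_H, the coordination numbers z_n, and the J_n values below are illustrative assumptions, not the computed results of this work.

# Mean-field estimate of the Curie-Weiss temperature from near-neighbor
# exchange integrals, the kind of quantity used above to calibrate U.
# Assumes H = -sum_<ij> J_ij S_i . S_j with each pair counted once, so
# Theta_CW = S(S+1)/(3 k_B) * sum_n z_n J_n (mean-field approximation).
K_B_MEV = 8.617333e-2  # Boltzmann constant in meV/K

def curie_weiss(J_meV, z, S):
    # Negative Theta_CW signals dominant AFM coupling.
    return S * (S + 1.0) / (3.0 * K_B_MEV) * sum(zi * Ji for zi, Ji in zip(z, J_meV))

# Hypothetical four-shell couplings for an S = 5/2 ion (Mn2+-like).
print(curie_weiss(J_meV=[-2.0, -0.4, 0.1, -0.1], z=[1, 3, 3, 6], S=2.5))  # ~-119 K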
Ferroelectricity.− In the soft-mode theory of ferroelectricity, the PE to FE transition is associated with the softening of a single unstable infrared-active phonon. In the R-3c → R3c transition, this is a PE phonon polarized along [111] of symmetry type A_2u. We calculated the frequencies of these A_2u phonons at T = 0 and found one highly unstable mode, e.g., in MnTiO3, ω_soft ≈ i150 cm⁻¹. The character of this soft mode consists of A-ion and Ti displacements moving against oxygen, similar to other R3c FEs such as BiFeO3 and LiNbO3.
Next we broke inversion symmetry and performed a full structural optimization within the R3c space group. In Table I the calculated lattice constants at T = 0 are shown to be in excellent agreement with those observed at room temperature for MnTiO3 and FeTiO3 (R3c NiTiO3 has not yet been synthesized). The distortion leading from the PE to the FE structure can be decomposed entirely in terms of the soft mode. The atomic displacements are almost equal in magnitude to those in LiNbO3, meeting Abrahams' structural criteria for switchable ferroelectricity [23]. Based on Abrahams' empirically derived formula relating the magnitude of these distortions to the FE transition temperature [23], we estimate FE T_C ~ 1500-2000 K. Finally, using the modern theory of polarization [24], we calculated a large P ≈ 80-100 µC/cm², comparable to that of BiFeO3.
Magnetic structure.− Weak ferromagnetism arises as a small perturbation, originating from the relativistic spin-orbit interaction, to a predominantly collinear magnetic state, i.e., |J| >> |D|, where J and D are the Heisenberg and DM exchange, respectively. This difference in energy scales naturally separates the problem. As such, we first identify the collinear state that minimizes the spin-interaction energy without L-S coupling, E_H = -Σ_ij J_ij S_i · S_j, by extracting the first four nearest-neighbor (n.n.) exchange integrals, J_n, from total energy calculations [22]. We found that the state that minimizes E_H consists of spins aligned ferromagnetically within antiferromagnetically coupled (111) planes, consistent with the magnetic criterion outlined above. This magnetic state arises due to a strong AFM J_1 coupling between n.n. spins in adjacent (111) planes. The Néel temperature calculated within mean-field theory [25] was found to be T_N ~ 100 K for MnTiO3 and NiTiO3 and T_N ~ 250 K for FeTiO3.
For a uniaxial crystal the orientation of the global spin axis relative to the crystallographic axis is given to lowest order by E_ani = Σ_i K_i sin²(θ), where θ is the angle between [111] and L. Depending on the sign of K, the spins lie in a plane perpendicular or parallel to [111], in which case wFM is allowed or forbidden, respectively. To calculate the single-ion anisotropy constant K we first performed a self-consistent density-functional calculation with collinear spins, without L-S coupling. Then, using the charge density and wavefunctions, we performed a series of non-self-consistent calculations with the spin-orbit interaction included for different orientations of the global spin axis. We found that the magnetic easy axis lies in the plane perpendicular to [111], with K ranging from -0.03 meV for MnTiO3 and NiTiO3 to -0.7 meV for FeTiO3 (the much larger anisotropy for FeTiO3 is associated with its orbital degeneracy). Symmetry then allows an additional contribution to the total energy which can cause the spins to cant: E_DM = Σ_ij D_ij · (S_i × S_j), where D_ij is the Dzyaloshinskii vector. Phenomenologically this can be described by E_DLM = D · (L × M), where L = S_1 - S_2 and M = S_1 + S_2. Symmetry requires D to point along [111], i.e., parallel/antiparallel to P. Since K < 0 requires L to lie in the plane perpendicular to [111], and hence perpendicular to D, the induced M is perpendicular to P. Once the direction of L is fixed, the sign of M that minimizes E_DLM is determined by the sign of D.
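The geometry implied by E_DLM can be checked with a few lines of vector algebra. The sketch below, using an arbitrary illustrative in-plane direction for L, verifies that for D along [111] and L in the perpendicular plane, the energy-minimizing canted moment M is perpendicular to both L and D (and hence to P).

import numpy as np

# Vector-algebra check of the DLM invariant E_DLM = D . (L x M): with D
# along [111] and L confined to the plane perpendicular to [111] (the
# K < 0 easy plane), the direction minimizing E_DLM is m = -(d x l).
d_hat = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)   # D direction (parallel to P)
l_hat = np.array([1.0, -1.0, 0.0]) / np.sqrt(2.0)  # example L in plane perp. to [111]

m_hat = -np.cross(d_hat, l_hat)                    # energy-minimizing M direction
m_hat /= np.linalg.norm(m_hat)

print(np.dot(d_hat, np.cross(l_hat, m_hat)))       # -1.0: E_DLM at its minimum
print(abs(np.dot(m_hat, d_hat)) < 1e-12)           # True: M perpendicular to P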
Symmetry allows the spins to cant in the FE phase, but do they, and by how much? To address this question we calculated the self-consistent spin density in the presence of the spin-orbit interaction for the FE structure with, for example, the polarization pointing down. The spins were initialized in a collinear configuration, e.g., L_0 = (2gµ_B S, 0, 0) and M_0 = (0, 0, 0), and then allowed to relax without any symmetry constraints imposed. For MnTiO3 the induced moment was rather weak, M = (0, -0.002, 0) µ_B/formula unit (f.u.), although comparable to that of the canonical weak ferromagnet Fe2O3 and still measurable. The smallness of this result is not too surprising considering that the spin-orbit parameter λ vanishes for Mn2+ in the atomic limit. In contrast, λ is relatively large for Fe2+ and Ni2+, and correspondingly the induced moments increase dramatically: M = (0, -0.03, 0) µ_B/f.u. and M = (0, 0.25, 0) µ_B/f.u. for FeTiO3 and NiTiO3, respectively [26]. Finally, we can approximate the strength of D from the calculated canting angle and J_1. The results are summarized in Table I. Next we proceed to elucidate the interaction between M and P. Calculations similar to those just discussed were performed for the PE structure and for the FE structure with P in the opposite (symmetry-equivalent) direction, again relaxing the spin density without symmetry constraints. In the former, M vanished, confirming that the FE distortion is required for the observed wFM, while in the latter, M switched sign. These results are consistent with our earlier symmetry arguments for a FE distortion inducing wFM, i.e., with the invariant E_PLM ~ P · (L × M) [27], and suggest that M can be switched between stable 180° orientations by an external electric field: switching P switches M. One way to understand this result is to argue that the sign of the DM vector D depends on the direction of P, i.e., P ∝ D.
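A back-of-the-envelope version of the D estimate mentioned above is sketched below, assuming a minimal two-sublattice model E(φ) = J1 S² cos(2φ) + D S² sin(2φ) (J1 < 0 for AFM), which is minimized at tan(2φ) = D/|J1|. The mapping |M| = g S sin(φ) per formula unit and the J1 value used in the example are assumptions, not values taken from Table I.

import math

# Rough estimate of the DM strength |D| from the computed canted moment
# and the dominant coupling J1 in a two-sublattice model (see lead-in).
def dm_strength(M_fu, S, J1_meV, g=2.0):
    phi = math.asin(M_fu / (g * S))           # canting angle in radians
    return abs(J1_meV) * math.tan(2.0 * phi)  # |D| in meV

# Illustrative call with MnTiO3-like numbers: M = 0.002 mu_B/f.u., S = 5/2.
print(dm_strength(M_fu=0.002, S=2.5, J1_meV=3.0))  # ~2.4e-3 meV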
At first this may seem puzzling considering the fact that D_ij is an axial vector. As Ederer and Spaldin pointed out [13], P would have to change the sense of oxygen rotation in order for D to change sign. In Fig. 2 we compare the S-O-S bonds in the PE and up/down FE states, centering on spin S_2 and its nearest neighbors in adjacent (111) planes. A change in chirality of the S-O-S bonds is clearly visible due to the change in the direction of P. The physics becomes clear by examining the microscopic E_DM and realizing that the net Dzyaloshinskii vector D_ij = Σ_α D^α_ij has to be summed over all distinct S_1-O_α-S_2 pathways. In the PE structure two pathways, one left- and one right-chiral, contribute to D_12, as we show in Fig. 3. The orientation of the D^A_12 vector is given by [28,29] D^A_12 ~ r_1A × r_2A, where r_1A is the unit vector pointing along the S_1-O_A direction. There is, however, an additional pathway connecting S_1 to S_2 through O_B. In the PE phase, D^A and D^B can be shown to have equal magnitude but opposite sign, leading to a vanishing net DM interaction. In the FE phase, P strengthens/weakens one pathway over the other, leading to a finite DM interaction (in Fig. 2 we show only the "strong" S_1-O-S_2 pathway in the FE phase). Therefore the origin of ferroelectrically induced wFM in this class of materials is a change in the relative contribution of two DM superexchange pathways (with opposite sign) due to a polar lattice distortion.
Today, the challenge in multiferroics has shifted from finding new magnetic ferroelectrics to identifying materials in which the polarization and the magnetization are strongly coupled. In this work we have presented criteria that have the potential to advance the discovery of such complex materials. These electrically-controlled switchable magnets provide fertile ground for additional studies of how spin and lattice degrees of freedom interact and also hold promise for application in magnetic devices.
Useful discussions with Matthias Bode, Venkat Gopalan, Darrell Schlom, and S.K. Streiffer are acknowledged. Work at the Center for Nanoscale Materials was supported by US DOE, Office of Science, Basic Energy Sciences under Contract No. DE-AC02-06CH11357. | 2007-12-18T20:30:05.000Z | 2007-11-08T00:00:00.000 | {
"year": 2008,
"sha1": "b4891e1ab2620b807abcc7584e6908cbd19c9229",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0711.1331",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "b4891e1ab2620b807abcc7584e6908cbd19c9229",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
13304652 | pes2o/s2orc | v3-fos-license | Real-Time Decreased Sensitivity to an Audio-Visual Illusion during Goal-Directed Reaching
In humans, sensory afferences are combined and integrated by the central nervous system (Ernst MO, Bülthoff HH (2004) Trends Cogn. Sci. 8: 162–169) and appear to provide a holistic representation of the environment. Empirical studies have repeatedly shown that vision dominates the other senses, especially for tasks with spatial demands. In contrast, it has also been observed that sound can strongly alter the perception of visual events. For example, when presented with 2 flashes and 1 beep in a very brief period of time, humans often report seeing 1 flash (i.e. fusion illusion, Andersen TS, Tiippana K, Sams M (2004) Brain Res. Cogn. Brain Res. 21: 301–308). However, it is not known how an unfolding movement modulates the contribution of vision to perception. Here, we used the audio-visual illusion to demonstrate that goal-directed movements can alter visual information processing in real-time. Specifically, the fusion illusion was linearly reduced as a function of limb velocity. These results suggest that cue combination and integration can be modulated in real-time by goal-directed behaviors; perhaps through sensory gating (Chapman CE, Beauchamp E (2006) J. Neurophysiol. 96: 1664–1675) and/or altered sensory noise (Ernst MO, Bülthoff HH (2004) Trends Cogn. Sci. 8: 162–169) during limb movements.
Introduction
The natural world stimulates our many senses, which provide a unique percept through multisensory combination and integration [1]. Using various methods, multisensory research has repeatedly shown that certain modalities can alter the perception of other modalities [2][3][4][5][6]. For example, it has been reported that the perceived number of brief visual flashes is influenced by the number of short accompanying beeps [7] (e.g. 2 flashes accompanied with 1 beep often yields the perception of 1 flash: i.e. fusion illusion). Further, the presence of the illusory experience is associated with changes in primary visual cortex activity [8]. This audio-visual illusion also demonstrates the dominance of audition in a temporally demanding task. In contrast, we know from other multisensory integration studies that vision predominantly influences audition in spatially demanding tasks [2][3][4][9]. However, the influence of limb movement on multisensory integration is not known. Indeed, in multisensory studies, either the stimuli were presented to a participant at rest or the influence of any required motor responses on the investigated perceptual processes was not assessed.
Neural-behavioral research has accumulated evidence that vision is an important source of afferent information for the planning and control of goal-directed movements [10]. More importantly, it has also been shown that action can influence the perception of non-visual events [11,12]. Specifically, the production of a voluntary movement can modulate the detection of a tactile stimulation (i.e. action onset decreases the perception of a brief finger stimulation [11,12]). It has been suggested that such "gating" of tactile information is associated with modulation of central nervous system activity at the pre-cortical level [13]. Thus, if producing a voluntary movement reduces the tactile detection threshold, it is possible that the processing of other sensory inputs relevant to the experimental task increases.
This study aimed to demonstrate that a spatially demanding goal-directed action modulates the relative processing of audition and vision in real-time. Participants (n = 14) quickly moved their right index finger towards a small visual target, and the presentation of 1 or 2 flashes accompanied with 1 or 2 auditory beeps (i.e. audio-visual illusion stimuli [5]) started at 0 ms, 50 ms, 100 ms, 150 ms, or 200 ms relative to movement onset. We hypothesized that the perceptual effects of the audio-visual illusion would be influenced by the real-time characteristics of the voluntary action. Such a result would support the idea that cue combination [1] can be modulated in real-time during voluntary movements.
Results
In addition to replicating the audio-visual illusion, we found that participants were less influenced by the illusion when their limb was moving at high velocities. When 2 flashes accompanied 1 beep, participants reported 1 flash (i.e. fusion illusion) more often in the early and late portions of the movement (i.e. the 0 ms and 200 ms conditions, corresponding to the lowest limb velocities) than in the 50 ms and 100 ms conditions (i.e. at the highest limb velocities) (ps < .02). As such, the fusion illusion was experienced 57% and 63% of the time in the 0 ms and 200 ms conditions respectively, while it was only reported 44% of the time in the 50 ms and 100 ms conditions (see Table 1). When contrasting limb velocity at stimulus midpoint (i.e. 50 ms after stimulus onset) with the number of perceived flashes in the 2 flashes with 1 beep condition, we observed, across all experimental trials presenting 2 flashes and 1 beep, that the fusion illusion was linearly reduced as a function of limb velocity (see Figure 2).
Discussion
Our results show that the fusion illusion occurred less often at the high than at the low velocity stages of the limb trajectory. While neural-behavioural, psychophysical and neuroimaging studies support the idea that different modalities are combined and integrated [14], this study shows that the mere fact of moving a limb influences such multisensory integration processes in real-time. Possible explanations for these results include sensory "gating" mechanisms [11,12] and/or varying sensory noise levels [1] associated with goal-directed behaviors. That is, the altered relative contribution of vision and audition during voluntary action could be associated with reduced processing of non-visual cues during a visually-guided task (i.e. "gating") [11,12] and/or increased visual processing caused by larger contrasts of the limb position on the retina between visual samples (i.e. a higher visual signal-to-noise ratio at high limb velocities) [1].
In terms of the sensory "gating" perspective [11,12], one possible explanation is that the central nervous system modulates its use of sensory information in real-time, as a function of the relevance of the afferent cue. Chapman and colleagues observed that tactile cues were less likely to be detected in close temporal proximity to the onset of a finger movement [11,12]. This decreased tactile sensitivity could be explained by an increased sensitivity to visual cues, which were relevant to the task at hand. In the present study, we purposefully employed a spatially demanding goal-directed action that requires extensive use of visual information [10]. Using such a task, it is reasonable to suggest that the central nervous system modulated its use of visual information in real-time as a function of the relevance of the visual cue. Indeed, high limb velocities can elicit stronger visual signals for the control of goal-directed actions than low limb velocities.
At high limb velocities, two subsequent visual samples provide greater differences in the position of the limb on the retina (i.e., a stronger signal) than at low limb velocities. If the noise present in the visual cues provided to the central nervous system is relatively stable, then the signal-to-noise ratio is modulated in real-time as a function of limb velocity during goal-directed action. Such a signal-to-noise ratio is known to influence multisensory cue combination and integration [1,15]. Thus, our study suggests that optimal cue combination and integration could be modulated in real-time during goal-directed movements, which is not mutually exclusive with sensory "gating" [11,12].
In summary, our observations demonstrate the real-time modulation of visual perception during the production of voluntary movements. Thus, the relative contribution of visual and auditory information to our percept is not held constant throughout a goal-directed movement, but is at least modulated as a function of limb velocity.
Materials and Methods
Fourteen right-handed persons (5 females) with normal or corrected-to-normal vision and hearing were recruited from the University of Toronto community (mean age: 23.8 years, SD = 4.4). This protocol was approved by the University of Toronto Research Ethics Board and is in accordance with the standards outlined in the 1964 Declaration of Helsinki. Written informed consent was obtained prior to any experimental involvement.
The task was performed using an aiming console (see Figure S1) equipped with 2 LEDs (target: green LED; flash: red LED) and a piezoelectric buzzer (2900 Hz). The position of an infra-red emitting diode (IRED), sampled at 250 Hz (Optotrak Certus, Northern Digital Inc.), and a custom-made program (MATLAB, The MathWorks Inc.) were used to track the participant's movements and control stimuli presentation, respectively.
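As an illustration of how limb kinematics can be extracted from such recordings, the sketch below differentiates 250 Hz position samples to obtain the resultant limb speed at a given time after movement onset. The smoothing-free central-difference scheme and the synthetic reach profile are assumptions for demonstration, not the authors' exact analysis pipeline.

import numpy as np

# Limb speed at a chosen time from 250 Hz IRED position samples.
FS = 250.0  # sampling rate in Hz

def velocity_at(positions_mm, t_event_s):
    """Resultant speed (mm/s) at t_event_s from an (N, 3) position array."""
    dt = 1.0 / FS
    vel = np.gradient(positions_mm, dt, axis=0)   # per-axis central differences
    speed = np.linalg.norm(vel, axis=1)           # resultant speed
    return speed[int(round(t_event_s * FS))]

# Example: a noisy-free 30 cm reach lasting ~320 ms, sampled at 250 Hz.
t = np.arange(0.0, 0.32, 1.0 / FS)
x = 300.0 * (1.0 - np.cos(np.pi * t / 0.32)) / 2.0  # smooth reach profile (mm)
pos = np.column_stack([x, np.zeros_like(x), np.zeros_like(x)])
print(velocity_at(pos, t_event_s=0.150))            # speed 150 ms after onset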
After placing the IRED on the tip of the right index finger, participants sat down and were asked to reach from a home position to a target (30 cm movement amplitude), which was aligned with their mid-sagittal axis (see Figure 2). In a familiarization phase, participants were taught how to complete the movement within approximately 290 to 350 ms. In the experimental phase, 1 or 2 red flashes accompanied with 1 or 2 auditory beeps were also presented below the green target LED at 0 ms, 50 ms, 100 ms, 150 ms, or 200 ms relative to movement onset. Each condition was presented 12 times (i.e. 240 trials) in a pseudo-random fashion without repeating a condition more than 3 times in a row. Stimulus duration was 24 ms and stimulus onset asynchrony was 36 ms (see Figure S2). Participants were asked to report the number of flashes perceived after each trial (i.e. 1 or 2 flashes). ANOVAs were performed on the mean number of perceived flashes. Alpha level was set at .05 and Tukey HSD post hoc procedures were performed on the significant main effects and interactions, when appropriate.
Figure S1 Aiming console. Board viewed from the participant's side of the table. The custom built console, measuring 50 cm wide × 27.5 cm deep × 8.5 cm high, was placed 36 cm from the edge of the table from where participants were seated. A green target LED was located 30 cm to the left of home position. The red stimulus LED was located 6 cm below the target. The piezoelectric auditory stimulus was located 7 cm below the target, within the console. Participants aligned their mid-sagittal plane with the target. Found at: doi:10.1371/journal.pone.0008952.s001 (7.52 MB TIF)
Figure S2 Temporal profile of stimuli. Profile of two-cue stimulus presented to one modality and one-cue stimulus presented to another modality. Found at: doi:10.1371/journal.pone.0008952.s002 (0.39 MB TIF) | 2014-10-01T00:00:00.000Z | 2010-01-29T00:00:00.000 | {
"year": 2010,
"sha1": "60ee5b1bac9a3cef6e1b79062a8b4456ae394df3",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0008952&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "60ee5b1bac9a3cef6e1b79062a8b4456ae394df3",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
51945532 | pes2o/s2orc | v3-fos-license | Unitary Multiperfect Numbers in Certain Quadratic Rings
A unitary divisor $c$ of a positive integer $n$ is a positive divisor of $n$ that is relatively prime to $\displaystyle{\frac{n}{c}}$. For any integer $k$, the function $\sigma_k^*$ is a multiplicative arithmetic function defined so that $\sigma_k^*(n)$ is the sum of the $k^{th}$ powers of the unitary divisors of $n$. We provide analogues of the functions $\sigma_k^*$ in imaginary quadratic rings that are unique factorization domains. We then explore properties of what we call $n$-powerfully unitarily $t$-perfect numbers, analogues of the unitary multiperfect numbers that have been defined and studied in the integers. We end with a list of several opportunities for further research.
Introduction
We convene to let N and P denote the set of positive integers and the set of (integer) prime numbers, respectively.
The arithmetic functions σ_k are defined, for every integer k, by σ_k(n) = Σ_{c|n, c>0} c^k. The unitary divisor functions σ*_k are defined by σ*_k(n) = Σ_{c|n, c>0, gcd(c, n/c)=1} c^k. In other words, σ*_k(n) is the sum of the k-th powers of the unitary divisors of n, where a unitary divisor of n is simply a positive divisor c of n such that c and n/c are relatively prime. The author has invented and investigated analogues of the divisor functions in imaginary quadratic integer rings that are unique factorization domains [2]. Here, we seek to investigate analogues of the unitary divisor functions in these rings.
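For readers who wish to experiment, a direct (unoptimized) Python implementation of σ*_k in the integers is a few lines; it recovers the familiar unitary perfect numbers below 100.

from math import gcd

# sigma*_k(n): sum of k-th powers of divisors c of n with gcd(c, n/c) = 1.
def unitary_sigma(n, k):
    return sum(c ** k for c in range(1, n + 1)
               if n % c == 0 and gcd(c, n // c) == 1)

# sigma*(6) = 1 + 2 + 3 + 6 = 12 = 2 * 6, so 6 is unitary perfect.
print(unitary_sigma(6, 1))                                          # 12
print([n for n in range(2, 100) if unitary_sigma(n, 1) == 2 * n])   # [6, 60, 90]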
For any square-free integer d, let O_Q(√d) be the quadratic integer ring given by O_Q(√d) = Z[√d] if d ≡ 2, 3 (mod 4) and O_Q(√d) = Z[(1+√d)/2] if d ≡ 1 (mod 4). Throughout the remainder of this paper, we will work in the rings O_Q(√d) for different specific or arbitrary values of d. We will use the symbol "|" to mean "divides" in the ring O_Q(√d) in which we are working. Whenever we are working in a ring other than Z, we will make sure to emphasize when we wish to state that one integer divides another in Z. For example, if we are working in Z[i], the ring of Gaussian integers, we might say that 1 + i | 1 + 3i and that 2|6 in Z. We will also refer to primes in O_Q(√d) as "primes," whereas we will refer to (positive) primes in Z as "integer primes." For an integer prime p and a nonzero integer n, we will let υ_p(n) denote the largest integer k such that p^k | n in Z. For a prime π and a nonzero number x ∈ O_Q(√d), we will let ρ_π(x) denote the largest integer k such that π^k | x. Furthermore, we will henceforth focus exclusively on values of d for which O_Q(√d) is a unique factorization domain and d < 0. In other words, d ∈ K, where we will define K to be the set {−163, −67, −43, −19, −11, −7, −3, −2, −1}. The set K is known to be the complete set of negative values of d for which O_Q(√d) is a unique factorization domain [3].
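As a minimal sketch, the valuation υ_p defined above is straightforward to compute for rational integers; the analogous ρ_π would require arithmetic in O_Q(√d) and is not shown.

# v(p, n): the largest k with p**k dividing the nonzero integer n.
def v(p, n):
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k

print(v(2, 48), v(3, 48))  # 4 1, since 48 = 2**4 * 3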
The norm and absolute value of an element z are defined, respectively, by N(z) = z·z̄ and |z| = √(N(z)). We assume familiarity with the properties of these objects, which are treated in Keith Conrad's online notes [1]. For x, y ∈ O_Q(√d), we say that x and y are associated, denoted x ∼ y, if and only if x = uy for some unit u in the ring O_Q(√d). Furthermore, we will make repeated use of the following well-known facts.
If p is an integer prime, then exactly one of the following is true.
• p is a prime in O_Q(√d). In this case, we say that p is inert in O_Q(√d).
• p ∼ π² and π ∼ π̄ for some prime π ∈ O_Q(√d). In this case, we say p ramifies (or p is ramified) in O_Q(√d).
• p = ππ̄ and π ≁ π̄ for some prime π ∈ O_Q(√d). In this case, we say p splits (or p is split) in O_Q(√d).
Similarly, if π ∈ O_Q(√d) is a prime, then π divides exactly one integer prime p, and N(π) is either p or p².
For a nonzero complex number z, let arg(z) denote the argument, or angle, of z. We convene to write arg(z) ∈ [0, 2π) for all nonzero z ∈ C. For each d ∈ K, we define the set A(d), via a restriction on arg(z), so that A(d) contains exactly one associate of each nonzero element of O_Q(√d).
Definition 1.1. Let d ∈ K, and let n ∈ Z. Define the function δ*_n by δ*_n(z) = Σ_{x♦z} |x|^n, where x♦z means that x ∈ A(d), x | z, and x and z/x are relatively prime.
Remark 1.1. We note that, for each x in the summation in the above definition, we may cavalierly replace x with one of its associates. This is because associated numbers have the same absolute value. In other words, the only reason for the criterion x ∈ A(d) in the summation that appears in Definition 1.1 (which is implied by the relation x♦z) is to forbid us from counting associated divisors as distinct terms in the summation, but we may choose to use any of the associated divisors as long as we only choose one. This should not be confused with how we count conjugate divisors (we treat 2 + i and 2 − i as distinct divisors of 5 in Z[i] because 2 + i ≁ 2 − i). Also, note that the functions δ*_n depend on the ring in which we are working (this is also true of the function I*_n, which we will soon define).
We will say that a function f : O_Q(√d) → C is multiplicative if f(xy) = f(x)f(y) whenever x and y are relatively prime.
Theorem 1.1. For any n ∈ Z, the function δ*_n is multiplicative.
Proof. Let z_1 and z_2 be relatively prime elements of O_Q(√d). If x♦z_1z_2, then x is associated to a product x_1x_2, where x_1♦z_1 and x_2♦z_2. Furthermore, the numbers x_1 and x_2 are unique because of the requirement that they lie in A(d). On the other hand, if x_1♦z_1 and x_2♦z_2, then x_1x_2 is associated to a unique number x such that x♦z_1z_2. Therefore, δ*_n(z_1z_2) = δ*_n(z_1)δ*_n(z_2). We say z is n-powerfully unitarily t-perfect in O_Q(√d) if I*_n(z) = t, and, if t = 2, we simply say that z is n-powerfully unitarily perfect in O_Q(√d). If n = 1, we will omit the adjective "1-powerfully."
Proof. Part (a) is fairly trivial. To prove part (b), let z_1 and z_2 be relatively prime elements of O_Q(√d). We may use Theorem 1.1 to write I*_n(z_1z_2) = I*_n(z_1)I*_n(z_2). To prove part (c), it suffices, due to the truth of part (b), to show that I*_n(π^α) = δ*_{−n}(π^α) for an arbitrary prime π and positive integer α. We write z = ∏_{j=1}^{r} π_j^{α_j}, where, for all distinct j, ℓ ∈ {1, 2, . . ., r}, π_j is a prime, α_j is a positive integer, and π_j ≁ π_ℓ. Combining parts (b) and (c) of Theorem 1.2, we see that, for any positive integer n, we may calculate I*_n(z) as I*_n(z) = ∏_{j=1}^{r} (1 + |π_j|^{−α_j n}).
As an example, let us calculate I*_2(30) in O_Q(√−1): using the factorization of 30 into Gaussian primes, the product formula gives I*_2(30) = 2. Thus, 30 is 2-powerfully unitarily perfect in O_Q(√−1). Now that we have established the foundations that we will need, we may study the properties of some n-powerfully unitarily t-perfect numbers.
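A minimal sketch of this verification, assuming the Gaussian factorization 30 ∼ (1+i)²·3·(2+i)(2−i) and recording each prime by its norm, confirms I*_2(30) = 2 with exact rational arithmetic.

from fractions import Fraction

# Each nonassociated Gaussian prime of 30 is stored as (N(pi), alpha);
# recall |pi|^2 = N(pi), so |pi|^(-alpha*n) = N(pi)^(-alpha*n/2).
factors = [(2, 2), (9, 1), (5, 1), (5, 1)]  # (1+i)^2, 3, (2+i), (2-i)

def I_star(n, factors):
    prod = Fraction(1)
    for norm, alpha in factors:
        # Valid here because alpha*n is even for every factor when n = 2.
        prod *= 1 + Fraction(1, norm ** (alpha * n // 2))
    return prod

print(I_star(2, factors))  # 2: so 30 is 2-powerfully unitarily perfect in Z[i]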
Proof. Let Ψ(z) be the set of all primes in A(d) that divide z, and let Φ be the set of all primes in A(d). Then, for any integer n ≥ 3, we may bound I*_n(z) by a product over Φ; if n ≥ 5, this product may in turn be bounded in terms of ζ, where ζ denotes the Riemann zeta function.
Next, suppose n = 4. Let us assume that d ≠ −7, so that 2 does not split in O_Q(√d); the desired bound then follows. Now, assume that d = −7, so that 3 is inert; we then obtain the bound in this case as well. Finally, suppose n = 3 and I*_3(z) is rational. If π is a prime and |π| = √p for some integer prime p, then it is easy to see that ρ_π(z) must be even in order for I*_3(z) to be rational. Suppose now that z satisfies I*_n(z) = t for some n ∈ {1, 2} and t ∈ N\{1}. Then N(z) is even.
Proof. Assume, for the sake of finding a contradiction, that N(z) is odd.
Write z = ∏_{j=1}^{r} π_j^{α_j}, where, for all distinct j, ℓ ∈ {1, 2, . . ., r}, π_j is a prime, α_j is a positive integer, and π_j ≁ π_ℓ. Suppose that n = 1. If N(π_j) is an integer prime for some j ∈ {1, 2, . . ., r}, then it is easy to see that α_j must be even in order for I*_1(z) = ∏_j (1 + |π_j|^{−α_j}) to be an integer (or even a rational number). This means that |π_j|^{α_j} is an integer for each j ∈ {1, 2, . . ., r}. Furthermore, |π_j|^{α_j} must be odd for each j ∈ {1, 2, . . ., r}, so 2^r | δ*_1(z) in Z. As δ*_1(z) = t|z| and |z| is odd, we see that 2^r | t in Z. However, t = I*_1(z) = ∏_{j=1}^{r} (1 + |π_j|^{−α_j}) < 2^r, which is a contradiction. Now, suppose n = 2. Then δ*_2(z) and N(z) are positive integers. Because δ*_2(z) = ∏_{j=1}^{r} (1 + N(π_j)^{α_j}) and N(π_j)^{α_j} is odd for each j ∈ {1, 2, . . ., r}, we see that 2^r | δ*_2(z) in Z. Again, 2^r | t in Z, which is a contradiction because t = I*_2(z) < 2^r. The rings O_Q(√−1) and O_Q(√−3) are two of the most heavily-studied quadratic rings, so it is not surprising that they prove to be particularly interesting for our purposes. We proceed to prove a theorem about 2-powerfully t-perfect numbers in each of these rings.
Suppose z is a 2-powerfully unitarily t-perfect number in O_Q(√d) for d ∈ K\{−7} and t ∈ N\{1}, so that δ*_2(z) = tN(z). Hence, if we assume that 3 ∤ N(z) in Z, then µ must be even. Therefore, under the assumption that 3 ∤ N(z) in Z, we may obtain a corresponding restricted factorization of z. When M. V. Subbarao and L. J. Warren studied unitary perfect numbers, which are positive integers n that satisfy σ*(n) = 2n, they noticed that all known unitary perfect numbers are multiples of 3. They then gave four conditions that any unitary perfect number not divisible by 3 would need to satisfy [4]. Using the information discussed in the preceding paragraph, we will find analogues of the conditions that Subbarao and Warren established. Suppose that x is as above and that N(x) is odd. Then, for any prime π, we have N(π)^{ρ_π(x)} ≡ 1 (mod 6). Furthermore, there exists a prime divisor π_0 of x such that N(π_0) ≡ 5 (mod 6), and x has an even number of nonassociated prime factors.
We end with a note about unitarily t-perfect numbers. If d ∈ K and t ≥ 2 is an integer, then we can find a unitarily t-perfect number in O_Q(√d) for every unitarily t-perfect number in Z. We formalize and generalize this notion in the following theorem. Theorem 2.6. Let b > 1 be a rational number, and let d ∈ K. Let U(b) = {n ∈ N : σ*(n) = bn}, and let V_d(b) = {z ∈ A(d) : I*_1(z) = b}. Then there exists an injective function g : U(b) → V_d(b). Proof. If p is an integer prime that does not split in O_Q(√d), let g(p) = p. If p is an integer prime that splits in O_Q(√d) as p = ππ̄, where π ∈ A(d), let g(p) be the associate of π² in A(d). Now, for any positive integer n ∈ U(b) with canonical prime factorization n = ∏_{j=1}^{r} p_j^{α_j}, let g(n) be the associate of ∏_{j=1}^{r} g(p_j)^{α_j} that lies in A(d). It is easy to see, using the fact that O_Q(√d) is a unique factorization domain, that g is an injection. To show that the range of g is a subset of V_d(b), note that |g(p)| = p for all integer primes p. Therefore, with n as before, we have I*_1(g(n)) = ∏_{j=1}^{r} (1 + |g(p_j)|^{−α_j}) = ∏_{j=1}^{r} (1 + p_j^{−α_j}) = σ*(n)/n = b.
Ideas for Further Research
With Theorem 1.2 as evidence, we see that the functions δ*_n and I*_n have some fairly nice properties that we may exploit for further research. We pose some ideas here.
First, we note that we could generalize the ideas presented in this paper to other quadratic rings. However, if we choose to continue working with imaginary quadratic rings that are unique factorization domains, we could still look at analogues of many other objects defined in the integers. For example, one might wish to investigate analogues of superperfect numbers and unitary superperfect numbers. One could also look at analogues of biunitary or even infinitary divisor functions in quadratic rings.
There are also plenty of questions left open related to the ideas discussed in this paper. For example, the author has made no attempt to actually find n-powerfully unitarily t-perfect numbers, so it is likely that many could be quite easy to discover. One question of particular interest is the following. For a given d ∈ K, what are the rational numbers b > 1 for which the function g : U(b) → V d (b) defined in the proof of Theorem 2.6 is bijective? | 2014-12-09T20:19:37.000Z | 2014-12-09T00:00:00.000 | {
"year": 2014,
"sha1": "7dc42e291bad6c712ac543d4b88118dc503b0147",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "7dc42e291bad6c712ac543d4b88118dc503b0147",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
265036758 | pes2o/s2orc | v3-fos-license | Catheter ablation in patients with paroxysmal atrial fibrillation and absence of structural heart disease: A meta-analysis of randomized trials
Introduction Rhythm control strategy in paroxysmal atrial fibrillation (AF) can be performed with antiarrhythmic drugs (AAD) or catheter ablation (CA). Nevertheless, a clear overview of the percentage of freedom from AF over time and complications is lacking. Therefore, we conducted a meta-analysis of randomized controlled trials (RCTs) comparing CA versus AAD. Methods We searched databases up to 5 May 2023 for RCTs focusing on CA versus AAD. The study endpoints were atrial tachyarrhythmia (AT) recurrence, progression to persistent AF, overall complications, stroke/TIA, bleedings, heart failure (HF) hospitalization and all-cause mortality. Results Twelve RCTs enrolling 2393 patients were included. CA showed a significantly lower AT recurrence rate at one year [27.4 % vs 56.3 %; RR: 0.45; p < 0.00001], at two years [39.9 % vs 62.7 %; RR: 0.56; p = 0.0004] and at three years [45.7 % vs 80.9 %; RR: 0.54; p < 0.0001] compared to AAD. Furthermore, CA significantly reduced the progression to persistent AF [1.6 % vs 12.9 %; RR: 0.14; p < 0.00001] with no differences in overall complications [5.9 % vs 4.5 %; RR: 1.27; p = 0.22], stroke/TIA [0.6 % vs 0.6 %; RR: 1.10; p = 0.86], bleedings [0.4 % vs 0.6 %; RR: 0.90; p = 0.84], HF hospitalization [0.3 % vs 0.7 %; RR: 0.56; p = 0.37] and all-cause mortality [0.4 % vs 0.5 %; RR: 0.78; p = 0.67]. Subgroup analysis between radiofrequency and cryo-ablation or considering RCTs with CA as first-line treatment showed no significant differences. Conclusion CA demonstrated lower rates of AT recurrence over time, as well as a significant reduction in the progression from paroxysmal to persistent AF, with no difference in terms of energy source, complications, and clinical outcomes.
Introduction
Atrial fibrillation (AF) is the most common sustained cardiac arrhythmia, affecting a significant proportion of the global population.
Rhythm control strategy in AF can be performed with antiarrhythmic drugs (AAD) or catheter ablation (CA).
Trials comparing the two strategies showed reduced AF recurrence and less progression from paroxysmal to persistent AF with CA [6,7,9,13]. However, a clear overview of long-term freedom from AF recurrences is lacking. Furthermore, no difference in terms of complications and clinical outcomes was observed between the two groups in patients with paroxysmal AF without SHD [6,11,14,16,18].
Therefore, we conducted a meta-analysis of randomized trials with the aim of comparing freedom from AF, progression to persistent AF, overall complication rate and clinical outcomes between CA and AAD.
Data sources and searches
We systematically searched the Medline, Embase and Scopus electronic databases for studies published from the time of inception to May 5th 2023 and focusing on CA versus AAD in paroxysmal AF patients. Two investigators (A.P. and G.V.) independently performed searches including the following terms: "ablation and drug therapy paroxysmal atrial fibrillation". Detailed information on our literature search strategy is available in the Expanded Methods in the Supplemental Material. The study protocol was designed before the start of the literature search but was not registered in any database.
Study selection
The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement for reporting systematic reviews and meta-analyses was used in this study [19].
Only RCTs were included to reduce the intrinsic bias due to the nature of non-randomised observational studies.
The studies had to fulfil the following criteria to be included in the analysis: (1) presence of a direct comparison between CA and AAD, (2) adult (>18 years old) study population, (3) ≥6-month follow-up, (4) paroxysmal AF, (5) preserved left ventricular ejection fraction (LVEF) and (6) reported 1 or more clinical outcomes. Observational studies, unpublished data, conference papers, case reports, editorials, reviews, expert opinions, and non-English studies were excluded.
Data extraction and quality appraisal
Two investigators (A.P. and G.V.) extracted data from each study using a standardized protocol and reporting forms. Two reviewers (A.P. and G.V.) independently assessed the quality items, and disagreements were resolved by consensus. The quality of individual studies was assessed by two investigators (A.P. and G.V.) using the Cochrane Risk of Bias Tool version 2.0.
Study endpoints
The study endpoints were: atrial tachyarrhythmia (AT) recurrence, defined as any recurrent atrial arrhythmia (AF, atrial flutter or atrial tachycardia) lasting longer than 30 s at follow-up after the initial 2-3 month blanking period post-ablation [20].
Progression to persistent AF was defined as the first AT occurrence lasting 7 days or longer or lasting 48 h to 7 days but necessitating cardioversion for termination.
HF hospitalization was defined as HF relapse-related admission, excluding hospitalization for AT recurrence. All-cause mortality was defined as death resulting from cardiovascular and other causes.
Statistical analysis
Descriptive statistics are presented as means and standard deviations (SD) for the continuous variables or as a number of cases (n) and percentages (%) for the dichotomous and categorical variables. The Mantel-Haenszel risk ratio (RR) model was used to summarize the data for binary outcomes among the treatment arms. Summary estimates and 95 % confidence intervals (CI) were reported for the continuous variables as the standardized mean difference. The heterogeneity across studies was evaluated by using the Chi², Tau², and Higgins I² statistics, and the random effects model of DerSimonian and Laird was used. Subgroup analyses were performed to assess potential sources of heterogeneity according to ablation energy [cryoablation (Cryo) and radiofrequency ablation (RF)] and first-line treatment with CA. Publication bias was assessed by graphical inspection of funnel plots. The statistical analysis was performed using Review Manager (RevMan), version 5.4.1, Copenhagen, Denmark: Nordic Cochrane Centre, the Cochrane Collaboration, 2020.
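For illustration, a self-contained sketch of DerSimonian-Laird random-effects pooling of risk ratios on the log scale is given below. The event counts are invented for demonstration and are not data from the included RCTs; RevMan performs the equivalent computation internally.

import math

# DerSimonian-Laird random-effects pooling of risk ratios (log scale).
def dl_pooled_rr(studies):
    """studies: list of (events_tx, n_tx, events_ctrl, n_ctrl)."""
    y, v = [], []
    for a, n1, c, n2 in studies:
        y.append(math.log((a / n1) / (c / n2)))     # log risk ratio
        v.append(1/a - 1/n1 + 1/c - 1/n2)           # its sampling variance
    w = [1 / vi for vi in v]                        # fixed-effect weights
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))  # Cochran's Q
    k = len(y)
    tau2 = max(0.0, (q - (k - 1)) / (sum(w) - sum(wi**2 for wi in w) / sum(w)))
    w_re = [1 / (vi + tau2) for vi in v]            # random-effects weights
    mu = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return math.exp(mu), (math.exp(mu - 1.96*se), math.exp(mu + 1.96*se)), tau2

rr, ci, tau2 = dl_pooled_rr([(30, 100, 55, 100), (25, 90, 50, 95), (40, 120, 70, 115)])
print(f"RR = {rr:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}, tau^2 = {tau2:.3f}")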
Overall complications
All RCTs reported data on overall complications. The most frequent adverse event in the CA group was pericardial effusion/tamponade (1.7 %), while in the AAD group it was syncope (0.8 %). A summary of the overall complications is shown in Table 2. No differences were found in the overall complication rate between CA and AAD [5.9 % vs 4.5 %; RR: 1.27 (95 % CI: 0.87-1.85); p = 0.22; I² = 5 %] (Fig. 3 B) or in the subgroup analyses (Supplemental Fig. 1 E, Supplemental Fig. 2 E).
Publication bias
A graph and summary of the Cochrane Risk of Bias tool for RCTs are reported in Fig. 5. Visual inspection of the funnel plots showed no evidence of publication bias (Supplemental Fig. 3).
Discussion
The aim of this updated meta-analysis was to evaluate the efficacy and safety of CA compared to AAD in the treatment of paroxysmal AF in patients without SHD, including only RCTs. Specifically, CA was shown to reduce AF recurrence rates at 1 year, 2 years, and 3 years, and the progression from paroxysmal to persistent AF, with no difference in terms of safety and HF hospitalizations compared to AAD.
Furthermore, in the subgroup analyses, CA confirmed its superior efficacy regardless of the ablation energy employed, preserving a safety profile similar to AAD.
In addition, first-line CA of AF was confirmed in our meta-analysis as superior to AAD therapy in short- and long-term rhythm control, without reduced safety.
Our study, including 2393 patients, represents the meta-analysis with the largest number of RCTs comparing CA and AAD. In fact, previous recent meta-analyses included about half of the studies and patients and did not perform subgroup analyses by ablation energy and first-line approach [21,22]. Our meta-analysis provides robust evidence supporting the superiority of CA over AAD therapy in terms of long-term AF recurrence rates across all time points evaluated. These findings highlight the long-term efficacy of CA in maintaining sinus rhythm and suggest higher efficacy in the management of paroxysmal AF compared to AAD.
Furthermore, our analysis revealed that CA significantly reduces the progression from paroxysmal to persistent AF. This is a notable finding, as the progression to persistent AF is associated with worse clinical outcomes and increased morbidity [5,23]. Early AF ablation may alter the natural course of the disease, as pulmonary venous isolation, modulation of the autonomic nervous system and electro-anatomical substrate modification may favour a substantial reversal of adverse structural atrial remodelling [24]. Therefore, the ability of CA to prevent or delay this progression represents a significant advantage over AAD therapy, leading to improved patient outcomes and avoiding AF ablation in the setting of persistent AF, where it is less effective than in paroxysmal AF [25].
Our meta-analysis did not find any significant differences in AF recurrence rates or complications when comparing RF and Cryo technologies for catheter ablation. This finding confirms that the choice of energy source does not significantly impact the efficacy or safety of the procedure, as already observed in the FIRE AND ICE trial [26]. Nevertheless, evidence suggests that new technologies may be more efficient [27,28]. In addition, the development of new non-thermal, tissue-selective energies such as pulsed field ablation may provide excellent efficacy and safety [29].
Although no difference has been shown in terms of complications between CA and AAD, the meta-analytic cohort primarily consisted of relatively young individuals experiencing symptoms, without evident underlying SHD. For instance, the median age in the CABANA trial [30] differed significantly from the current study's population (67.5 years versus 60 years), with 15 % of patients having heart failure and 82 % of patients with a CHA2DS2-VASc score ≥ 2. Nevertheless, a recent sub-analysis of EAST-AFNET 4 [31] showed that early rhythm control in patients with a CHA2DS2-VASc score ≥ 4 reduced the primary composite efficacy outcome of cardiovascular death, stroke or hospitalisation for worsening heart failure or acute coronary syndrome, but not in patients with a CHA2DS2-VASc score < 4. Furthermore, the primary safety outcome (death, stroke or serious adverse events of rhythm control therapy) was not different between study groups in patients with a CHA2DS2-VASc score ≥ 4 but occurred more often in patients with a CHA2DS2-VASc score < 4 randomised to early rhythm control. However, looking at the serious adverse events, these seem to be mainly due to AAD rather than CA (torsade de pointes, drug toxicity, drug-induced bradycardia, drug-induced atrioventricular block and syncope). These findings suggest that rhythm control therapy is associated with a better net clinical benefit in patients with multiple comorbidities than in patients with fewer comorbidities; indeed, few HF hospitalisations and deaths occurred in our meta-analysis. Moreover, in terms of complications, AAD might carry a comparable, if not higher, risk of adverse effects in patients with fewer comorbidities than in those with more comorbidities. However, as CA and AAD are associated with different types of complications, it is not possible to make a direct comparison.
Limitations
It is important to consider certain limitations of our meta-analysis. None of the studies specified blinding of patients, and it is possible that the post-ablation medical management differed between RCTs. Furthermore, some studies were open label and had unblinded outcome assessment. However, though patients and researchers were not subjected to blinding regarding treatment allocation and outcome, this was not considered sufficient to determine that these studies are at high risk of bias with regard to the outcomes of interest in this meta-analysis, which are relatively resistant to bias due to lack of blinding. Our meta-analysis reported high heterogeneity for AT recurrence at follow-up, without reduction at subgroup analysis. In part, this could be due to the methodology used for assessing AT recurrences in the different studies (loop recorder, 24-h Holter ECG, periodic scheduled visits), which could potentially misestimate AT recurrence rates. Additional ablation outside the PVs, performed in some RCTs, could have affected the clinical outcomes [32]. The RCTs included here enrolled patients from 2006 to 2022, involving temporal changes in both CA and drug therapy.
Conclusions
In conclusion, our meta-analysis of RCTs provides compelling evidence supporting the superiority of CA over AAD therapy for the treatment of paroxysmal AF. CA demonstrated lower rates of AT recurrence at 1 year, 2 years, and 3 years, as well as a significant reduction in the progression from paroxysmal to persistent AF, with no difference in safety compared with AAD. Importantly, the choice between RF and Cryo technologies did not affect efficacy or safety, underlining that both technologies are equally effective and safe.
Fig. 1. Evidence search and selection per the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). RCT: randomized controlled trial. * Medline, Embase, Scopus.
Fig. 5. (A) Methodological quality graph and (B) methodological quality summary of the Cochrane Risk of Bias tool for randomized controlled trials.
Table 1. Study baseline characteristics of patients included in the analysis.
Table 2. Summary of overall complications in the included studies. | 2023-11-07T16:01:52.067Z | 2023-11-05T00:00:00.000 | {
"year": 2023,
"sha1": "64e7dfe0fce067a18ad5b884a63182d81d0bb692",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.ijcha.2023.101292",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "8836beabc1b19e19b7c812fb31ba90d0f9df9ba9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
253526694 | pes2o/s2orc | v3-fos-license | Proteoglycans play a role in the viscoelastic behaviour of the canine cranial cruciate ligament
Proteoglycans (PGs) are minor extracellular matrix proteins, and their contributions to the mechanobiology of complex ligaments such as the cranial cruciate ligament (CCL) have not been determined to date. The CCLs are highly susceptible to injuries, and their extracellular matrix comprises higher PGs content than the other major knee ligaments. Hence these characteristics make CCLs an ideal specimen to use as a model in this study. This study addressed the hypothesis that PGs play a vital role in CCL mechanobiology by determining the biomechanical behaviour at low strain rates before and after altering PGs content. For the first time, this study qualitatively investigated the contribution of PGs to key viscoelastic characteristics, including strain rate dependency, hysteresis, creep and stress relaxation, in canine CCLs. Femur-CCL-tibia specimens (n = 6 pairs) were harvested from canine knee joints and categorised into a control group, where PGs were not depleted, and a treated group, where PGs were depleted. Specimens were preconditioned and cyclically loaded to 9.9 N at 0.1, 1 and 10%/min strain rates, followed by creep and stress relaxation tests. Low tensile loads were applied to focus on the toe-region of the stress-strain curves where the non-collagenous extracellular matrix components take significant effect. Biochemical assays were performed on the CCLs to determine PGs and water content. The PG content was ∼19% less in the treated group than in the control group. The qualitative study showed that the stress-strain curves in the treated group were strain rate dependent, similar to the control group. The CCLs in the treated group showed stiffer characteristics than the control group. Hysteresis, creep characteristics (creep strain, creep rate and creep compliance), and stress relaxation values were reduced in the treated group compared to the control group. This study suggests that altering PGs content changes the microstructural organisation of the CCLs, including water molecule contents which can lead to changes in CCL viscoelasticity. The change in mechanical properties of the CCLs may predispose to injury and lead to knee joint osteoarthritis. Future studies should focus on quantitatively identifying the effect of PG on the mechanics of intact knee ligaments across broader demography.
Introduction
Ligaments are essential to knee joint stability, defined by their material composition, contributing to their complex mechanical characteristics (Girgis et al., 1975; Butler et al., 1978a; Arnoczky, 1983; Frank, 2004). Knee ligaments are strong fibrous tissues consisting of cellular material and extracellular matrix (ECM) proteins such as collagen type I (Frank, 2004). The viscoelastic properties of ligaments are thought to come from the viscous and elastic properties of the collagen fibres (Puxkandl et al., 2002), the interaction of collagen fibres with other non-collagenous components in the ECM such as elastin and proteoglycans (PGs) (Frank, 2004; Henninger et al., 2010, 2013; Smith et al., 2011, 2014), and water movement (Chimich et al., 1992). In particular, the interactions between sulphated glycosaminoglycans (sGAGs), which are components of PGs, such as dermatan and chondroitin sulphate, with collagen fibrils in porcine medial collateral ligaments have been reported to increase permeability and decrease peak stress (Henninger et al., 2010).
PGs comprise 0.2%-5% of ligament dry weight and are either non-aggregating [small leucine-rich proteoglycans (SLRPs) such as decorin and biglycan] or large aggregating PGs (versican and aggrecan) (Gillard et al., 1977; Amiel et al., 1984; Hey et al., 1990; Kharaz et al., 2018). They are comprised of a protein core and sulphated GAGs. Approximately 90% of the total PGs in fresh collateral ligaments are decorin, and the remaining PGs include biglycan, aggrecan, and versican (Ilic et al., 2005). The interactions between PGs and collagen fibrils differ, such that decorin binds to collagen fibrils through its core protein, whereas biglycan and the large PGs, aggrecan and versican, bind to collagen fibrils through their sGAG chains (Pogány et al., 1994; Scott and Thomlinson, 1998; Kharaz et al., 2018). Interactions between the sGAG chains form interfibrillar PG bridges, which are believed to contribute to the mechanical characteristics of collagenous tissues (Scott and Thomlinson, 1998; Scott, 2003). Several studies have shown that decorin contributes to the organisation and mechanical properties of soft tissues such as the skin (Danielson et al., 1997; Eshel and Lanir, 2001; Reed and Iozzo, 2002) and tendons (Robinson et al., 2005, 2017; Connizzo et al., 2013). The role of the sGAG chains in tissue mechanics appears to be most important at low stress and strain levels (Eshel and Lanir, 2001; Eckert et al., 2013). Eshel and Lanir (2001) reported that PGs control rat dorsal skin's response at low strain levels, where the collagen fibres are still crimped (the toe region of the stress-strain behaviour). Similarly, the sGAG content in porcine aortic heart valve leaflets may provide a damping mechanism reducing aortic valve leaflet flutter when the leaflet is not under high tensile stress, reportedly due to strong fibre-fibre and fibre-matrix interactions at low stress levels (Eckert et al., 2013). In addition to the cross-linking function of sGAGs, the sGAG chains may affect the hydration of soft tissues due to their highly negative charges (Amiel et al., 1995; Thornton et al., 2001; Murienne et al., 2015). Water molecules bound to sGAGs can act as a lubricant between collagen fibrils (Scott, 2003) and facilitate the sliding of the fibrils during tensile stretches (Puxkandl et al., 2002), hence affecting the viscoelastic properties of ligaments (Chimich et al., 1992; Thornton et al., 2001; Gautieri et al., 2012).
Most available studies have focused on the contribution of PGs to articular cartilage mechanics. Nissinen et al. (2021) and Mäkelä et al. (2015) studied changes in the articular cartilage's poroelastic material parameters and fluid flow, which are associated with PG content. A computational modelling study representing the heterogeneous internal architecture of cartilaginous tissues such as knee menisci captured the poroelastic behaviour of the tissue (Elmukashfi et al., 2022). However, the previous computational modelling studies on cartilage and other tissues have not explicitly investigated the role of PGs in tissue mechanics, nor have they included ligaments' microstructure. Hence, investigations of the mechanical role of sGAGs in the knee ligaments are limited. The previous literature focused on the medial collateral ligaments (Lujan et al., 2007, 2009; Henninger et al., 2010). These studies reported that the interactions between sGAGs and collagen fibrils had no impact on the viscoelastic, tensile and resistance properties (Lujan et al., 2007, 2009). However, the permeability of the medial collateral ligament was found to increase with the reduction of sGAGs (Henninger et al., 2010), which could be an indication that sGAGs contribute to the mechanical properties of knee ligaments by maintaining tissue hydration. A limitation of these studies was that extracted sections of the ligaments, which might have altered the microstructural organisation of the specimens, were examined instead of intact ligaments with their bone attachments tested ex vivo (Henninger et al., 2010).
In the canine knee joint, the cranial cruciate ligament (CCL) is the ligament most susceptible to injury (Moses et al., 2012) and contains a higher PG content than the other knee ligaments (Rumian et al., 2007; Kharaz et al., 2018), which likely contributes to the structural integrity of the tissue. Therefore, we hypothesise that a reduction in PG content would affect the contribution of ECM composition to the structural integrity of the CCL, resulting in altered ligament mechanics, which may ultimately lead to CCL injury and knee joint osteoarthritis (Quasnichka et al., 2005). Thus, this study aimed to qualitatively investigate the role of PGs in the viscoelastic properties (strain rate dependency, hysteresis, creep and stress relaxation) of intact femur-CCL-tibia complexes in an ex vivo test environment.
2 Materials and methods
2.1 Specimen storage, preparation, and purpose
Paired disease-free knee joint cadavers (n = 6) from skeletally mature Staffordshire bull terrier canines were obtained with full ethical permission from the Veterinary Research Ethics Committee [(VREC65), University of Liverpool]. Inclusion criteria were knee joints from skeletally mature animals with a bodyweight >20 kg. The entire knee joints were frozen at −20°C until required and defrosted at room temperature for extracting the CCL as a femur-CCL-tibia complex (Readioff, 2017; Readioff et al., 2020a; Readioff et al., 2020b). CCL complexes from the right knee joints were not treated with the PG depletion enzyme (control group), whilst the CCL complexes from the left knee joints were treated to deplete the PGs content (treated group).
Specimen length and cross-sectional area
The CCL lengths were determined between the insertion and origin of the ligaments at the cranial, caudal, lateral and medial planes using Vernier callipers (D00352, Duratool, Taiwan) accurate to ±10 µm (Vasseur et al., 1991;Comerford et al., 2005). The mean values were used in calculating the engineering strains (Readioff, 2017;Readioff et al., 2020b). The method by Goodship and Birch was used to measure the CCLs' cross-sectional area (CSA) (Goodship and Birch, 2005). In brief, alginate dental impression paste (UnoDent, UnoDent Ltd., United Kingdom) was used to make a mould around the CCL, which was used to create replicas of the ligament. The replicas were cut in half, and the surface of the replicas showing middle CSA was determined using ImageJ (a public domain Java image processing program) (Readioff, 2017;Readioff et al., 2020b). The CSA values were then used in the calculations of engineering stress.
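The replica-based CSA measurement lends itself to a simple image-analysis implementation. The sketch below outlines one way it could be scripted, analogous to the ImageJ measurement described above; it is an illustration only, and the file name, grey-level threshold and pixel calibration are assumed values rather than parameters used in this study.

```python
# Minimal sketch: estimate the mid-substance cross-sectional area (CSA) from a
# photograph of the cut alginate replica surface, mirroring the ImageJ step.
# The file name, threshold and pixel scale are illustrative assumptions.
import numpy as np
from PIL import Image

MM_PER_PIXEL = 0.02   # assumed calibration, e.g. from a ruler in the image
THRESHOLD = 100       # assumed grey level separating replica from background

def replica_csa_mm2(image_path: str) -> float:
    grey = np.asarray(Image.open(image_path).convert("L"), dtype=float)
    mask = grey < THRESHOLD            # dark replica on a light background
    return float(mask.sum()) * MM_PER_PIXEL ** 2

if __name__ == "__main__":
    print(f"CSA ~ {replica_csa_mm2('replica_midsection.png'):.1f} mm^2")
```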
Chondroitinase treatment protocol
The CCLs from both groups were immersed for 1 h at room temperature (20°C) in 20 ml buffer solution (15 ml of 20 mM Tris pH 7.5, 150 mM NaCl, 5 mM CaCl2) with protease inhibitors (1 tablet of mini-cOmplete per 10 ml of buffer, SIGMA-ALDRICH/Roche, United States) (Lujan et al., 2009). Chondroitinase ABC (ChABC) 0.25 IU/ml (SIGMA-ALDRICH, United States) enzyme was dissolved in 0.01% bovine serum albumin (BSA), and samples were incubated in this solution for 3 h prior to the mechanical tests as previously described (Lujan et al., 2009). This process was performed to reduce PG contents, and it was based on our preliminary work. Our preliminary work showed that approximately 80% of PGs in sectioned CCLs were digested within 3 h of incubation [Supplementary Materials (Supplementary Table S1 and Supplementary Figure S1)]. Control and treated groups were preserved during the mechanical tests in a custom-built tank filled with 600 ml of the buffer solution and protease inhibitors at room temperature (20°C) (1 tablet of cOmplete Protease Inhibitor Cocktail per 50 ml of buffer, SIGMA-ALDRICH/ Roche, United States).
Mechanical testing protocol
Each femur-CCL-tibia complex examined was attached to an Instron 3366 (Instron, Norwood, MA) material testing machine fitted with a 10 N load cell (Instron 2530-428, ±0.025 N accuracy) using a custom-built stainless steel ducktail clamp and rig (Readioff et al., 2020a; Readioff et al., 2020b). A pre-load of 0.1 N was applied, followed by five load-unload preconditioning cycles to a maximum load of 9.9 N at a 10%/min strain rate (Butler et al., 1978b; Fung, 1993; Savelberg et al., 1993; Provenzano et al., 2002). Subsequently, mechanical tests were performed examining strain rate, creep and stress relaxation behaviours. The strain rate tests consisted of 1) two loading cycles at 0.1%/min strain rate; 2) three loading cycles at 1%/min strain rate; and 3) two loading cycles at 10%/min strain rate. These loading cycles were applied successively, followed by two cycles for creep testing and two for stress relaxation testing.
The creep behaviour of the CCLs was determined by subjecting the ligament to tensile loads of 4.9 N and 9.9 N that remained constant for 15 min each. For the stress relaxation tests, the CCLs were extended by applying a 9.9 N tensile load and monitoring the gradual decrease in tissue stress over 15 min whilst the ligament extension was held constant. Loading and unloading during creep and stress relaxation tests were performed at a 1%/min strain rate. A recovery period of 6 min was applied between each loading-unloading cycle to minimise the effect of the strain history of previous cycles on subsequent behaviour (Readioff et al., 2020a;Readioff et al., 2020b).
Following completion of the mechanical tests, the middle section of each CCL was extracted in preparation for the biochemical assays to determine water and sGAG contents in both control and treated groups (Farndale et al., 1986).
Biochemical assays
sGAG content quantification
The CCLs in the control and treated groups were digested for 48 h with 10 units/ml papain in 100 mM sodium acetate, 2.4 mM ethylenediaminetetraacetic acid (EDTA), and 5 mM cysteine hydrochloride (HCl) at 60°C (Farndale et al., 1986). A dimethylmethylene blue (DMMB, 1,9-dimethylmethylene blue) dye-binding assay was used to determine the sGAG content of the CCLs (Farndale et al., 1986; Kharaz et al., 2018). Subsequently, 250 µl of DMMB dye was added to 40 µl duplicates of the papain-digested CCLs, and the mixture was immediately analysed at a wavelength of 570 nm. Shark chondroitin sulphate over a concentration range of 0-75 μg/ml was used as a standard, and sGAG content was calculated by comparison with the standard line (Lujan et al., 2009; Kharaz et al., 2018).
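As a rough illustration of how the DMMB read-out is converted to an sGAG content, the sketch below fits a linear standard curve and interpolates a sample reading; all absorbance values, digest volumes and masses are invented placeholders, not data from this study.

```python
# Minimal sketch of the DMMB evaluation: fit a linear standard curve from shark
# chondroitin sulphate standards (0-75 ug/ml, absorbance at 570 nm) and
# interpolate sample sGAG concentrations. All numbers are illustrative.
import numpy as np

std_conc = np.array([0, 10, 25, 50, 75.0])           # ug/ml standards
std_abs = np.array([0.02, 0.11, 0.27, 0.52, 0.78])   # assumed absorbances

slope, intercept = np.polyfit(std_conc, std_abs, 1)  # A = slope*C + intercept

def sgag_ug_per_ml(absorbance: float) -> float:
    return (absorbance - intercept) / slope

sample_abs = 0.35                  # assumed duplicate-averaged reading
conc = sgag_ug_per_ml(sample_abs)  # ug/ml in the papain digest
digest_volume_ml = 1.0             # assumed digest volume
dry_mass_mg = 2.5                  # assumed dry mass of the CCL section
sgag_percent_dry = conc * digest_volume_ml / (dry_mass_mg * 1000) * 100
print(f"{conc:.1f} ug/ml -> {sgag_percent_dry:.2f}% of dry weight")
```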
Water content quantification
The water content of the CCL in both groups was expressed as the mass of water per unit mass of the wet ligament, i.e. water content = (wet mass − dry mass)/wet mass (Eq. 1), as described previously (Lujan et al., 2009; Kharaz, 2015). Initially, the CCLs were left to thaw at room temperature (20°C), and the wet mass was measured. Subsequently, these samples were freeze-dried overnight, and the dry masses of the CCLs were then measured.
Viscoelastic data analysis
Analyses of the load-deformation data were performed using MATLAB (MATLAB R2020b), and the mean and standard deviation of the analysed data were reported. Eqs 2, 3 were used to calculate engineering stress and strain values (Haut and Little, 1969;Woo et al., 1981). Subsequently, the secant modulus for the maximum applied stress describing the stiffness of the CCLs was determined (Eq. 4). Numerical integration (using the trapezoidal rule) of the load-unload stress-strain curves was used to estimate the stored energy in the ligaments (Eq. 5). The hysteresis (dissipated energy) was then calculated from the difference between the stored energy during loading and unloading cycles (Elsheikh et al., 2008) (Eq. 6). Creep behaviour was determined from the strain-time curves and creep compliance function (Eq. 7). The stress relaxation behaviour was determined from the stress-time curves and stress relaxation modulus (Eq. 8). Stress values were normalised by the peak stress at the test start time (t = 0), allowing the comparison of relaxation behaviour across ligaments (Fung, 1993). The stress relaxation rate was calculated at 15 min.
σ = F / CSA (Eq. 2)
where σ is stress in MPa, F is the applied load in N, and CSA is the cross-sectional area at the middle of the CCL in mm².
ε = ΔL / L₀ (Eq. 3)
where ε is strain, ΔL is the change in length in mm (ΔL = L₁ − L₀), L₀ is the initial length and L₁ is the deformed length of the CCL in mm.
E_secant = σ_max / ε_max (Eq. 4)
where E_secant is the secant modulus in MPa, and σ_max and ε_max are the maximum stress and strain values of the load cycle.
U = Σ_{k=1}^{N} [(σ_k + σ_{k−1}) / 2] Δε_k (Eq. 5)
where U is the stored energy in MPa, N is the resolution of the trapezoidal partition, and Δε_k is the length of the k-th interval (Δε_k = ε_k − ε_{k−1}).
Hysteresis = U_Loading − U_Unloading (Eq. 6)
where U_Loading and U_Unloading represent the stored energy during the loading and unloading of the ligaments, respectively, in MPa.
J(t) = ε(t) / σ₀ (Eq. 7)
where J(t) is the creep compliance function in MPa⁻¹, and σ₀ is the initial applied stress.
G(t) = σ(t) / ε₀ (Eq. 8)
where G(t) is the relaxation modulus function in MPa, and ε₀ is the initial applied strain.
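The original analysis was implemented in MATLAB; the Python sketch below reproduces the same quantities (Eqs 2-8) for illustration only. The input arrays would be load-deformation data exported from the testing machine; no values or function names here are taken from the study.

```python
# Minimal sketch of the load-deformation post-processing described above (Eqs 2-8).
import numpy as np

def engineering_stress(force_N, csa_mm2):              # Eq. 2, MPa (N/mm^2)
    return force_N / csa_mm2

def engineering_strain(length_mm, initial_length_mm):   # Eq. 3
    return (length_mm - initial_length_mm) / initial_length_mm

def secant_modulus(stress, strain):                     # Eq. 4
    return np.max(stress) / np.max(strain)

def stored_energy(stress, strain):                      # Eq. 5, trapezoidal rule
    return abs(np.sum((stress[1:] + stress[:-1]) / 2.0 * np.diff(strain)))

def hysteresis(stress_load, strain_load, stress_unload, strain_unload):  # Eq. 6
    return stored_energy(stress_load, strain_load) - stored_energy(stress_unload, strain_unload)

def creep_compliance(strain_t, sigma_0):                # Eq. 7
    return strain_t / sigma_0

def relaxation_modulus(stress_t, eps_0):                # Eq. 8
    return stress_t / eps_0

def normalised_relaxation(stress_t):                    # normalised by peak stress at t = 0
    return stress_t / stress_t[0]
```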
Specimen characteristics
The CCL specimens (n = 6 paired knee joints) were of mixed gender (female = 1 and male = 5), and the bodyweight of the cadavers was in the range of 21.5-29.4 kg (mean ± standard deviation: 25.76 ± 3.12 kg).
Specimen length and cross-sectional area
The CCLs' mean lengths and CSA ranged from 14.58 to 19.25 mm (mean ± standard deviation: 16.42 ± 1.33 mm) and from 16.07 to 31.57 mm² (mean ± standard deviation: 23.79 ± 5.08 mm²), respectively. The length and CSA of each CCL can be found in Supplementary Tables S2, S3.
Biochemical assays
sGAG content
The depletion process reduced sGAGs by approximately 19% in the treated group (Table 1). The sGAG content of the CCLs in the treated group ranged from 1.7 to 4.7% (mean ± standard deviation: 3.1 ± 1.1%) as a percentage of dry weight, whereas the range was 2.2-6.6% (mean ± standard deviation: 3.8 ± 1.6%) in the control group.
Water content
The depletion process reduced water content by approximately 4% in the treated group (Table 2). The water content of the CCLs in the treated group ranged from 65.7 to 73.3% (mean ± standard deviation: 69.4 ± 3.0%), whereas the range was 64.7-77.4% (mean ± standard deviation: 72.3 ± 4.2%) in the control group.
Mechanical properties
Stress-strain
The experimental setup was designed to focus on the toe-region of the stress-strain curves where the extracellular matrix, including the PGs, is expected to affect the CCL mechanics (Eshel and Lanir, 2001;Lujan et al., 2009;Henninger et al., 2010;Eckert et al., 2013). The stress-strain behaviour of the CCLs in control and treated groups followed a similar pattern, showing strain rate dependencies ( Figures 1A-C). For example, the stress-strain behaviour of specimens in both groups showed an increase in stiffness with increasing strain rates from 0.1 to 1 and then to 10%/min.
The CCLs in the treated group exhibited higher stress values than those in the control group. During loading at a 0.1%/min strain rate, the mean stress at 5% strain was 0.5 MPa in the control and 0.65 MPa in the treated group (Figure 1A). Similar patterns were found during loading at 1 and 10%/min strain rates: at 1%/min, the mean stress at 5% strain was 0.71 MPa in the control and 0.77 MPa in the treated group (Figure 1B), and at 10%/min it was 0.75 MPa in the control and 0.83 MPa in the treated group (Figure 1C).
Secant modulus
The secant moduli for the maximum applied stress of the CCLs increased with increasing strain rate in both groups (Figure 2). Similar to the patterns observed in the stress-strain behaviour, the mean secant moduli were higher in the treated than in the control groups. At 0.1%/min strain rate, the mean secant moduli were 9.6 MPa in the control and 11 MPa in the treated groups. Similar patterns were found during loading at 1 and 10%/min strain rates. At 1%/min strain rate, the mean secant moduli were 11.5 MPa in the control and 12.2 MPa in the treated groups, while at 10%/min strain rate, the mean secant moduli were 12.1 MPa in the control and 12.4 MPa in the treated groups.
Hysteresis
The hysteresis decreased with increasing strain rates, suggesting strain rate dependencies of the CCLs in both groups ( Figure 3). Hysteresis was consistently higher in the control than in the treated group across all three strain rates. The mean hysteresis at 0.1, 1, and 10%/min were 3.4, 1.8, and 1.2 MPa in the control and 1.8, 1.2, and 0.8 MPa in the treated groups.
Creep
The creep strain-time curves showed increased strain with time in both control and treated groups ( Figures 4A,B). The creep strains were higher in the control than in the treated groups during creep loads of 4.9 and 9.9 N. For example, after 15 min of 4.9 N creep load, the mean creep strain was recorded at 0.32% in the control and 0.2% in the treated groups.
Creep compliance, indicating strain per unit stress, was higher in both groups when the CCLs were subjected to a constant creep load of 4.9 N than 9.9 N ( Figures 4C,D). The CCLs in the control showed higher creep compliance than the treated group during loading at 4.9 and 9.9 N.
The creep strain rate increased when the load was increased from 4.9 to 9.9 N; this was evident in both groups ( Figure 5). Creep strain rates were higher in the control than in the treated groups. During constant loads of 4.9 and 9.9 N, mean creep rates were 0.36 × 10 −3 and 0.5 × 10 −3 % in the control and 0.22 × 10 −3 and 0.3 × 10 −3 % in the treated groups.
Stress relaxation
The normalised stress relaxation-time curves illustrate greater stress relaxation in the control group than in the treated groups ( Figure 6A). For example, at 15 min of relaxation, the mean values for normalised stress relaxation were 84% in the control and 89% in the treated groups. However, the stress relaxation rate was slightly lower in the control than in the treated groups, with the mean values being 0.41 × 10 −3 MPa/s in the control and 0.42 × 10 −3 MPa/s in the treated groups ( Figure 6B).
The stress relaxation modulus, indicating stress variations under the imposed constant unit strain, was higher in the treated than control groups ( Figure 6C). The mean values for the relaxation modulus at t = 0 were 0.12 MPa in the control and 0.18 MPa in the treated groups.
Figure 3. Hysteresis (dissipated energy) of the cranial cruciate ligaments (CCLs) during cyclic loading at varying strain rates for the control group (black lines), where proteoglycans were not depleted, and the treated group (red lines), where proteoglycans were depleted. The box plots show individual specimen values (grey dots) and means (red asterisks) during loading at 0.1%/min, 1%/min, and 10%/min strain rates; outliers are indicated with a red plus sign.
Figure 2. Secant modulus values determined for the cranial cruciate ligaments (CCLs) in the control group (black line), where proteoglycans were not depleted, and the treated group (red line), where proteoglycans were depleted, at varying strain rates. The box plots show individual specimen values (grey dots) and means (red asterisks) during loading at 0.1%/min, 1%/min, and 10%/min strain rates; outliers are indicated with a red plus sign.
This study focused on qualitatively observing and describing changes in the viscoelastic characteristics of the CCLs due to changes in PG content. We hypothesised that altering the PG content in the CCLs, which changes the composition of the ligaments, might affect their viscoelastic characteristics. This compositional change is clinically significant because it can affect ligament mechanics, possibly predisposing to CCL injury and knee joint osteoarthritis (Quasnichka et al., 2005). Therefore, this study qualitatively analysed the contribution of PGs to the viscoelastic behaviour of the intact femur-CCL-tibia complex, particularly the role of PGs in the strain rate dependency, hysteresis, creep, and stress relaxation of the CCLs. Here, we found alterations in the viscoelastic characteristics of the CCLs due to PG reduction. Changes were observed in tissue stiffness, hysteresis, stress relaxation and creep. However, strain rate dependency and the response to increased creep load were unaffected by the reduction in PG content.
The design of experiments and mechanical tests were performed based on our previous work (Readioff et al., 2020a;Readioff et al., 2020b) and preliminary investigations (Comerford et al., 2014). The mechanical tests focused on investigating the toe-region of the stress-strain curves, where the collagen fibres are crimped, and sGAG chains are believed to have a significant mechanical contribution to tissues (Eshel and Lanir, 2001;Lujan et al., 2009;Eckert et al., 2013), hence in this study, we examined loads up to 10 N at three slow strain rates (Haut and Little, 1969;Readioff et al., 2020b). The ligaments were not loaded to failure as this study focused on the toe-region of the stress-strain curves. Additionally, the loading protocol optimised the use of specimens for multiple studies, including the regional distribution of PGs across the ligament (Kharaz et al., 2018). Future studies could explicitly investigate the sensitivity of PGs content to the mechanics of the ligaments during higher loading conditions, including failure loads.
Figure 4. Creep behaviour of the cranial cruciate ligaments (CCLs): creep strain-time curves at (A) 4.9 N and (B) 9.9 N applied loads, and creep compliance curves at (C) 4.9 N and (D) 9.9 N applied loads. The black line (mean) and grey shading (standard deviation) show results from the control group (proteoglycans not depleted), and the red line (mean) and light red shading (standard deviation) show results from the contralateral CCLs in the treated group (proteoglycans depleted).
The CCL properties such as length and the cross-sectional area in the mid-ligament regions were determined and used in the calculations of engineering stress and strain values, and these properties were in a range similar to those previously reported for canine CCLs (Butler et al., 1983;Comerford et al., 2005).
When determining the incubation process of the CCLs in chondroitinase ABC (ChABC), our preliminary time-course study showed that after 3 h of incubation in 0.25 IU/ml ChABC, sGAG content was significantly reduced, by approximately 82.3% [Supplementary Materials (Supplementary Figure S1)], similar to previous studies (Lujan et al., 2007, 2009; Henninger et al., 2010). However, unlike the current study, where intact femur-CCL-tibia complexes were used, the preliminary investigation was carried out on CCLs that were transversely cut to extract their middle sections. The transverse cut of the CCLs in the preliminary investigation might have disrupted the synovial sheath, allowing better infiltration of the enzyme and a greater reduction in sGAG content. The CCL is surrounded by vascularised synovial tissue (the synovial sheath), which protects the ligament's core tissue from exposure to synovial fluid and hence from degradation (Amiel, 1990; Petersen and Tillmann, 1999; Chen et al., 2019). The CCLs can be described as fibre-reinforced matrices, and approximately 70% of the CCL is water. The water content in the CCLs is associated with sGAG chains, and these chains bind water molecules because of their highly negative charges (Amiel et al., 1995; Thornton et al., 2001; Murienne et al., 2015). In this study, reducing PG content reduced water content by approximately 4%. The change in water content may indicate that when the sGAG chains were reduced, the matrix lost water-binding sites, reducing the CCL's water-retaining capacity.
The stress-strain behaviour of the CCLs, including strain rate dependency, was unaffected by the reduction in PG content (Figure 1). This outcome agrees with the human medial collateral ligament study, where the removal of dermatan sulphate showed no effect on the quasi-static tensile properties (Lujan et al., 2007). Previous studies reported the stress-strain behaviour of untreated CCLs under strain rates ranging from 0.1 to 36.8%/min (Haut and Little, 1969; Readioff et al., 2020b). Stresses at 5% strain were 2.1 MPa at 1.7%/min, 2.5 MPa at 2.6%/min, and 3.5 MPa at 10.8%/min (Haut and Little, 1969), higher than the results reported in this study; for example, stresses of the CCLs in the control group at 5% strain ranged between 0.5 and 0.75 MPa at strain rates up to 10%/min. However, stresses in human ACLs at 5% strain were approximately 0.05 MPa in an old osteoarthritic knee when mechanically tested at 1,400%/min (Peters et al., 2022), which is lower than the values reported in this study, possibly due to the effect of joint degeneration on the ligaments.
The secant modulus, showing CCL stiffness, was higher in the treated than control groups (Figure 2). The increase in secant modulus may be linked with the decrease in water content due to changes in the sGAG chains. The decrease in sGAG chains reduces the water-binding sites, reducing water molecules and means for tissue lubrication, increasing internal friction and tissue stiffness (Puxkandl et al., 2002;Scott, 2003;Legerlotz et al., 2013). The modulus of untreated CCLs has been reported as approximately 260 MPa under failure load at a 6,000%/min strain rate equivalent to 2.3 MPa at 0.5% of the applied stress (Butler et al., 1983). The equivalent modulus is lower than the secant moduli reported in this paper for the CCLs in the control group, and they ranged between 10 and 12 MPa at strain rates up to 10%/min. The marked variability in the reported values is likely due to variations in testing techniques and specimen demographics, making it challenging to perform comprehensive comparisons.
Hysteresis of the CCLs in both groups was strain rate dependent, and this characteristic did not change after PG reduction (Lujan et al., 2007; Readioff et al., 2020b). The treated CCLs had lower hysteresis than those in the control group, suggesting that PG depletion altered the microstructural organisation of the tissue, affecting energy storage (Figure 3). The mean hystereses in the control group were approximately 3.5, 1.8, and 1.2 MPa at 0.1, 1 and 10%/min, and these values are comparable to the previously reported hysteresis ranging from 3 to 1.5 MPa at strain rates up to 10%/min where similar test methodologies were adopted (Readioff et al., 2020b).
The decrease in creep after PG reduction (Figures 4, 5) can result from decreased hydration (Thornton et al., 2001).
Figure 5. Creep rate of the cranial cruciate ligaments (CCLs) during loading at 4.9 N and 9.9 N in the control group (black lines), where proteoglycans were not depleted, and in the treated group (red lines), where proteoglycans were depleted. The box plots show individual specimen values (grey dots) and means (red asterisks); outliers are indicated with a red plus sign.
With more water inside the tissue, creep compliance increases because water provides greater freedom for fibrillar movement (Thornton et al., 2001). Similarly, Murienne et al. (2015) associated a slower creep rate with increased interfibrillar friction after sGAG removal. Reducing the sGAG chains reduced normalised stress relaxation (Figure 6A), and at the low strain level, stress relaxation likely occurred through sliding between collagen fibres (Screen et al., 2013). The results of this study support the notion that the higher water content in the control group permits greater relative movement and, hence, greater relaxation of the microstructural components of the ligaments (Chimich et al., 1992). Larger and faster stress relaxation was observed in mouse tail tendon decorin knockouts (Elliott et al., 2003) and in mouse tendon fascicles (Legerlotz et al., 2013). However, Lujan et al. (2009) showed a small and negligible increase in stress relaxation after reducing sGAG in the human medial collateral ligament. In this study, the change in relaxation behaviour after treatment may predispose the CCLs to fatigue damage, highlighting the critical role of sGAGs in tissue mechanics, possibly through the maintenance of ligament hydration.
Our study had several limitations, one of which was the small reduction in PGs in the intact femur-CCL-tibia complexes. Future studies could overcome this limitation by injecting the enzyme into the intact femur-CCL-tibia complexes to reduce the PG content further. This process could be adapted from the current practice for reducing PGs in cadaveric articular cartilage (Dixon et al., 2021; Culbert et al., 2022). Future work could also image the pattern of collagen-proteoglycan interactions in the ligaments using previously established methods (Scott, 1985; Raspanti et al., 2000) and focus on such interactions under dynamic loading conditions.
Figure 6. Stress-relaxation behaviour of the cranial cruciate ligaments (CCLs): (A) normalised stress relaxation-time curves in the control group (black lines: mean, grey shading: standard deviation), where proteoglycans were not depleted, and in the treated group (red lines: mean, light red shading: standard deviation), where proteoglycans were depleted. (B) Stress relaxation rate in the control (black lines) and treated (red lines) groups; the box plots show individual specimen values (grey dots) and means (red asterisks), with outliers indicated by a red plus sign. (C) Stress relaxation modulus in the control (black lines: mean, grey shading: standard deviation) and treated (red lines: mean, light red shading: standard deviation) groups.
The approximation methods adopted to measure the cross-sectional area and length of the CCLs might be another limitation. However, these methods were selected because of their non-destructive approach. Further investigation with a larger number of specimens will allow for quantitative (statistical) measures and an improved understanding of the effect of cadaveric demography (i.e., age, gender and body weight) on the mechanical properties of the CCLs (Woo et al., 1990a; Woo et al., 1990b; Duval et al., 1999).
In conclusion, to the authors' knowledge, this study is the first to qualitatively describe the contribution of PGs to key viscoelastic characteristics of the intact femur-CCL-tibia complex in canine knee joints. We have shown that reducing sGAG chains in the CCLs increases stiffness and stress relaxation modulus and decreases creep strain, creep compliance and creep rate. However, strain rate sensitivity and the sensitivity to increased creep load were unaffected by the reduction of the sGAG chains. This study lacks a sufficient sample size and, consequently, a statistical analysis from which the significance of the results could be established. Our results therefore indicate a tendency rather than a statistically confirmed behaviour: sGAGs appear essential in maintaining microstructural organisation, including water retention in the tissue, which in turn contributes to the viscoelasticity of the CCLs and, ultimately, knee joint stability.
Data availability statement
The original contributions presented in the study are included in the article's Supplementary Materials, further inquiries can be directed to the corresponding authors.
Ethics statement
The animal study was reviewed and approved by the Veterinary Research Ethics Committee [(VREC65), Institute of Veterinary Science, University of Liverpool, Liverpool, United Kingdom].
Author contributions
RR, EC, and AE: conceived and designed the experiments. RR: performed the experiments, analysed the data, prepared figures and tables, and authored paper drafts. BG: analysed the data. YK: performed biochemical assays. EC: conducted an extensive preliminary study. All authors reviewed and approved the final draft of the paper.
Funding
This work was supported by the School of Engineering at the University of Liverpool, Liverpool, United Kingdom; the Wellcome Trust Institutional Strategic Support Fund, University of Liverpool (WT 204822/Z/16/Z); and the National Institute for Health Research (NIHR) Biomedical Research Centre based at Moorfields Eye Hospital NHS Foundation Trust and the UCL Institute of Ophthalmology, London, United Kingdom.
"year": 2022,
"sha1": "340e83f10ee513164dcefc8dc244080c412cfc27",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "340e83f10ee513164dcefc8dc244080c412cfc27",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
CHARACTERIZATION OF A NEW Ag-SELECTIVE ELECTRODE BASED ON N3S5-THIAAZACROWN ETHER AS NEUTRAL IONOPHORE
A N3S5-azathiacrown ether has been newly synthesized in high yield. An electrode using this compound as ionophore shows high selectivity and sensitivity towards Ag+. The effect of plasticizers and ion-exchanger on the properties of the electrode was also studied, and a detection limit of 3.8 × 10−8 M Ag+ was obtained. This sensor has a fast response time of <12 s and performs satisfactorily over a wide pH range of 3.6-8.2. The electrode was also successfully used as an indicator electrode in the titration of Cl− in tap water.
INTRODUCTION
With the development of industry, many severe pollution incidents have been caused by heavy metals. Selective and sensitive detection of heavy and transition metal ions is of great importance from the ecotoxicological point of view. 1 Silver has been widely used in the photographic film production industry and the electrical and electronic industries. Silver salts have been successfully applied to disinfect water for drinking and recreation purposes and as dental and pharmaceutical antibacterial agents, owing to their unique germicidal properties. However, silver can inactivate sulfhydryl enzymes and combine with the amine, imidazole and carboxyl groups of various metabolites. An excess of silver is toxic to fish and microorganisms at concentrations as low as 0.17 μg/L. 2 Because of the increasing demand for silver compounds in industry and daily life, severe contamination of the environment by silver is rising. Therefore, the sensitive and selective determination of silver is very important.
Crown ethers are powerful tools for the separation, enrichment and analysis of ionic species and have enjoyed widespread use in various areas of science and technology. 3 One of their most successful applications is in analytical chemistry [11,12]. However, the applications of crown ethers are often limited by the difficulty of their synthesis. Taking this into consideration, we report here a simple, high-yield method for the synthesis of an azathiacrown ether. This ligand is expected to form selective complexes with transition metal ions and to give improved selectivity for silver ions. Potentiometric evaluation of plasticized poly(vinyl chloride) (PVC) membrane electrodes using this compound as Ag+ ionophore has been carried out in terms of selectivity coefficients and detection limit for Ag+.
Melting points were taken on a WRS-1B digital melting-point apparatus. Infrared (IR) spectra were recorded on KBr pellets using a Perkin-Elmer 1430 spectrometer. Nuclear magnetic resonance (NMR) spectra were measured with a Bruker WM-300 instrument, and chemical shifts are given in ppm from tetramethylsilane (TMS). Mass (MS) spectra were recorded on a Thermo TSQ Quantum Access with an Agilent 1100.
The synthesis route of the azathiacrown ether L is shown in Scheme 1.
Scheme 1. The synthesis route of azathiacrown ether L.
Synthesis of compound 2
2-Aminothiophenol (1.25 g, 10 mmol) was added to sodium ethoxide, formed by the addition of sodium (0.23 g, 10 mmol) to absolute ethanol (50 mL), under a nitrogen atmosphere. A degassed solution of 1 (5 mmol) in absolute ethanol (20 mL) was added dropwise to the refluxing sodium thiophenolate solution with constant stirring. After the reaction was finished, the mixture was cooled to room temperature and filtered. The solvent was removed under reduced pressure, and the solid thus obtained was recrystallized from ethanol as yellow crystals. Yield: 75%, m.p. 129.4-129.
Synthesis of compound 3
A solution of chloroacetic anhydride (50 mmol) in CH2Cl2 (50 mL) was added through a dropping funnel to a stirred solution of 2 (16 mmol) in CH2Cl2 (250 mL) at 0-5 °C over a 1 h period. The mixture was stirred overnight under a nitrogen atmosphere at room temperature. At the end of this period, the mixture was filtered. The filtrate was washed with saturated aqueous NaHCO3, and the solvent was removed under reduced pressure. The resulting mass was washed with ethanol and diethyl ether and then dried in vacuum. The crude product was purified by recrystallization from acetonitrile to give compound 3 as a dark yellow solid. Yield: 65%, m.p. 167.8-168.
Synthesis of macrocyclic compound L
A solution of 3 (0.5 mmol) in DMF (50 mL) and a solution of 2,2'-thiodiethanethiol (0.5 mmol) in DMF (50 mL) were added simultaneously, over 1.5 h, to DMF (50 mL) containing 2 mmol of anhydrous Na2CO3. The whole process was carried out under a nitrogen atmosphere with vigorous stirring overnight at room temperature. The resulting mixture was filtered and the solvent was removed under reduced pressure. The remaining residue was washed in turn with water, ethanol and diethyl ether and then dried in vacuum. The solid product was collected and crystallized from DMF to give pure compound L as yellow crystals. Yield: 85%, m.p. 201.3
Membrane preparation
The membrane components, comprising appropriate amounts of PVC, plasticizer (o-NPOE, DOS, DBP or DOP), NaTFPB and ionophore (240 mg in total), were dissolved in 3.0 mL of THF, stirred vigorously for at least 2 h, and then poured into a glass ring (30 mm i.d.) fixed on a glass plate. The solvent was evaporated overnight at room temperature to give a transparent membrane of 180 μm thickness. For each ISE, a disk of 7 mm diameter was punched from the membrane and glued to a plasticized PVC tube (i.d. 6 mm, o.d. 9 mm) with THF/PVC slurry. The tube was then filled with inner filling solution (0.1 M AgNO3) and conditioned for 1 day in 1.0×10−3 M AgNO3. For long-term measurements, the electrodes were kept in the dark to avoid photolysis of AgNO3.
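For readers reproducing the cocktail preparation, the short sketch below converts a target weight-percent composition into the masses dissolved in THF for a 240 mg membrane cocktail; the example percentages are illustrative assumptions, not the optimised composition reported here.

```python
# Minimal sketch: convert a target membrane composition (weight %) into the
# masses weighed into 3.0 mL THF for a 240 mg cocktail. Percentages are assumed.
TOTAL_MG = 240.0
composition_wt_pct = {
    "PVC": 33.0,
    "o-NPOE": 65.0,
    "ionophore L": 1.5,
    "NaTFPB": 0.5,
}

assert abs(sum(composition_wt_pct.values()) - 100.0) < 1e-6
for component, pct in composition_wt_pct.items():
    print(f"{component:12s}: {TOTAL_MG * pct / 100:6.2f} mg")
```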
Potential measurements
Membrane potentials were measured with a Model PXSJ-216 digital ion analyzer (Shanghai Instruments) in solutions mechanically stirred with a magnetic stirrer at room temperature (13 °C; potentials were corrected for temperature) in the galvanic cell: Hg/Hg2Cl2 | KCl (sat.) | 1.0 M LiOAc || sample solution || ISE membrane | 0.1 M Ag+ | Ag/AgCl. Activity coefficients were calculated according to the Debye-Hückel approximation, and EMF values were corrected for liquid-junction potentials with the Henderson equation. The Hg/Hg2Cl2 reference electrode with double junction was used with 1.0 M LiOAc as the salt-bridge electrolyte.
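The activity correction mentioned above can be illustrated with a short calculation using the Debye-Hückel (Güntelberg) form log10(γ) = −A·z²·√I/(1+√I); the constant A = 0.511 corresponds to 25 °C, whereas the measurements here were temperature-corrected at 13 °C, so the numbers below are indicative only.

```python
# Minimal sketch of a Debye-Hueckel activity-coefficient estimate for dilute AgNO3.
import math

def ionic_strength(concs_and_charges):
    return 0.5 * sum(c * z * z for c, z in concs_and_charges)

def activity_coefficient(z, ionic_str, A=0.511):   # A ~ 0.511 at 25 degC (assumption)
    sqrt_I = math.sqrt(ionic_str)
    return 10 ** (-A * z * z * sqrt_I / (1 + sqrt_I))

c_ag = 1.0e-3                                  # mol/L AgNO3
I = ionic_strength([(c_ag, +1), (c_ag, -1)])   # Ag+ and NO3-
gamma = activity_coefficient(+1, I)
print(f"I = {I:.2e} M, gamma(Ag+) = {gamma:.3f}, a(Ag+) = {gamma * c_ag:.2e} M")
```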
Selectivity measurements
For selectivity measurements, the electrodes, with 10×10−3 M NaCl as inner filling solution, were first conditioned in 1.0×10−2 M NaNO3 solution overnight. Potential measurements were then made in the respective nitrate solutions. The sequence of the sample ions was: Li+, H+, Na+, K+, Mg2+, Ca2+, Cu2+, Cd2+, Pb2+, Hg2+ and Ag+. For detection of Hg2+, solutions were adjusted to pH 4 using 0.1 M HNO3 to avoid precipitation. All measurements were performed in triplicate. The selectivity coefficients were calculated from the potential values according to the separate solution method, assuming theoretical slopes.
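A sketch of the separate solution calculation with theoretical slopes is given below; the potential values used are hypothetical placeholders, and only the formula reflects the method described above.

```python
# Minimal sketch of the separate solution method with theoretical (Nernstian) slopes:
# log K(pot, Ag/J) = (E_J - E_Ag)/s_Ag + (1 - z_Ag/z_J) * log10(a),
# where both ions are measured at the same activity a.
import math

R, F = 8.314, 96485.0
T = 286.15                  # ~13 degC, as in the potential measurements

def log_k_pot(E_primary_mV, E_interf_mV, z_primary, z_interf, activity):
    s_mV = 2.303 * R * T / (z_primary * F) * 1000.0   # theoretical slope, mV/decade
    return (E_interf_mV - E_primary_mV) / s_mV + (1 - z_primary / z_interf) * math.log10(activity)

# Example: Ag+ vs a divalent interferent at a = 1.0e-2 (hypothetical potentials)
print(f"log K = {log_k_pot(250.0, -30.0, 1, 2, 1.0e-2):.1f}")
```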
Influence of membrane composition
The sensitivity, selectivity, working range and stability of an ISE depend on many factors, such as the nature of the ionophore, the addition of ion-exchanger and the nature of the plasticizer. 12,14 Therefore, membranes with different compositions were prepared and their potentiometric response characteristics evaluated. The effect of plasticizers on Ag+-selective electrodes based on the different crown ethers is shown in Table 1. It is clear that o-NPOE is a more effective plasticizer than the others in preparing the Ag+-ISEs, which can be explained by the fact that o-NPOE-plasticized PVC membranes have much higher dielectric constants than DOS-, DBP- and DOP-based membranes. 15 In addition, o-NPOE-plasticized membranes dissolve the ion-association complexes and adjust both the permittivity and the mobility of the ion-exchanger sites to give the highest possible selectivity and sensitivity.
Table 1. Influence of the nature of the plasticizer on the characteristics of Ag+-ISEs (electrode no., membrane composition in w%, linear range in M, detection limit in M, and slope in mV/decade).
As is well known, lipophilic anionic additives (NaTFPB) can act as charge-compensating counter ions in the membrane and thus facilitate the process of ion charge transduction. Accordingly, the effect of the amount of NaTFPB in the Ag+-selective membranes on the electrode characteristics was also investigated. The amount of NaTFPB was altered while maintaining the same amounts of ionophore, PVC and plasticizer (o-NPOE) in the membranes (Table 2). The results show that the electrode based on ionophore L and NaTFPB in a mole ratio of 2:1 presents the best potential response, which indicates that compound L forms a 1:1 complex with silver ion in the membrane phase. The time-dependent potentiometric response of the Ag+-ISE based on L with the optimal composition is given in Fig. 1; the electrode reached its equilibrium response in a very short time (<12 s).
Fig. 1. Time-dependent potentiometric response of the Ag+-ISE based on L (sensor No. L A) at Ag+ activities of (a) 3.0×10−2, (b) 3.0×10−3, (c) 3.0×10−4, (d) 3.0×10−5, (e) 3.0×10−6, (f) 3.0×10−7 and (g) 3.0×10−8 M.
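The slope and detection limit values quoted for each membrane composition can be obtained from calibration curves in the usual way; the sketch below illustrates one such evaluation with invented EMF readings, taking the detection limit as the cross-point of the extrapolated linear response and the low-activity plateau.

```python
# Minimal sketch: estimate the Nernstian slope and detection limit from a
# calibration curve (EMF vs log activity). All EMF values are illustrative.
import numpy as np

log_a = np.array([-8, -7, -6, -5, -4, -3])            # log10 of Ag+ activity
emf_mV = np.array([102, 105, 130, 186, 243, 300.0])   # assumed readings

lin = log_a >= -6                                      # assumed linear range
slope, intercept = np.polyfit(log_a[lin], emf_mV[lin], 1)
baseline = emf_mV[log_a < -7].mean()                   # low-activity plateau
log_dl = (baseline - intercept) / slope                # intersection point

print(f"slope = {slope:.1f} mV/decade, detection limit ~ 10^{log_dl:.1f} M")
```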
Potentiometric selectivity of silver electrodes
The influence of interfering ions on the response behaviour of an ISE is usually described in terms of selectivity coefficients. The selectivity coefficients, K^pot_(Ag,J), of the Ag+-ISE were determined using Bakker's method to eliminate the influence of the inherent sensitivity limit on the response toward discriminated ions. 16 Table 3 shows the selectivity coefficients of the Ag+-ISE based on ionophore L. It can be seen that the electrode based on L gives better selectivity and sensitivity toward Ag+ than toward the other cations.
Moreover, the complex formation constant, lgβIL, of Ag+ was calculated according to the proposed method. 17 The lgβIL value for the sensor based on ionophore L was 9.46. This result also indicates that ligand L has good affinity and selectivity towards Ag+.
Effect of pH
The influence of pH on the response of the Ag+-ISE based on ionophore L (sensor No. L A) in 1.0×10−3 M and 1.0×10−4 M AgNO3 was studied by adjusting the pH of the test solutions with 0.1 M HNO3 and 0.1 M NaOH (Fig. 2). The best performance of the Ag+-ISE based on ionophore L is obtained in the pH range of 3.6-8.2.
Life time study of the proposed electrode
The degradation of the sensitivity in the polymeric membrane, resulting from ionophores leaching from the membrane, is dependent upon the lipophilicity and chemical stability of the ionophore.The proposed electrodes based on ionophore L can be used over a period of 3 months without significant drift in potentials, because of the high lipophilicity of crown ether.However, it is necessary to point out that the electrode should be stored in dark to avoid the photolysis of AgNO 3 when not in use.
Analytical applications
The fabricated electrode based on ionophore L was successfully used as the indicator electrode for the titration of chloride ion in lab tap water. Titration curves of Cl− with 0.1 M AgNO3 solution are illustrated in Fig. 3. From the end-point, a Cl− concentration of 1.52 × 10−3 M was obtained, which is below the limit set by the standard (GB5749-85: 4.63 × 10−3 M). In addition, the response characteristics of the proposed PVC membrane electrode are compared with those of the best Ag+-ISEs reported earlier (Table 4). It is apparent that the proposed electrode is superior to the existing electrodes with regard to working concentration range and low detection limit.
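The end-point evaluation can be automated by locating the maximum slope of the titration curve and applying C(Cl−) = C(AgNO3)·V(end-point)/V(sample); the sketch below shows the calculation with an invented curve, not the data behind Fig. 3.

```python
# Minimal sketch: locate the argentometric end-point from the EMF-volume curve
# and convert the titrant volume to the chloride concentration of a 50 mL sample.
import numpy as np

v_ml = np.linspace(0.0, 1.5, 16)                     # volume of 0.1 M AgNO3 added
emf_mV = np.array([120, 122, 125, 128, 132, 137, 144, 158,
                   230, 302, 316, 324, 330, 334, 337, 340.0])

dE_dV = np.gradient(emf_mV, v_ml)
v_eq = v_ml[np.argmax(dE_dV)]                        # end-point volume, mL

c_titrant = 0.1                                      # M AgNO3
v_sample = 50.0                                      # mL tap water
c_cl = c_titrant * v_eq / v_sample                   # M chloride
print(f"end-point at {v_eq:.2f} mL -> [Cl-] = {c_cl:.2e} M")
```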
CONCLUSIONS
In summary, we present here the synthesis and characterization of a new azathiacrown ether L, obtained in good yield and employed in a PVC-based membrane electrode for the potentiometric determination of Ag+. The sensor based on L shows good selectivity, a wide working concentration range (3.0 × 10−7-3.0 × 10−3 M), a low detection limit (3.8 × 10−8 M) and a fast response time (<12 s), with a slope of 56.9 mV/decade of activity over the pH range 3.6-9.0. The sensor based on L has also been used as an indicator electrode in the titration of Cl− in lab tap water.
Fig. 3. Titration curve of 50 mL tap water with 0.1 M Ag+ as the titration reagent, obtained using the Ag+-ISE based on ionophore L (sensor No. L A).
Table 2. Influence of NaTFPB concentrations on the characteristics of Ag+-ISEs.
Table 3. Potentiometric selectivity coefficients and response slopes obtained with the separate solution method for o-NPOE/PVC membranes based on ionophore L.
Table 4. Comparative analysis of the proposed electrode with previously reported electrodes.
"year": 2011,
"sha1": "4f3315c790582a6ebb51f7db9dc1318a654d4745",
"oa_license": "CCBYNC",
"oa_url": "http://www.scielo.cl/pdf/jcchems/v56n1/art12.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "4f3315c790582a6ebb51f7db9dc1318a654d4745",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
Impact of Donor Cytomegalovirus Serology and duration of Prophylaxis on Follow-Up Strategy in Lung Transplant Recipients
Introduction
Cytomegalovirus (CMV) infection is the most prevalent opportunistic infection after lung transplantation. It is reported in between 20% and 50% of cases after discontinuation of prophylaxis, depending on the series [1]. CMV usually remains dormant in the lymphatic system, and its reactivation, due to immunosuppressive therapy, may have considerable adverse consequences in the immunocompromised host [2]. Its many and varied effects include inflammation, increased morbidity, and decreased graft and patient survival.
These consequences include direct effects, caused by the development of invasive disease, and indirect effects, due to overgrowth of other opportunistic infections, malignancy or the development of bronchiolitis obliterans [3]. Established scientific evidence indicates that prophylaxis during periods of high-dose immunosuppressive therapy is beneficial in the first months after transplantation, as well as during acute rejection episodes. Without prophylactic treatment, the incidence of infection is very high and the risk of disease increases in the first months after surgery [4]. Recent studies provide some reliable data offering guidance for both general long-term prophylaxis and more tailored strategies based on pre-emptive therapy. A randomized placebo-controlled multicenter trial showed that 12 continuous months of valganciclovir substantially reduced CMV rates during the first 18 months after lung transplant (CMV occurred in 4% of patients), as compared to 3 months of valganciclovir (CMV occurred in 32% of patients), the standard of care at the start of the trial [5]. The authors concluded that extended prophylaxis reduced CMV infection, disease, and disease severity without increased ganciclovir resistance or toxicity. Finlen Copeland et al. also showed a sustained benefit of twelve months of valganciclovir, as compared to 3, over the course of 3.9 years of follow-up in a single-center study [6]. However, there is still no consensus on the protocol to be followed or the duration of treatment [7], and some authors advocate shorter treatments (3 months) followed by a pre-emptive strategy to protect against CMV infections, prevent resistance and avoid valganciclovir toxicity. Some studies and meta-analyses focused on solid-organ transplant recipients found no differences in efficacy between these two approaches [8,9], while adverse effects such as leukopenia have been reported in patients on prophylaxis [10]. For these reasons, the 2011 consensus document of the Spanish Transplantation Infection Study Group (GESITRA) of the Spanish Society of Infectious Diseases and Clinical Microbiology (SEIMC) recommends short courses of prophylactic treatment, followed by pre-emptive therapy [11]. This approach has been the standard of care in our group since 2003.
Our goal in this study was to establish the effects of a 3-month course of valganciclovir prophylaxis followed by pre-emptive therapy on the CMV infection rate in our series, and to determine the number of recurrences, and patient and graft survival, taking into account patient/donor serologies.Another aim was to describe the side effects associated with treatment and the emergence of resistance reported in our center.
Patients
The protocol was approved by the Research Ethics Committee of Hospital Universitario de A Coruña. A retrospective study was conducted of patients undergoing lung transplantation between 2003 and 2010 at Hospital Universitario de A Coruña who received oral valganciclovir for CMV prophylaxis. Patients who died during the first 3 months after transplantation were excluded.
Definitions
Our cases were defined in accordance with the GESITRA-SEIMC consensus document [12].Thus, "active infection" (CMV infection) was considered when the viral genome, viral proteins or total virus was detected in any tissue or body fluid.Infection was "primary" when virus was detected in a previously seronegative patient."Recurrent infection" was the renewed detection of CMV at least 4 weeks after the infection had been controlled, due to either reactivation of the same endogenous latent strain or reinfection with a new CMV strain."CMV disease" was considered when the infected patient presented symptoms or signs of viral syndrome or organ involvement.The term "viremia" was reserved for isolation of the virus in blood cultures, and "antigenemia" was determined when the viral antigen pp65 was directly detected in leukocytes.Finally, "viral syndrome" was characterized by fever (≥ 38 °C) of at least 2 days' duration within a 4-day period, with neutropenia, thrombocytopenia, elevated transaminases and CMV detection.
General care protocol for lung transplant recipients
The patients were selected for inclusion in the transplantation waiting list according to the International Society for Heart and Lung Transplantation criteria [13].According to this protocol, patients perform physical and respiratory exercises during the waiting period.In addition, candidates for single-lung transplantation receive antifungal prophylaxis with weekly amphotericin B lipid complex via aerosol.Subjects with a history of repeated infections also receive tobramycin before surgery [14].
The surgical technique used by our group has not undergone substantial changes since the beginning of our program in 1999, and is similar to the procedure recently described by the Toronto group [15].Triple therapy immunosuppressive treatment was administered, including basiliximab for induction, oral or intravenous cyclosporine for maintenance initiation, and azathioprine and decreasing doses of corticosteroids in all cases.Cyclosporine and/or azathioprine were switched to tacrolimus and/or mycophenolate after repeated acute rejection or persistent rejection [14,16], and trimethoprim with sulfamethoxazole was given on alternate days for 9 months to prevent Pneumocystis carinii infection [17,18].
Episodes of acute rejection (ISHLT grade ≥ A2) were treated with 1gr prednisone boluses for 3 days combined with increased immunosuppressive treatment if necessary and the appropriate prophylactic therapy was resumed if it had been previously suspended.
Post-transplant prophylaxis for CMV was ganciclovir (10 mg/kg/day, IV) during the first 15 days after surgery for all patients, followed by oral administration of valganciclovir 900 mg/day in a single dose. We never use anti-CMV immunoglobulin as prophylaxis in these patients. The treatment regimen was adjusted between months 3 and 6 of the treatment period, according to risk group. CMV viremia was detected by shell vial assay (between 2003 and June 2007) or whole-blood quantitative polymerase chain reaction (PCR, 2007-2009). CMV detection was routinely performed during oral prophylaxis with valganciclovir, on a weekly basis during the 2-3 months after surgery and at all scheduled follow-up visits thereafter. CMV testing was also performed if infection was suspected, regardless of donor/recipient (D/R) serology. PCR was considered positive when >700 copies/mL was determined at the beginning of the study. Neither positive PCR nor antigenemia occurred during prophylaxis in this 7-year experience, and these tests are no longer performed routinely in our department unless CMV infection is suspected due to disease manifestations.
In case of asymptomatic infection or CMV syndrome, valganciclovir 900mg was administered twice a day [6].If the patient developed disease, ganciclovir (10mg/kg/day, IV) was prescribed.In case of resistance, foscarnet was used as second-line treatment, with or without immunoglobulins.Hyperinflation during the long-term postoperative period was treated with surgical or bronchoscopic volume reduction [19].
Variables and statistical analysis
All data were collected from the electronic clinical records of the patients included. CMV events included infection, CMV syndrome, and disease. Death was attributed to CMV if the patient died of CMV disease during the course of treatment. Patients with a CMV event who presented increases in viral load and/or clinical worsening of their disease during appropriate treatment with ganciclovir were defined as clinically resistant. Dichotomous variables were compared using the χ2 test or Fisher's exact test when the sample size did not allow the former. Survival analysis was performed using the Kaplan-Meier method and curves were compared using the log-rank test. The impact of CMV, bacterial and fungal infections, and acute and chronic rejection on survival was calculated using multivariate analyses. The effects of prophylaxis and its duration (Cox regression) were also analyzed. P-values were considered significant at <0.05. The PASW Statistics 18 program was used for statistical analysis.
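The statistical workflow described above (Kaplan-Meier curves, log-rank comparison, Cox regression) was run in PASW Statistics 18; as an illustration only, the same steps can be expressed in Python with the lifelines package, as sketched below on an invented data frame whose column names and values are assumptions, not study data.

```python
# Minimal sketch of the survival-analysis steps using the 'lifelines' package.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "months_to_cmv_or_censor": [3, 6, 9, 12, 24, 30, 8, 15, 20, 36],
    "cmv_event":               [1, 1, 0, 1,  0,  0, 1,  0,  1,  0],
    "donor_positive":          [1, 1, 0, 1,  0,  1, 1,  0,  0,  0],
    "age":                     [55, 61, 48, 63, 39, 44, 58, 52, 60, 47],
})

kmf = KaplanMeierFitter()
kmf.fit(df["months_to_cmv_or_censor"], event_observed=df["cmv_event"])

dpos, dneg = df[df["donor_positive"] == 1], df[df["donor_positive"] == 0]
lr = logrank_test(dpos["months_to_cmv_or_censor"], dneg["months_to_cmv_or_censor"],
                  event_observed_A=dpos["cmv_event"], event_observed_B=dneg["cmv_event"])
print(f"log-rank p = {lr.p_value:.3f}")

# Penalized Cox model keeps the fit stable on this tiny illustrative sample.
cph = CoxPHFitter(penalizer=0.1)
cph.fit(df, duration_col="months_to_cmv_or_censor", event_col="cmv_event")
print(cph.summary[["coef", "exp(coef)", "p"]])
```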
Results
A total of 139 eligible patients with lung transplant were identified during the inclusion period, of whom 60% were men and 40% were women.The most frequent indications for transplantation were pulmonary fibrosis (42.4%), chronic obstructive pulmonary disease (25.2%) and cystic fibrosis (10%).General demographic and clinical data are shown in table 1.
Donor/recipient CMV serological status in our cohort covered all combinations (Table 1), the most frequent being D+/R+ (n=83; 57%) and the least frequent D+/R− (n=16; 10.7%).The mean duration of follow-up was 31.45 ± 21.0 months after transplantation.During the first year after surgery, 45.3% of patients presented a CMV-related event, and the mean disease-free period after transplant was 6 ± 3.3 months, with 57% subsequently presenting recurrence.
We analyzed serological D/R status as a risk factor for CMV infection. Univariate analysis with Kaplan-Meier curves (Figure 1) showed a lower cumulative probability of CMV-free survival in patients with the serology matches D+/R− and D+/R+. Their respective probabilities tended to be grouped together and fell to an estimated probability of 40-50% at 10 months after transplant, compared to the other two matches (statistically significant difference: P=0.035). In the multivariate analysis adjusted for age, only age reached significance as a risk factor (OR=1.032; 95% CI=1.007-1.058). The Wald test indicated that D/R serology should be included in the model (P=0.044), and Cox regression indicated, as expected after the Kaplan-Meier analysis, that D+/R+ and D+/R− would be risk factors (B=0.802 and 1.293, respectively) if significantly associated, but none of the different matches analyzed independently reached significance (Tables 2, 3).
The mean duration of prophylaxis with valganciclovir was 117 ± 40 days (range 60-210 days).There was no relationship between the duration of prophylaxis and the incidence of CMV events (CMV infection, the number of recurrences, or the development of disease), irrespective of CMV donor-recipient serology (P=0.98).
The most frequent adverse effects of prophylactic treatment were increased creatinine and leukopenia. A total of 26% of transplant recipients developed these events, which led to the discontinuation of prophylaxis in 5 patients. The dose had to be adjusted in 25 patients, and 6 needed concomitant treatments, such as granulocyte colony-stimulating factor, for the continuation of prophylaxis. The incidence of CMV infection was not associated with mortality (P = 0.900) after adjustment for the presence of bacterial and fungal infections, acute rejection, and chronic rejection. Overall estimated probability of survival was 54% at 5 years (Figure 2). Only 9.6% (n = 6) of patients who experienced a CMV event developed disease. Four of them developed CMV pneumonitis and 2 had gastrointestinal tract involvement. None of the deaths were clinically attributed to the CMV event. Two patients who developed clinical disease were resistant to ganciclovir, and second-line treatments had to be initiated to achieve resolution.
Discussion
The frequency of CMV infection and disease in lung transplant recipients varies from series to series; incidences of between 54% and 92% in the absence of prophylaxis have been reported [2,20], and even in cohorts receiving prophylactic treatment a 26% incidence has been reported [1].In our study, viremia occurred in 45% of patients, which is similar to the rate reported by Humar et al. [21], but lower than in the series of Schröeder et al. who reported a CMV infection incidence of 68.3% in the first year after surgery in patients treated with ganciclovir [22].According to Singh et al. the period of greatest risk is the first 3 months after transplantation [23], but some recent studies indicate that longer treatments may be desirable to prevent CMV disease [5,6].There is still a lack of consensus on the optimal duration of prophylaxis and the treatment protocol to be followed in lung transplantation.Zuk et al. recently conducted an international survey to explore procedures in 59 centers in 5 continents.The disparity in strategies was confirmed, as the duration of prophylaxis in the different centers was found to range from 3 months to indefinitely.Most centers reported protocols with prophylaxis durations of between 3 and 6 months; 35.6% did not follow any strategy for D−/R− patients, and only 3.4% used pre-emptive therapy [7].A recent multicenter study demonstrated the benefits of prophylactic treatment with valganciclovir for longer than 3 months [5], with no increase in valganciclovir toxicity or resistance, and this may convince some practitioners of the value of this approach.
Pre-emptive therapy, a strategy focused on viral monitoring and early treatment instead of universal prophylaxis, has been suggested as an appropriate alternative.Pre-emptive therapy allows controlled low-level replication, thus inducing immune response in the host and preventing late-onset CMV infection [11,23,24].However, meta-analyses reviewing this strategy have failed to demonstrate its superiority [3,25].Moreover, a recent pilot study found that valganciclovir prophylaxis did not prevent the development of CMVspecific T cell responses [26], suggesting that the host would not need viral replication in the absence of antiviral drugs to develop an immune response.Paraskeva et al. also reported that sub-clinical viral replication, such as pre-emptive therapy would allow, may be related with pneumonitis and the development of bronchiolitis obliterans syndrome (BOS) [27].Extended prophylaxis has been shown to prevent CMV infection and disease with effects lasting 6 months after treatment completion, yet the long-term consequences of extended prophylaxis on CMV prevention and BOS were not explored by Palmer et al. [5].Finlen Copeland et al. did address this issue in their single-center study, reporting a long-term increase in survival during a follow-up of almost 4 years.However, it should be noted that both studies compared 3-month and 12-month prophylaxis treatments with valganciclovir, without the use of pre-emptive therapy, so further studies are needed to shed more light on this question.
In the absence of any clear consensus, practical, empirical and economic reasons have led to the implementation of mixed strategies in clinical environments.This situation has arisen because the costs of each strategy are still under discussion [28,29] and the approach that should be used in different patients with different risk levels is equally unclear [30,31].The general CMV prevention policy of our group is universal prophylaxis with valganciclovir for 3 months, extended to 6 months in the case of D+/R− patients or when the need for immunosuppressive therapy (acute rejection episodes) creates a high-risk situation.This strategy seems to have good results as regards patient outcome, as only 9.6% of our patients between 2003 and 2010 developed CMV disease.As mentioned above, routine PCR determinations during prophylaxis were discontinued, because infection is rare in this period and, indeed, never occurred in our series.We felt that testing should only be performed when infection is suspected after the suspension of prophylaxis, and this has resulted in an improved management of our resources that has not negatively affected patient outcome.As to when pre-emptive therapy should be indicated after the prophylaxis period, Gerna et al. [32] have discussed the importance of selecting an appropriate cutoff level for viral DNA copies in blood.Although some studies have suggested that DNA replication is not needed to elicit specific immune response and prophylaxis does not seem to prevent T cell response [26], Gerna indicate that a high cutoff in PCR determinations (300,000 copies/mL in their practice) would be more efficient in inducing host immune response.In any case, pre-emptive therapy is not currently recommended in clinical practice in Spain, due to the lack of strong evidence [33].
With regard to donor/recipient serology as a risk factor for CMV infection, Caspar da Cunha et al. reported on a series of 242 solid organ transplant patients, 89 of whom were lung transplant recipients. They found that donor seropositivity was a major risk factor for CMV infection, irrespective of whether the recipient was seronegative or seropositive [34]. These findings are reflected in our series, in which D+ patients show a clear tendency towards a lower estimated probability of survival (i.e. the probability of not presenting CMV infection) than those receiving organs from seronegative donors. Our results, however, did not reach significance. This is possibly due to the small population and the loss of statistical power occurring when the analyses were performed after classification of the sample by serology. Both groups of patients, D+/R+ and D+/R−, tend to have a higher probability of CMV infection, which could be explained by a primary infection with donor CMV, rather than a reactivation of the latent virus in R+ patients. It is common for R− recipients of CMV-positive organs to develop CMV infection despite prolonged prophylaxis: of the 15 cases in our series, only 6 did not develop infection. Nevertheless, closer examination of these 6 cases revealed that 1 died 1 month after discontinuing prophylaxis and another 2 developed the infection 2 and 4 months after data collection was complete. The remaining 3 patients became seropositive but did not develop clinical infection during postoperative follow-up.
As for the length of prophylaxis, Valentine et al. defend the need for indefinite treatment, reporting a low incidence of CMV pneumonia in patients who do not discontinue [35]. However, the authors do not specify whether pre-emptive therapy was used in cases of suspected CMV infection in patients who stopped their prophylactic treatment. The absence of pre-emptive treatment could contribute to the high rates of CMV pneumonitis reported in these patients. On the other hand, Mitsani et al. studied 170 lung transplant patients followed between 2003 and 2008 who received valganciclovir prophylaxis. No relationship was found between the duration and dosage of prophylactic treatment and the development of CMV disease or viremia. The authors stressed the need to define genetic risk markers that promote disease development, in order to guide both treatment and prophylaxis strategies [36].
The long-term use of antiviral drugs in prophylaxis is associated with adverse effects that sometimes lead to treatment discontinuation [25], the development of viral resistance, most frequently in the form of UL97 and UL54 mutations [37,38], and increased healthcare costs [31]. Significant adverse effects, namely leukopenia and increased creatinine, were observed in 26% of our patients. These figures are consistent with the literature. Authors who defend prolonged prophylaxis for longer than 6 months in high-risk D+/R− patients admit that the development of resistance and the toxicity associated with prophylaxis are the main factors limiting their protocols [39]. The incidence of viral resistance to ganciclovir is around 5-10%, and this proportion rises in D+/R− combinations and lung transplant recipients [40]. The development of resistance is also associated with a higher degree of tissue invasion and a poorer prognosis [41].
In this study, resistance to treatment was suspected when there was an increase in viremia or clinical progression in patients receiving valganciclovir treatment. CMV disease development in our series was infrequent (9.6%), but 33% of these cases were clinically resistant to ganciclovir. This suggests, as reported elsewhere [39], that resistance to antiviral treatment is a risk factor for the development of disease. Our second-line treatment for resistant strains was foscarnet, also used by Reddy et al. [42] to good effect. Several authors report that if CMV pneumonia is correctly treated, BOS can be avoided with no negative consequences on long-term survival [39,43].
Our results support the hypothesis that extended prophylaxis (6 months in our case) can delay CMV events, but does not prevent subsequent infections. This theory has been repeatedly supported by results, for example, from Gavalda et al. [33]. Prophylactic treatment during the first 3 to 6 months after transplant, combined with antiviral treatment during rejection episodes, could be a very useful strategy for improving the survival of recipients in the early months after lung transplant or during periods when high-dose immunosuppression, which increases the risk of disease, is needed. This protocol reduces the side effects and costs associated with prolonged prophylaxis. Our findings also support the importance of taking high-risk serologies, such as D+, into consideration when establishing the prophylaxis protocol. Consequently, our D− patients received shorter courses, while high-risk patients received longer prophylactic treatments of 6 months. This coincides with the findings of Witzke et al. in kidney transplant recipients [44], who reported significant benefits with a 100-day prophylactic therapy with valganciclovir, especially in D+ patients. Clinical follow-up with PCR tests when infection is suspected allows early diagnosis and prevents disease development in most cases. The most reliable data are those calculated using the actual time of administration of prophylaxis, which is not the case in some trials. In a randomized prospective trial, Palmer et al. found that extended prophylaxis (12 months) significantly reduced CMV disease in lung transplant recipients, compared to prophylaxis periods of 3 months [5]. However, 11 different centers participated in this study, and differences in long-term clinical management after discontinuing prophylaxis in each center could have influenced the results. A reduced incidence of CMV disease was also reported in lung recipients in the INCA study [45]. Both studies used a fixed treatment-length protocol, but Monforte et al. did not report on possible discontinuations during the prophylaxis period. This would have been interesting, as patient compliance might be affected by adverse effects, resulting in fewer days on prophylaxis [45]. If this were the case, the short treatment period would in fact be less than the 100 or 120 days of treatment specified in the original protocol.
Our results suggest that a follow-up protocol designed to avoid the need for long-term prophylaxis may have benefits in terms of fewer adverse effects for patients and lower costs, with no negative impact on patient outcome. The detection of patients at risk of developing CMV disease or viral resistance to treatment is a clinical goal that would allow effective and tailored prophylaxis for lung recipients, while reducing side effects.
Figure 2: Overall survival curve after lung transplantation
Table 1: Demographic Characteristics and Clinical Data
Table 2: Multivariate analysis of serological D/R match and survival, adjusted for age (Cox regression)
Table 3: Multivariate analysis of infections, rejection and survival, adjusted for age (Cox regression) | 2018-11-22T08:40:53.180Z | 2015-12-31T00:00:00.000 | {
"year": 2015,
"sha1": "1642b03d9ba78fb7182d53f7e1465c1590e4d824",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.23937/2572-4045.1510007",
"oa_status": "HYBRID",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "1642b03d9ba78fb7182d53f7e1465c1590e4d824",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
221298699 | pes2o/s2orc | v3-fos-license | Listening forward: approaching marine biodiversity assessments using acoustic methods
Ecosystems and the communities they support are changing at alarmingly rapid rates. Tracking species diversity is vital to managing these stressed habitats. Yet, quantifying and monitoring biodiversity is often challenging, especially in ocean habitats. Given that many animals make sounds, these cues travel efficiently under water, and emerging technologies are increasingly cost-effective, passive acoustics (a long-standing ocean observation method) is now a potential means of quantifying and monitoring marine biodiversity. Properly applying acoustics for biodiversity assessments is vital. Our goal here is to provide a timely consideration of emerging methods using passive acoustics to measure marine biodiversity. We provide a summary of the brief history of using passive acoustics to assess marine biodiversity and community structure, a critical assessment of the challenges faced, and outline recommended practices and considerations for acoustic biodiversity measurements. We focused on temperate and tropical seas, where much of the acoustic biodiversity work has been conducted. Overall, we suggest a cautious approach to applying current acoustic indices to assess marine biodiversity. Key needs are preliminary data and sampling sufficiently to capture the patterns and variability of a habitat. Yet with new analytical tools including source separation and supervised machine learning, there is substantial promise in marine acoustic diversity assessment methods.
In terrestrial systems, acoustic measures have been linked to species richness [22,30]. For example, models of simulated terrestrial sounds that were tested in an arboreal site suggested that the forest ambient soundscape varies with biological diversity and abundance and that soundscape complexity increases with species richness [31]. Further, the acoustic presence and abundance of certain grasshopper species in a grassland biotope have been shown to be good predictors of biotope quality [32]. One investigation of the community soundscape in a Costa Rican forest revealed that acoustic diversity strongly correlates with vertical forest structure complexity, which suggests that acoustic monitoring could be an effective method of identifying forest patches containing high species diversity [33]. Similar studies have been proposed for freshwater systems [34]. Examining the sounds occurring in remote marine habitats could serve as an effective proxy for estimating habitat quality and community biodiversity, tracking biological processes and detecting natural and changing patterns of biological activity across multiple spatial and temporal scales.
Many metrics have been proposed for measuring species richness in terrestrial environments through acoustic analysis, particularly for forest birds and insects [22,33,35-39]. The role of these soundscape 'indices' is essential to quantify the complexity of the soundscape in a single (or handful of) value(s) [28]. Their success has been variable and their repeatability should be assessed, but it seems that a combination of acoustic indices (rather than a single metric) is more effective at predicting bioacoustic activity [39]. Sueur et al. [16] provided a description of the two predominant types of index, within-group (α) and between-group (β), where the sample unit to be compared is considered to be the group; that group may be a site, habitat or an event in time. In addition to indices, initial attempts have been made to autonomously extract and count ecoacoustic events in a formalized way and to investigate spatio-temporal patterns in these events in the terrestrial environment and freshwater habitats [39-41]. While this does not provide an index of biodiversity, it does provide a better understanding of its contributors, based on these detected events.
Why ocean acoustics?
The marine environment is rich with sounds (figure 2). As with the terrestrial world, marine biotic sounds reflect many vital biological processes, such as spawning [43-45], courtship [46], feeding [47], social cohesion [48,49] and competition [50] among many species of marine mammals, fishes and invertebrates. Further, sound travels efficiently through water [51], especially compared to light, the basis for many visual and traditional survey methods. Sound itself consists of two components: sound pressure, the compression- and rarefaction-induced pressure fluctuation, a scalar quantity typically measured in micropascals; and acoustic particle motion, the back-and-forth vibratory component of sound, a vector (and thus inherently directional) typically measured as acceleration or velocity. Researchers have typically measured sound pressure, and that is what we refer to here, unless otherwise stated. Acoustic signals can be detected over ranges of tens to thousands of metres, depending on the species producing them, the background ambient noise level and the propagation conditions [52-55]. Soundscape recordings may be collected in a range of conditions when vision and other observing methods are limited, including at depth (e.g. the aphotic zone), during dark hours and in murky waters [56,57]. Further, there is now a suite of advanced passive acoustic recorders that allow broadband, long-term recordings in a relatively cost-effective manner, greatly expanding their user base and applications. They also allow locations to be monitored for months or longer without human presence or disturbance. Long-term recordings may be particularly useful to compare biodiversity levels before and after events, such as storms (hurricanes [58,59]), habitat degradation (oil spills, construction [60]), climate-related changes (temperature rise [18]) and in areas of marine protection [61]. Numerous studies have leveraged bioacoustics to improve our understanding of previously unrecognized species presence or abundance. In the ocean, even Earth's largest animals can be cryptic. For example, North Atlantic right whale (NARW; Eubalaena glacialis) mothers and calves were generally known to migrate between (typically spring, summer) feeding grounds in the Gulf of Maine and the Bay of Fundy and (winter) calving grounds off the coast of Georgia and Florida. The regions between have long been considered a migratory corridor, with limited NARW presence. Yet, two recent passive acoustic studies have revealed NARW detections in Virginia and the mid-Atlantic in every month of the year [62,63]. These studies demonstrated increased seasonal occurrences in autumn and late winter/early spring, and not just during limited periods of the year. Thus, acoustics is leading to the redefinition of habitat use and management of a critically endangered species, particularly as these areas see increased naval activity and use for offshore windfarms and resource extraction. In another example, the Northwest Hawaiian Islands were recently noted as having persistent winter occurrence of humpback whale (Megaptera novaeangliae) song, suggesting this area is a previously unrecognized or newly recolonized breeding area for these megafauna [64].
Passive acoustics has helped reveal the presence of essential fish habitat. Sciaenids of the southeastern United States are a multimillion dollar fishery. Luczkovich et al. [65] have used moored recorders to identify the spawning areas and timing of multiple Sciaenid species around North Carolina estuaries. Glider-based studies have been used to find new spawning areas of red grouper (Epinephelus morio), toadfish (Opsanus spp.) and other species [66]. Knowledge of spawning areas allows for the regulation of fishing effort, protection of essential spawning habitat and monitoring of spawning stock biomass fluctuations.
For cryptic species, passive ocean acoustics has a vast advantage for species detection. For example, the addition of acoustics to a survey of a Mediterranean marine protected area (MPA) yielded the detection of a dense population of the cusk-eel Ophidion rochei, and suggests this is a reproduction area [67]. This acoustic detection is even more impressive considering that visual census surveys of the fish fauna were carried out for decades on a monthly basis in this MPA, yet they consistently failed to detect the presence of this species. While overlooking cryptic taxa in visual assessments may not be surprising, it is at least striking in the case of snapping shrimp. Composed of two crustacean families (Alpheidae and Palaemonidae), comprising several hundred species, snapping shrimp make their sound through the collapse of a cavitation bubble generated by the closing of their large claw [68,69]. This bubble implosion creates an extremely broadband (many-frequency) sound with sound levels of ca 190 dB re 1 µPa [70]. Snapping shrimp choruses can easily be heard by divers and through boat hulls, but living cryptically in goby holes, oyster reefs or deep in the spongocoel of select sponges, these shrimp are rarely seen. Acoustics is certainly their best means of detection.
Given this apparent bioacoustic richness and the advantages of marine bioacoustics measurement, it has been suggested that the terrestrial community-based methods of relating acoustic and standard biodiversity measures could also be applicable to marine environments [71-75].
To date, much of the work seeking to leverage acoustics to measure marine biodiversity has taken place in areas of concern or biodiversity 'hotspots', areas of high biodiversity or with high rates of endemism. These have often been temperate coastal zones and tropical coral reef habitats. Yet, acoustic diversity has been noted in other key areas, such as polar regions [76-80], albeit to a more limited extent. Marine biodiversity is vital in other areas, such as hydrothermal vents, where acoustic monitoring could be useful. These habitats are certainly regions of interest when monitoring changes in biodiversity and habitat; however, for this review, we focused on temperate and tropical regions, where most of the initial bioacoustic diversity work has been conducted.
Coral reefs are often considered an example of high-biodiversity ocean areas and are frequently referred to as the 'rainforests of the sea' because their productivity and biodiversity rival those of their terrestrial counterpart. Coral-based ecosystems host some of the highest diversity of life per unit area on Earth, and harbour about one-quarter to one-third of all marine species [81-83]. Reef-associated animals are a major source of protein for millions of people. Also, reef habitats offer shoreline protection for communities and are a significant source of tourism revenue. In all, coral reefs are a multi-billion dollar resource, especially for developing countries that have no other major industries [84]. Despite these vast ecosystem services, these biodiversity hotspots are under threat from rapid anthropogenic change, and thus coral reefs are also a priority for timely and cost-effective means of assessing biodiversity and identifying key areas of protection [85]. Ecoacoustic analysis tools may allow us to detect change and monitor the efficacy of management.
Beyond biodiversity, there is a growing understanding that anthropogenic noise may be a stressor in marine environments. Noise impacts are diverse and can influence animal behaviour, physiology and community interactions, and there is growing recognition of the need to address, understand and mitigate many of these impacts. The International Quiet Ocean Experiment (IQOE) grew out of this recognition, forming an international programme of research, observation and modelling to better characterize ocean sound fields and to promote understanding of the effects of sound on marine life [86]. The authors of this paper are the IQOE's 'Working Group on Acoustic Measurement of Ocean Biodiversity Hotspots' (see electronic supplementary material for terms of reference). The IQOE was developed because, in most instances, our current knowledge of the specific effects of anthropogenic sound on marine life is inadequate. This scientific uncertainty has made it difficult to balance the need for precaution in protecting marine ecosystems against the potentially large costs to socially important noise-producing activities, such as commercial shipping, offshore energy exploration and development, and military readiness. A key component of understanding noise impacts is to study the natural ocean soundscape, how animals use this soundscape and how soundscapes are changing. We may then be able to leverage patterns in those soundscapes to better understand the ocean.
The goal of this paper is to provide a timely consideration of the emerging methods that use passive acoustics to measure biodiversity and to recommend avenues forward in this developing field. We provide a summary of the brief history of using passive acoustic methods to assess marine biodiversity and community structure, and a critical assessment of some of the challenges faced. A key aim of this paper is to outline recommended practices and considerations for assessing marine acoustic diversity and, by extension, biodiversity. We hope such efforts will allow researchers, and potentially managers, to apply acoustics to conservation biology needs, such as comparing biodiversity at different locations within a habitat type or region (i.e. which reef to protect or study) and quantifying temporal trends at a given location (i.e. identifying biodiversity loss over time). Given that this is still an emerging field, we describe future research needs to ultimately improve and broaden biodiversity estimates and tracking using acoustics.
2. Initial methods and challenges in assessing aquatic acoustic biodiversity
2.1. Early work
Research groups studying terrestrial ecosystems using ecoacoustics have led the way in applying acoustic indices to measures of biodiversity. One of the early ideas promoted by these groups was to use recordings and acoustic indices to make biodiversity assessments in a rapid timeframe [30]. These included the acoustic entropy index (H), a calculation adapted from Shannon's diversity index, which aims to reflect the evenness of a signal's amplitude over time and across the full range of frequencies [30]. The acoustic complexity index (ACI), originally developed for analysing avian communities, summarizes fluctuations of sound within frequency bins across time, then summarizes these variations over the whole frequency range [87]. These and other indices have been reviewed by Harris et al. [88] and Buxton et al. [39]. Coral reefs have provided a range of opportunities and studies to begin linking and testing the efficacy of bioacoustic diversity indices against more traditional observations. Early work suggested sound pressure levels were associated with coral cover, showing promise in using acoustics to assess the community [89], but the recordings were of short duration and limited to simple acoustic measures (broadband and octave-band sound pressure levels). Additional work in temperate environments showed promise, addressing three patches of very different habitats (mud, gravel and rocky cliff) recorded for short periods of time [71]. Differences were found between the sites, using terrestrial-derived metrics of acoustic complexity and acoustic diversity, but the authors attributed these results to different levels of snapping shrimp; they did not assess biodiversity. Parks et al. [74] found another use for one of the indices, acoustic entropy, to identify calls of mysticete whales, after the background noise was filtered and reduced. The authors indicated that the methods had promise but did not actually link acoustic records to biodiversity assessments. Similarly, polar researchers have used acoustic metrics, often to quantify marine mammal-based acoustic diversity [77], listening across very large distances. From these key stepping stones, more comprehensive measurements were needed.
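To make these two indices concrete, the sketch below gives simplified implementations. These are illustrative only: the published formulations differ in details such as windowing, normalisation and envelope estimation, and the function names and parameters here are our own.

```python
# Simplified sketches of acoustic entropy (H) and the acoustic complexity
# index (ACI). Illustrative approximations, not the exact published forms.
import numpy as np
from scipy.signal import spectrogram, hilbert

def acoustic_entropy(x, fs):
    # Temporal entropy Ht: Shannon entropy of the normalised Hilbert
    # amplitude envelope, scaled to [0, 1].
    env = np.abs(hilbert(x))
    env = env / env.sum()
    Ht = -np.sum(env * np.log2(env + 1e-12)) / np.log2(len(env))
    # Spectral entropy Hf: Shannon entropy of the normalised mean spectrum.
    f, t, S = spectrogram(x, fs=fs)
    spec = S.mean(axis=1)
    spec = spec / spec.sum()
    Hf = -np.sum(spec * np.log2(spec + 1e-12)) / np.log2(len(spec))
    # H near 1 suggests an even, 'full' signal; near 0, a pure tone or
    # single transient.
    return Ht * Hf

def acoustic_complexity(x, fs):
    # ACI: within each frequency bin, sum the absolute intensity change
    # between adjacent time steps, normalise by the bin's total intensity,
    # then sum over bins.
    f, t, S = spectrogram(x, fs=fs)
    return float(np.sum(np.abs(np.diff(S, axis=1)).sum(axis=1) /
                        (S.sum(axis=1) + 1e-12)))
```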
2.2. Challenges posed by variability
One important consideration in assessing biodiversity is the sampling regime [90,91]. A soundscape sampling regime's effectiveness is influenced by natural acoustic variability. Substantial work has shown that the sound levels and frequency content of coral and temperate reef soundscapes vary substantially in space and time [56,92-95]. The sound field, including fish and snapping shrimp sound production, can change with a variety of factors, including season, lunar periodicity, lunar light levels, temperature, upwelling, tides, salinity and time of day [96-100]. The activity of the animals present affects their detectability; in acoustic assessments, animals can only be detected if they are acoustically active. Conversely, some animal signals will swamp those of others. For example, marine mammals are often dominant contributors to soundscapes from polar to tropical habitats. Calling patterns vary from daily to seasonal timescales, and have substantial spatial variability [101,102]. In some low-latitude sites, fish and invertebrate sounds that are easily detectable in the summer are overwhelmed and masked by the persistent song of humpback whales in the winter [7].
Even within a habitat or community, soundscapes can vary. Kaplan et al. [94] showed that within a reef small, yet significant, variations exist at recording stations just metres apart. Within a reef area, there are areas of preference, and they may become acoustic 'hotspots', as seen in concentrations of snapping shrimp in Hawaii [103]. Fish choruses and calls can also vary substantially across adjacent habitats [104] and by geographical location [105]. One reason these within-reef differences are detectable is probably the sheer statistical power of near-continuous acoustic records [94]. It is possible to find differences, but they reflect the within-habitat variability. The range (or general extent) of this variability seems less than the differences among dissimilar habitats, but we are only beginning to understand the extent or patterns of such 'intra-habitat' heterogeneity [106]. For example, a key area of study would be to address discerning reef health and diversity gradients from acoustic indices. It is important that both within- and among-habitat differences be addressed to properly use acoustic diversity metrics.
A research frontier for the field of ecoacoustics is, therefore, establishing data collection requirements and sampling regimes that are appropriate for the variability they attempt to quantify. Different goals may be served by different sampling regimes, just as visual surveys of transient and cryptobenthic fish have different standard methodologies.
2.3. Comparing soundscapes to other biodiversity measures
Monitoring underwater soundscapes to assess community structure requires comparative observations. Staaterman et al. [72] compared acoustic data to visual surveys in their study area and found a relationship between the 'low band' frequencies (less than 1000 Hz) and the richness and abundance of cryptic fish. This lower-frequency band is often indicative of fish, but not invertebrate, sounds. Kaplan et al. [94] collected four months of acoustic data in the Caribbean Sea and compared them with visual surveys of fishes and benthic cover. They found that crepuscular increases in fish chorusing were related to fish abundances and coral cover. Notably, snapping shrimp patterns (1.5-24 kHz) were not associated with either metric. A 16-month follow-up study at additional Pacific Ocean sites showed similar patterns [7]. Desiderà et al. [107] found a relationship between sound types (not indices) and taxonomic diversity. They compared years of visual census data from a Mediterranean Sea MPA to acoustic data and found a strong relationship between taxonomic diversity and acoustic diversity (but not acoustic indices).
In fact, many marine studies have shown that there is not a clear relationship between biodiversity and the terrestrial-derived acoustic indices. With respect to H and ACI, two prominent metrics, snapping shrimp sound patterns and other dominant biological sounds seemed to greatly affect the power of the index, suggesting that loud, frequent or omnipresent sounds could bias some indices. Kaplan et al. [94] calculated acoustic entropy (H) and acoustic complexity (ACI) from their recordings and found no relationship to fish abundances (discerned using lower, fish-dominated frequencies, snapping shrimp-dominated high frequencies and across the full recording bandwidth). Buxton et al. also found that acoustic indices did not reliably predict bioacoustic activity in marine habitats, suggesting such a result was due to the overlap of many biological signals with both snapping shrimp and anthropogenic sounds [39]. Staaterman et al. [72] found a similar effect, with index values being dominated by the presence and intensity of Bocon toadfish (Amphichthys cryptocentrus) or snapping shrimp. This supports early work by McWilliam & Hawkins [71], who tested these indices in environments that were dominated by snapping shrimp, finding differences between sites, but suggesting that the differences were due to snapping shrimp patterns, not overall community presence.
Indeed, snapping shrimp are present and abundant in healthy and 'degraded' reefs, but their relative extents in both habitats are not yet well understood [56,108]. As some species of snapping shrimp are often found in sponges or living symbiotically with gobies, it is possible that their snap rates and levels may be associated with those host animals and communities [109,110]. However, acoustic diversity metrics heavily based on the sound production of these persistent and acoustically dominant crustaceans, or other mass phenomena such as fish choruses, may ultimately be limited in their ability to measure biodiversity [111].
These studies contrast somewhat with work from Moorea that appeared to find a relationship between terrestrial acoustic indices and visually observed biota [112,113]. However, a limitation of those efforts may have been that they were relatively short-term recordings that were also made sequentially, not in parallel, which could overlook or be influenced by the acoustic variability within a site, or the temporal variability of a diel or lunar cycle. Elise et al. [21] made similar claims regarding the effectiveness of six acoustic indices. However, they also sampled for a very short duration (hours to 24 h), remarkably concluding that listening for only 2 h could be used to discriminate sites.
Drawing conclusions about community complexity from snapshot recordings is potentially misleading, given the substantial variability of coral reef soundscapes. Notably, the efficacy of some indices is greatly affected by the temporal variability of habitats [88]. Further, recent work by two research groups, working at sites in the Mediterranean Sea, the Bahamas and Pamlico Sound, USA, evaluated the usefulness of several acoustic indices. Using in situ observations, and in the case of [114], also testing H and ACI against synthetic sounds, the indices were strongly affected by temporal variations in the activity of single sound-producing species (such as snapping shrimp or a loud fish chorus). The ACI was found to be sensitive to variations in both sound abundance and sound diversity, making it difficult to discern between these variables. Call type, calculation parameters and the resolution of the analysis also affected the indices' performance. In these cases, the authors concluded that 'ACI and H, therefore, cannot be assumed to track call diversity, and the utility of these metrics as biodiversity indicators in marine environments may be limited' [111,114,115].
3. Considerations for using acoustic methods to estimate marine biodiversity
While the ultimate goal of soundscape metrics is to provide a measure of the local ecosystem's state and/or biodiversity using the characteristics of the biological sounds, it is important to note that these may not be directly related. While soundscape metrics may be valuable, the variety and variability of life at a recording location may not be directly reflected by the variety and variability of acoustic signals (figure 3). For example, not all sounds are biological in origin, not all animals make sounds and there is significant variability in the sounds that are produced. Soniferous species produce different varieties of sounds and can do so at different rates. Not all species produce sounds at the same intensity and, depending on habitat, not all sounds travel to the recording device in the same manner. Defining each of these factors is difficult. In the same way that key indicator species can help shape understanding of ecosystem health [116], there may be a small number of species that dominate the soundscape, mask other sounds and impact acoustic diversity assessments. Yet, their presence could be an indication of high biodiversity of non-vocal species. Understanding these dynamics and the biases they create is vital to soundscape measurements. It is imperative that we evaluate the biases of methods of biodiversity estimation and ensure that the limitations of methods are taken into account when they are used. These soundscape challenges are not unlike those of other underwater biodiversity assessment methods (e.g. trawls, eDNA, diver surveys, etc.). For example, visual-based assessments (such as baited remote underwater video) can be impaired by the variability of the observed cues and the sampling methods. Several methods have been developed to determine how the behaviour of the animal contributes to it being observed. This includes addressing the effects of diver presence on fish behaviour [117] or bait plumes on the sample area [118], and how the maximum number of animals observed represents the relative or absolute number of animals in the area, such as the 'MaxNo' or 'MaxN' used in baited remote underwater video surveys [117,119,120]. Many key ecological parameters vary substantially [121], thus short-term and low-frequency sampling can miss this variability and potentially lead to mischaracterizing a habitat. The uncertainty is partly addressed by longer observations, data from multiple sources and ground-truthing of methods. In developing representative acoustic metrics, there are several considerations that affect how the data must be approached to tease apart how acoustic diversity and biodiversity are related.
3.1. The end user
The needs of the end user are the ultimate driver for the design of any acoustic metric [122]. There are potentially multiple users of acoustic diversity information. Managers and scientists may hold vastly different expectations of what information an 'acoustic index' will provide. This may range from the marine park manager looking to compare biological complexity across multiple sites, to the scientist attempting to identify the point at which an entire coral reef suffered a bleaching event, to the fisheries manager trying to confirm the location and timing of spawning fish. The sound types, the local conditions that can feed into the index and the spatio-temporal scales over which the index will be relevant all affect the potential design and outcome of a monitoring programme. Yet, for the management of declining resources, properly evaluating a community's health and biodiversity is critical, underscoring the need for rigorous evaluation. The users' needs (and limitations) will affect the sampling regime, and they must balance sufficient evaluation of the community with rapid reporting for conservation needs.
Figure 3: Assessing biodiversity from the ocean soundscape. Diversity of bioacoustic signals in the soundscape is one way to estimate biodiversity. Complications to bioacoustic diversity measurements include the variability of the soundscape including its bioacoustic cues, the presence of geophysical and anthropogenic 'noise', propagation and transmission loss in shallow water habitats, and a bias towards species that produce sounds.
3.2. Separating biotic, geophysical and anthropogenic sounds
Any assessment of the soundscape requires segregation of the three predominant source types: biological, anthropogenic and natural abiotic sounds. Separation of these three contributors should be an initial step of any soundscape evaluation. Some analyses may aim to assess and compare the overall soundscape contributions of biological and anthropogenic sounds [123,124], or to assess the influence of abiotic sounds (e.g. wind and wave noise) on biological functions [125]. For almost any soundscape analysis, it is necessary to segregate the contributors into these functional groups, and 'noise' sources can clearly affect acoustic metrics [39]. It is essential to isolate anthropogenic and geophysical contributions to the soundscape before acoustic measurements of the biological sources can be linked with local biodiversity.
Accurately segregating anthropogenic and geophysical sounds in long-term passive acoustic data is non-trivial and has been attempted many times, with varying degrees of success (e.g. [126-129]). It has been suggested that biological sources, which are generally transient signals, exhibit higher acoustic variability than anthropogenic sources [127]. However, several anthropogenic sources, such as the passing of large vessels, bear similar acoustic variability to biological sources, particularly mass phenomena such as fish choruses, and can be mistaken as such. Biotic and abiotic sources may be separated by acquiring recordings that include the commencement and cessation of the source [126]. There may also be a need to remove entire periods of data where anthropogenic noise (e.g. seismic air-gun firing, dredging, shipping channels or pile driving) or geophysical noise (e.g. rain, current flow, wind-driven waves) partially or completely masks even the loudest of biological signals ([104,130]; figure 2). Other geophysical sounds, such as ice-generated noise, can be seasonally persistent, high amplitude and similar to biological sounds [131,132], making them easy to detect but difficult to distinguish from biological sounds.
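As a minimal illustration of that last step, one crude pre-screening approach is to drop analysis windows whose low-frequency energy (where vessel noise often concentrates) rises well above a site baseline. The band edges and threshold below are assumed placeholders that would need site-specific tuning, and real pipelines use far more sophisticated detectors.

```python
import numpy as np
from scipy.signal import welch

def keep_quiet_windows(windows, fs, band=(20.0, 200.0), excess_db=10.0):
    """Flag analysis windows to keep (True) or drop (False). A window is
    dropped when its mean level in `band` (20-200 Hz here, an assumed
    shipping-noise band) exceeds the median across all windows by more
    than `excess_db` dB."""
    levels = []
    for w in windows:  # each element: 1-D array of pressure samples
        f, pxx = welch(w, fs=fs, nperseg=1024)
        in_band = (f >= band[0]) & (f <= band[1])
        levels.append(10.0 * np.log10(pxx[in_band].mean() + 1e-20))
    levels = np.asarray(levels)
    return levels <= np.median(levels) + excess_db
```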
3.3. Key characteristics of bioacoustic diversity
Relating a location's bioacoustic diversity to biological biodiversity is challenging for a number of reasons. Animals produce sounds intentionally (e.g. for communication, orientation or foraging) or non-intentionally (when feeding, moving, etc.) [133-135], with varying information content in different sounds. While many sources of these biological sounds have been identified, a larger proportion are still unconfirmed. All, however, contribute to the diversity of the soundscape and provide important information on the location's biodiversity [107]. These calls may be produced rarely, with no temporal pattern, or in series, often with a distinct temporal pattern, depending on their function. Marine mammal signals are immensely variable, with cetaceans and pinnipeds being the most soniferous. With respect to frequency content, mysticetes are typically confined to lower frequencies (less than a few thousand hertz), odontocetes span a few kHz to 100-200 kHz, and pinnipeds (including the acoustically diverse Weddell seal (Leptonychotes weddellii)) range from 80 Hz to 24 kHz [136,137]. Most sounds produced by fishes are limited to a small frequency range, tens to a few thousand hertz (e.g. ca 5000 Hz and lower) [138,139]. There are some marine invertebrates, such as the American lobster, Homarus americanus, that also produce low-frequency sounds [140]. Yet, many invertebrates occupy a much broader acoustic range; many start near 2000 Hz and extend above 50 kHz [55,141,142], with snapping shrimp sound energy extending out to 200 kHz and beyond. Marine biodiversity analysis must distinguish between sounds that can represent a species and sounds that are more general or difficult to identify to species (i.e. snapping shrimp and some delphinids).
3.3.1. Mass phenomena
Frequently emitted sounds can form mass phenomena referred to as choruses [102,143]. Mass phenomena are the grouping of multiple sounds that are no longer discrete signals; rather they have coalesced into a single broader signal, of significantly longer duration and potentially different spectral content. This may occur when sounds from singing whales, a large school of soniferous fish or invertebrates converge, forming a continuous chorus 'when the noise from many individuals is continuously above background for an extended period using an equipment averaging time of 1 second', with a significant increase above background (greater than 3 dB re 1 µPa, [143]).
The choruses and song of some cetaceans and pinnipeds have been studied for decades [79,101,102,144]. Their signals are high amplitude and can be detected and localized out to many tens of kilometres [54,145], and may occur for months at a time. Fish and invertebrate choruses may also be intense and dominant in a soundscape [146-149] and may persist for periods of up to several hours or across breeding seasons [150-153]. A chorus can mask the individual, often lower-level, transient sounds and thus will often reduce the detectability of many signals and the overall acoustic diversity. Regardless, it is pivotal to separate mass phenomena from more rarely occurring sounds when assessing bioacoustic diversity [107], and for these sounds to be considered separately.
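Operationally, the >3 dB 'continuously above background' criterion quoted above can be screened for in code. In the sketch below, background is taken as a low percentile of the 1-s averaged level series; that choice is our assumption, as the published definition leaves background estimation to the analyst.

```python
import numpy as np

def flag_chorus_seconds(band_levels_db, background_db=None, rise_db=3.0):
    """Return a boolean flag per 1-s averaged band level (dB) marking
    values more than `rise_db` above background. Contiguous runs of True
    are candidate chorus periods for manual verification."""
    levels = np.asarray(band_levels_db, dtype=float)
    if background_db is None:
        background_db = np.percentile(levels, 10)  # assumed background proxy
    return levels > background_db + rise_db
```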
3.3.2. Transient sounds
Transient sounds are referred to here as all uncommon biological sounds that occur in low abundance or generally do not produce mass phenomena but, nonetheless, are important components of acoustic diversity assessments. Indeed, rare observations, acoustic or otherwise, greatly limit the probability of detection, and thus raw counts of individuals will be biased towards more gregarious, larger, louder or more easily detected species, leading to erroneous richness measurements [154,155]. Further, detecting a high number of rare species is typical of many surveys. Accurately capturing this number is crucial, as the rarity of species and overall site richness have implications for threat status and extinction risk, and the number of rare species is also used to establish spatial conservation priorities [156,157].
With respect to bioacoustics, many of these sounds may be unintentional cues, such as the transient broadband sounds emitted by some invertebrates (e.g. the rasping sound produced by the gastric mill of crabs during feeding). However, invertebrates will also signal acoustically, for example by stridulation, the rubbing together of hard body parts [55,158-160]. Such transient sounds are diverse, but as many of them are not used for communication, their specificity is low (e.g. to species or individual) [142]. Fish vocalizations for communication are often species-specific and more diverse, but confined to a narrower frequency band compared to transient invertebrate sounds [161-164]. Vocal repertoire can vary significantly between fish species. For example, haddock (Melanogrammus aeglefinus) display a number of calls associated with various spawning behaviours, while Atlantic cod (Gadus morhua) are thought to be less versatile vocalists during courtship [43,124,165]. Yet, fish may also make unintentional sounds, such as those from fast swimming movements in jacks (carangids) or the scraping of algae off reefs by parrotfishes (scarids) [166,167].
In contrast with fish and invertebrates, marine mammals are known to produce a large variety of sounds, often with species-specific vocal repertoires [168-171]. Mysticetes, beaked whales and pinnipeds display species-specific spectral peaks, which can be identified in spectral statistics and used to facilitate the analysis of the spatio-temporal changes of marine mammal sounds [42,172,173]. Most delphinids produce a diverse tonal sound repertoire that spans a wide frequency range and changes with behavioural context [174-176]. Quantification of this frequency diversity may be associated with diversity in both species and behavioural states [177]. In addition, the high-frequency whistle and click sounds produced by toothed whales are highly directional [178-180], and recorded spectra may vary with azimuth as vocalizing animals change their heading. Additionally, noise, both anthropogenic and physical, may be transient and has the potential to influence biological sound production [181]. Thus, the ecological interpretation of spectral complexity in the analysis of marine soundscapes must be carefully developed.
The diversity of transient sounds will be critical for the evaluation of biodiversity at the species and behavioural levels. Manually inspecting the presence/absence of different sound types is time consuming, but necessary for building baseline validation information. The detection of transient biophonic sounds, particularly the rare ones, can be impaired by anthropogenic noise or biological choruses, so the assessment of their diversity can be very challenging. Using the directional component of sound inherent in particle motion vectors, or using a hydrophone array to determine source direction, are potential methods of separating transient signals from one another and from noise (e.g. [182,183]). Given the lower probability of detecting rare and transient sounds [154,155,184], repeated surveys, multiple methods and longer-term observations are also key means of improving species detections and the quality of richness estimations.
3.4. Other factors influencing acoustic diversity measurements
3.4.1. Sound levels
Sound pressure levels and received acoustic energy are some of the key descriptors used to produce many acoustic biodiversity assessments and indices [16]. Yet, the sounds produced by individuals, species and groups of animals can vary vastly in amplitude. Similar to the effect of mass phenomena, the dominance of high-amplitude over lower-amplitude calls can bias an index towards a more diverse estimate. In Australia, for example, the source levels of mulloway (Argyrosomus japonicus) and West Australian dhufish (Glaucosoma hebraicum) differ by greater than 30 dB re 1 µPa [185,186]. In the USA, Gulf corvina (Cynoscion othonopterus) and Atlantic cod (Gadus morhua) differ by an even greater amount, approximately 40 dB (100×) [124,148]. Within a species, source levels may also vary [187,188]; advertisement calls may be much greater in sound level than ancillary cues, such as sounds associated with movement or feeding. In some fish, source levels can also vary seasonally with changes in sonic muscles related to reproduction [189,190]. Yet for most species, we typically do not know the source levels of many calls, nor can these be easily determined.
In addition, received levels are affected by the propagation conditions of the area, such as shallow versus deep water, bottom type, seabed rugosity and geometric spreading. Sounds tend to attenuate faster in shallow waters as they are repeatedly reflected and scattered off the seabed and surface [191]. This attenuation tends to be more pronounced for low frequencies and in areas with rougher boundaries (more rugose reefs), although high frequencies are absorbed faster. Thus, open water can act as a low-pass filter, enabling better lower-frequency sound propagation. Sound pressure also decreases logarithmically with distance as sound spreads from a source [136]. Consequently, sound levels decrease faster closer to the source (i.e. 20 dB, or 10×, at 10 m and 40 dB at 100 m). With respect to bioacoustics, the physical conditions therefore disproportionately affect the detection of lower-amplitude signals and low-frequency animals that tend to communicate in shallow waters over short ranges (e.g. many fish sounds). Consequently, care must be taken regarding how received levels are interpreted in acoustic biodiversity indices.
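The spreading figures quoted above follow from the standard logarithmic transmission-loss relation; as an idealized reference (geometric spreading only, neglecting absorption and boundary effects):

```latex
% Spherical spreading, loss relative to 1 m:
TL(r) \approx 20 \log_{10}(r)\ \mathrm{dB}
\quad\Rightarrow\quad
TL(10\,\mathrm{m}) = 20\ \mathrm{dB}, \qquad TL(100\,\mathrm{m}) = 40\ \mathrm{dB}.
% Shallow, bounded water tends toward cylindrical spreading,
% TL(r) \approx 10 \log_{10}(r), with additional frequency-dependent
% scattering and absorption terms in practice.
```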
3.4.2. Spatial range and scale
As noted above, variability in acoustic propagation (predominantly due to bathymetry and water depth) and differences in the source levels of biological signals mean that the ranges over which sounds may be measured will vary. Estimating detection ranges, density within an area [154,192] and the effective width of observation, as well as modelling detection, is difficult.
Sounds from cryptic or quiet species, such as G. hebraicum or G. morhua, or the damselfish, Dascyllus albisella, may only be detected above ambient noise (natural and/or anthropogenic) at ranges of a few to a few tens of metres [46,124,185]. If such species are not widely or uniformly distributed, then their contribution to local soundscapes, and therefore potentially the applicability of that soundscape to estimate diversity, is limited to the detection range. By contrast, high-amplitude cetacean and fish signals generated in deeper water may propagate great distances and dominate soundscapes [54,193,194]. Initial soundscape studies showed differences between soundscapes separated by several kilometres [89,93,195], and more recent efforts have noted that reefs separated by just a few hundred metres may vary [149]. Further, there are bioacoustic differences within a reef (distances of a few metres), suggesting the soundscapes reflect biological hotspots or areas of particular activity [94,98,112]. As a result, spatial replication is a necessity, not only to account for variability between sites, but also to ensure that the soundscapes recorded are representative of an area or habitat.
3.4.3. Timescales
Any sampling of acoustic diversity should be conducted over a timeframe that is considered representative of the site and sufficient to capture rare species, which will reduce bias in estimates of species richness and ensure a relatively high mean detection probability overall. Selection of an appropriate timeframe depends on the objective of the acoustic metric. Prior to establishing a representative timeframe, it is, therefore, highly preferable to record multiple cycles of the temporal variability that is to be quantified and to consider this alongside the goal of the indices to be used. Variations in soundscapes and sound production by individual species have been shown to occur in relation to tidal, solar, lunar (and semi-lunar), seasonal and annual cycles [92,99,100]. Diel patterns and crepuscular acoustic activity are some of the largest sources of variability on a coral reef [7,94]. Local geophysical or oceanographic conditions (e.g. tide-dependent water depth) can change on similar scales, altering the soundscape during their cycle [196]. In addition, habitats may have varying temporal scales of acoustic diversity. For example, habitat A may exhibit the same degree of acoustic diversity as habitat B, but over a very different timescale (e.g. one week versus six months).
It is also not necessarily appropriate to assume that a species produces sound over the same timeframes at different locations. Fish choruses, potentially produced by the same species, have been shown to change their timing at different locations, occurring at different times of the day/night, or displaying different lunar patterns in intensity [104]. Similarly, snapping shrimp have been shown to produce sounds at different times and with different daily patterns on adjacent reefs, just hundreds of metres apart [98].
Migratory species, whether marine mammal, fish or invertebrate, can contribute significantly to the local soundscape for the duration of their passing and this can be species-, site-and condition-specific. At the hundreds of metres to kilometres scale, Parsons et al. [96,197] proposed that A. japonicus are mobile, individually moving slowly along a river calling as they go, and that the aggregation (and therefore chorus) position changes during the course of the evening chorus. Whether this was to match changing environmental conditions (location of high tide edge) was not known. On a shorter timeframe, the sound source, such as a whale, may be mobile while vocalizing [198]. Humpback whales (M. novaeangliae), for example, can spend several months around Exmouth Gulf, in Western Australia [199], altering the local soundscape of several areas of the Ningaloo Reef with their song, but pass through Port Hedland (500 km along the West Australian coast) over the course of a few weeks. Migratory species can influence visual surveys in a similar way; therefore, this is not a new problem, but we must consider how we wish to standardize ecoacoustic survey methods in the light of this issue.
The result of these variations is that at a single site, soundscapes (and snapshots of acoustic diversity) can change substantially. Sampling at one time (e.g. sunset or high tide) can be significantly different from later that day, and the same time the next day. With the formation of environmentally driven mass phenomena, the soundscape can change on an hourly basis or quicker. Without confirming which sounds are important for the particular acoustic metric, and targeting times at which these occur, it is necessary to sample at high enough resolution to capture short-term changes and for sufficient duration to encompass as much potential variation as possible [104].
One possible mechanism for evaluating sampling duration is cumulative dynamic range [200], which calculates the variability of sound levels (in power spectral density) as the number of samples increases. As an increasing number of louder and quieter levels are recorded, the mean dynamic range increases in a logarithmic fashion until it eventually asymptotes with time (number of samples). This reflects that the acoustic variation now generally lies between the 1st and 99th percentiles and hence does not contribute to the dynamic range computed. The challenge is that environments must be substantially different (much louder or quieter) to influence the dynamic range with increasing samples, and short-term extreme events, such as passing cyclones, may be missed.
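A minimal sketch of this calculation, assuming a series of PSD levels in dB; the published implementation [200] may differ in windowing and averaging details:

```python
import numpy as np

def cumulative_dynamic_range(psd_db, lo=1, hi=99):
    """Dynamic range (hi-th minus lo-th percentile, dB) recomputed as each
    sample is added. The curve typically rises then asymptotes once most of
    the site's acoustic variability has been captured, which can guide a
    minimum deployment duration."""
    psd_db = np.asarray(psd_db, dtype=float)
    return np.array([np.percentile(psd_db[:n], hi) - np.percentile(psd_db[:n], lo)
                     for n in range(2, len(psd_db) + 1)])
```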
Another possible mechanism for evaluating sampling duration is to examine acoustic data computed over a variety of time windows. Yet, one must first make sure to have sampled for long enough to uncover the variation and signals present, a key step often ignored. This essentially requires longer-duration recordings. Subsampling will often miss the transient (perhaps diverse) events [201].
Pilot data and a short-term evaluation of the sound field allow an initial evaluation of the dynamics and of the potential sampling regime needed for the acoustic recordings and their evaluation. These can contribute supplementary information and support a study design. In order to fully quantify acoustic diversity, long-term (multi-year) sampling must be optimized to include all potentially significant changes in the soundscape to assess overall diversity. Overall, while potentially attractive, 'snapshots' (e.g. minutes, hours, a few days) are likely to miss many of the transient sounds and acoustic behaviours present and are not recommended.
Of course, vulnerable habitats such as coral reefs are rapidly deteriorating due to anthropogenic change. Some cyclones [59], disease and bleaching events [202] can impact diversity. Thus, in some cases, there might not be time to base ecoacoustic evaluations on multi-year recordings before habitats change. Yet not all major (bleaching) events affect acoustic patterns [7], and understanding the natural patterns is key to unravelling such differences. Thus, it seems that, given both their strength and subtleties, recording daily, lunar and seasonal variation is vital to accurate acoustic diversity assessments. In essence, a balance must be struck where the limitations and compromises of methods are taken into account in the context of their intended use, but sufficient information is captured to be able to make a valid and robust assessment.
Current methods supporting acoustic diversity assessments
The evaluation of marine soundscapes as a tool for assessing biodiversity is a rapidly developing field. As such, no single method has yet emerged that is broadly considered the standard for quantifying biodiversity acoustically. Some acoustic techniques that are being explored and used are outlined below, though most of these signal processing and data visualization methods are continuing to evolve. Emerging quantitative approaches offer a means to address key needs in this area.
Long-term spectrogram analyses
Long-term spectrogram analysis (LTSA) is a common method to investigate the spectral-temporal change of marine soundscapes. By measuring the spectral average in a user-defined time window (e.g. a 5 min interval), a long-term spectrogram can visualize the presence of various sound sources. Choruses are particularly visible in long-term spectrograms and power spectra as they represent energetic mass phenomena [203,204].
It is also possible to take different statistical measurements of spectral variation, such as the spectral median or a specific percentile, to emphasize changes in the sources of continuous sounds, such as environmental sounds and biological choruses. Considering only high percentiles (95th or 99th) within specific frequency ranges can be used to characterize transient high-energy sounds [205]. Collating long-term spectrograms into a single power spectral density probability (PSDP) plot [206] can provide an overview of the levels of energy likely to occur at a given frequency, and these are source-specific if the source is significant at a site. However, this approach can only display the rough spectral pattern of transient sounds and cannot precisely display their time-dependent spectral modulation. Thus, a combination of manual inspection and automatic detectors/classifiers may still be necessary for investigating the structure of the 'rare' sounds. Perhaps one way to use LTSA processing to address these rare sounds leverages a 'constraint' that many marine soundscapes share: a predominance of lower-frequency sounds and thus a non-Gaussian, right-skewed distribution of power spectral densities (PSDs). Taking the difference between the mean and the median (or the background ambient noise) PSDs can effectively reduce the influence of continuous signals and thus enhance the presence of transient signals [131,207] (figure 4).
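As a minimal sketch of this background-subtraction idea (our own rendering; the windowing parameters are illustrative and the per-frequency median over the whole record stands in for a running background estimate):

```python
import numpy as np
from scipy.signal import spectrogram

def transient_enhanced_spectrogram(x, fs, nperseg=4096):
    """Subtract a per-frequency median background (in dB) from each PSD
    time slice, suppressing continuous sources such as choruses and
    enhancing transient signals."""
    f, t, sxx = spectrogram(x, fs=fs, nperseg=nperseg)
    sxx_db = 10.0 * np.log10(sxx + 1e-20)   # avoid log of zero
    background = np.median(sxx_db, axis=1, keepdims=True)
    return f, t, sxx_db - background
```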
Clustering of audio events
In the analysis of long-duration recordings, we generally do not have sufficient time and personnel to listen and manually scroll through substantial durations of audio data. A technique that can reveal the structure of long-term recordings is necessary to identify the factors driving changes in acoustic diversity. Clustering is a common approach in information retrieval [208]. It assumes that a dataset exhibits an unknown structure. The dataset can be broken down and assessed by finding groups that have similar and unique characteristics.
To facilitate the evaluation of acoustic diversity, clustering algorithms can be run on a long-term spectrogram. By grouping audio clips with similar spectral characteristics, a long-term spectrogram can be summarized into a limited number of audio clusters to facilitate the evaluation of acoustic phenology, such as diurnally and seasonally changing patterns [207]. On the basis of audio clusters, it is possible to measure acoustic diversity and evaluate its relationship with biodiversity (figure 5). However, the clustering performance may be biased when spectral representations are corrupted by non-biological sources [210]. Another option is to integrate soundscape metrics that are less sensitive to unwanted noise with clustering to improve the content interpretation of long-duration environmental recordings [211]. Despite this, there is still a need to examine the sounds that contribute to each audio cluster in order to understand the ecological factors that influence changes in acoustic diversity (e.g. the likely sources, sound production rates, pairing with physical processes, etc.).
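A minimal illustration of this workflow, assuming the LTSA has already been computed as a (time window x frequency bin) matrix in dB; the choice of k-means and of feature standardization are ours, not prescriptions from [207]:

```python
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def cluster_ltsa(ltsa_db, n_clusters=8, seed=0):
    """Group long-term-spectrogram time slices into acoustic clusters;
    the sequence of labels over time reveals diel/seasonal phenology."""
    X = StandardScaler().fit_transform(ltsa_db)   # rows = time windows
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(X)
    return km.labels_, km.cluster_centers_
```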
Acoustic activity detection and feature extraction
Rule-based detectors and feature extractions are widely employed in the assessment of the acoustical behaviour of soniferous marine animals. For example, the sounds of many fish species, including Sciaenidae (croakers and drums), are mainly detected as a sequence of pulses [212,213]. The temporal patterns, or rhythms, which can be compared to acoustic barcodes, contain information for discriminating sound types and eventually species. Thus, analyses of a signal's temporal pattern can be very effective for evaluating the behaviour and species composition of soniferous fish [214], though this can be non-trivial if the species have large vocal repertoires [165,152]. Key advances from marine mammal research, such as spectrogram correlation and parametrizing and grouping spectral peaks, may provide some important tools [173,215].
Another example of rule-based detection and feature extraction is the contour tracking of marine mammals' tonal sounds [216][217][218]. The structure of tonal repertoires can be investigated by grouping different clusters with similar modulation of representative frequencies on a contour. The determination of the number of clusters may be a challenge. Instead of evaluating the cluster number manually, it is possible to assess it by iteratively finding a minimum cluster number that can explain a user-defined variation threshold [177]. A similar technique may be useful for analysing the diversity of transient sounds.
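The iterative search for the smallest adequate cluster number might be sketched as follows, using the k-means within-cluster sum of squares as the variance proxy (our choice; the procedure in [177] may differ in detail):

```python
from sklearn.cluster import KMeans

def min_clusters_for_variance(X, threshold=0.9, k_max=30, seed=0):
    """Increase k until the clustering explains at least `threshold` of the
    total variance, measured as 1 - inertia / total sum of squares."""
    total_ss = ((X - X.mean(axis=0)) ** 2).sum()
    for k in range(1, k_max + 1):
        km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X)
        if 1.0 - km.inertia_ / total_ss >= threshold:
            return k, km
    return k_max, km  # threshold not reached within k_max clusters
```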
Source separation and deep learning
Sounds can travel more efficiently under water than in air, and low-frequency sounds can often travel long distances. This complicates marine soundscape analyses because it tends to result in the overlap and interference of multiple sounds. Machine learning-based source separation has achieved considerable improvements in speech and music [219,220]. Source separation models can be trained on labelled data or constructed in a purely unsupervised manner [221]. The latter is also called blind source separation, which requires appropriate assumptions about the behaviour of the source signals. Several studies have successfully demonstrated the separation of biological, anthropogenic and abiotic sounds using these methods [128,222]. Further, extensive work has been done by applying hydrophone/microphone array systems to isolate source signals of interest [23,223,224].

Figure 4 (recovered caption fragment): ... [131]. Bright green areas indicate fish choruses. (c) The spectrogram of the difference of (a) and (b), which results in the spectrogram of the transient sounds only. A high ANL can mask transient sounds, as highlighted by the areas surrounded in red; in part, some biological signals that blend into mass choruses can also appear as low-amplitude ambient noise. ANL, ambient background noise level; RL, received level. Dark blue, 30 dB re 1 µPa; bright yellow, 90 dB re 1 µPa.
The recognition of biological sounds can also be improved by applying deep learning techniques. Initial work has been conducted to recognize bird songs, frog calls and marine mammal vocalizations from large acoustic datasets [225][226][227][228]. Although deep learning has demonstrated its power in many acoustical applications, it requires prior knowledge of all sound sources of interest and sufficient training data (e.g. labelled acoustic datasets) for each of them, including rare sounds that are relevant in acoustic biodiversity assessment. Unfortunately, such a database is still not available for most marine ecosystems and most soniferous marine animals. On the other hand, unsupervised learning techniques can be employed to separate different sound sources and facilitate the evaluation of acoustic diversity. Sparse coding has been introduced in the extraction of acoustic features, which represent the dictionary of acoustic codes for recorded sound sources [229]. Self-learning algorithms, including non-negative matrix factorization and hidden Markov models, can learn hidden variables and associated temporal weights from audio data [126,230,231]. Source-specific independent subspaces can be derived subsequently by identifying source indicators for hidden variables. This approach has been tested in marine and terrestrial soundscapes, and performed well in separating biological sounds from other sound sources [222,232]. Deep learning methods have also been developed to detect and classify cetacean sounds [233]. While these are not yet being applied in datasets rich with bioacoustic sound types, they do work well despite often high levels of diverse background noise.
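As an illustration of the unsupervised route, a minimal non-negative matrix factorization of a magnitude spectrogram is sketched below (our own example; the component count and initialization are arbitrary choices, and deciding which basis corresponds to a biological source still requires inspection by the analyst):

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.decomposition import NMF

def nmf_separate(x, fs, n_sources=4, nperseg=2048):
    """Factorize a magnitude spectrogram V ~ W @ H: columns of W are
    spectral bases (candidate sources), rows of H their activations in
    time."""
    f, t, sxx = spectrogram(x, fs=fs, nperseg=nperseg)
    V = np.sqrt(sxx)                                   # non-negative magnitudes
    model = NMF(n_components=n_sources, init="nndsvda", max_iter=500)
    W = model.fit_transform(V)                         # (n_freq, n_sources)
    H = model.components_                              # (n_sources, n_time)
    return f, t, W, H
```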
Complementary data sources
As acoustic behaviour is often related to environmental and anthropogenic drivers, complementary data sources can provide context to the acoustic patterns observed. For example, salinity, temperature, dissolved oxygen and tidal cycle data can help in understanding the presence, intensity and rate of sound production [97,193,234]. Automatic identification system (AIS), vessel monitoring system (VMS), boat ramp surveys or visual observations can all provide information on vessel activity that masks biological sound or drives a particular change in behaviour [235,236]. Traditional observation techniques can provide a snapshot of physical biodiversity, abundance, species composition, habitat complexity, animal size distribution and surrounding habitat. This is vital validation information [154,184,237] and should, where possible, be ongoing [155] to capture environmental changes that can affect biological sound production and therefore local soundscape acoustic complexity and diversity. These complementary datasets help validate the bioacoustic diversity at a particular point in time, but also put that time period into the context of the wider influences that can change the soundscape on various temporal scales.
How to listen forward: needs and recommendations for acoustic diversity
While assessing marine acoustic diversity is a nascent field, we have reviewed how prior terrestrial work and other ocean research have provided a foundation; from this foundation, we may build guidelines for designing future acoustic biodiversity studies. We have outlined recommendations below of how to go about this biodiversity assessment task; many of these points highlight the advances or lessons noted above. These suggestions are naturally limited by what we can do now; thus, this section is followed by several needs that may support the field.

(i) Replication and site-selection. Within-reef and within-habitat soundscape variation is still being addressed [94,103]; thus, within-site replication is still important to evaluate at a location or habitat. This enables assessments of both α and β biodiversity. Recordings that can be considered replicates should be evaluated to address within-site variation. Because acoustic behaviours are often driven by environmental factors or geophysical conditions, spatio-temporal replicates selected to assess variation between biodiversity or habitat characteristics must have similar geophysical conditions (e.g. salinity, temperature, oxygen cycles, currents, tidal cycle). Given that true natural replicates are challenging, sites may be better differentiated by regression-type analyses of gradient differences (i.e. a range of biodiversity or habitat conditions) rather than forced groupings. If we are attempting to evaluate within-site α diversity, a nearby outgroup (control site or sites) is essential.
(ii) Longer timescales. While some (often visual) surveys may be necessarily short (i.e. limited by SCUBA), an advantage of passive acoustics is that sensors can be deployed for long durations (months, seasons and beyond). Longer and repeated surveys have a clear and easily applied advantage in reducing observation biases, a vital factor in meeting the clear conservation and management aims of biodiversity assessments. Thus, we recommend sufficient recording time to include and identify the cycles in soniferous behaviours of interest that may be site- and species-specific, rather than short-term or subsampled recording, such as the same period of each day.
(iii) Evaluate detectability. No single method or count of animals is perfect, and biodiversity assessments can benefit from correcting animal and species counts by addressing (or estimating) those present but missed in the assessment. This quantification of detectability (the relationship between what is present in a survey area and what is actually detected) is a major issue in terrestrial biodiversity measurements, and is likely to be a similar, or perhaps greater, issue for marine acoustic measures [184], given the substantial differences in factors such as call levels, spatio-temporal variability, ambient noise levels and propagation noted earlier.
The influence of these factors on detectability will vary by habitat (and maybe site), and we often still need to evaluate precisely how much a factor, such as ambient noise, affects call or acoustic diversity detectability. Fortunately, detectability has often been assessed in terrestrial biodiversity assessments [155,238,239]. Indeed, terrestrial detectability assessments have often found similarities to the general trends noted here, particularly because species detection probabilities will rarely approach 1 unless many surveys are conducted [155]. Further, long-term monitoring programmes allow for pooling of data, but also need to correct for variation in detectability when comparing species richness over time and between locations [154]. These parallel lessons suggest that, while many covariates will differ between terrestrial and marine environments, perhaps we can apply the analysis frameworks (such as effective sampling distance, species heterogeneity, and parsing by habitat type and species) to improve the reliability of marine acoustic diversity assessments.
(iv) Propagation-related aspects. Depth, local bathymetry and substrate should be identified, and weather and other noise sources evaluated, especially when comparing among sites. This evaluation supports measurements of detectability.
(v) Concurrent measurements. Multiple methods to count animals and species greatly support biodiversity measurements. These are noted above and include visual surveys, measures of rugosity and quantifying physical (currents) and biogeochemical parameters.
(vi) Pilot data (made available). One must try to understand what sources are influencing the variability of the sound field, or at least the range of that variability, so that proper recording guidelines (such as the timeframe of observation) are sufficient. Pilot data are valuable in any study, and bioacoustics is no exception. These data should be made available as electronic supplementary material to guide and support why measurement decisions were made.
(vii) Reduce noise. Many acoustic tools exist to improve signal-to-noise ratios in analyses, including methods mentioned earlier. Given the propagation challenges of ocean environments and the abundance of natural and anthropogenic noise, noise reduction is vital to improving biodiversity analyses and the reliability of data. If the masking noise is limited in time, one may also simply remove files contaminated by ship or vessel noise from analyses. Further, the instrument noise of the recording system is a crucial parameter and should be reported.
Bioacoustical analysis
While no method has yet proved clearly successful in extracting biodiversity data from acoustic recordings, and applications of even an array of (terrestrial) acoustic diversity indices seem ineffective [39], we see several potential ways forward with acoustic analyses based on initial successes.
This could include identifying, separating and classifying individual signals. Many fish and aquatic invertebrate sounds are found among the low-frequency noise of their habitats; noise reduction around those signals of interest is key to proper sound detection. The previously discussed source separation techniques offer substantial promise, but need to ensure that sounds with both high and lower occurrence rates are detected and evaluated. Deep learning methods have shown progress in the enhancement of audio signals in a noisy matrix [126,240]. We may also iteratively find acoustic clusters that can explain a user-defined variation threshold [177]; a similar technique may be useful for analysing the diversity of transient sounds. Marine mammal research efforts have long focused on finding signals within noise. Both spectrogram correlation and the detection of frequency-contour sounds by detecting and grouping spectral peaks should be applied to other sounds [173,241], and at the very least, their detection and classification can be implemented into a more conglomerate acoustic biodiversity metric. Species-specific acoustic neural networks should be tested on the detection and classification of multiple species' sounds [233]. Future integration of these techniques in the analysis of marine soundscapes should maximize the potential deliverables regarding the acoustic behaviour of soniferous fishes and invertebrates, vital biological sources important to the variability of marine soundscapes.
More coarsely, without pulling out individual signals, chorus peak detection and area under the curve measurements have proved initially successful [7,242]. This may be a step that can be used when it proves difficult to pull transient and quiet signals from the cacophony of other sounds.
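A minimal sketch of chorus peak detection and area-under-the-curve measurement on a band-limited sound-level time series (our own example; the median baseline and the prominence threshold are assumptions, not values from [7,242]):

```python
import numpy as np
from scipy.signal import find_peaks

def chorus_metrics(spl_db, t_hours, min_prominence=3.0):
    """Detect chorus peaks in a band-limited SPL time series and report
    the area under the curve above a median baseline."""
    baseline = np.median(spl_db)
    peaks, props = find_peaks(spl_db, prominence=min_prominence)
    auc = np.trapz(np.clip(spl_db - baseline, 0.0, None), t_hours)
    return peaks, props["prominences"], auc
```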
Additional acoustic measurements
Many sounds are inherently directional. The use of hydrophone arrays can help localize sound sources and track individual calling animals [103]. Such data will also allow us to address the volume or area being sampled [243] (figure 6) and assess diversity within this volume and potentially localize individuals (e.g. [244]). Addressing the sampling volume allows us to account for differences in detection ranges. This will allow for better assessments of biological hotspots as well as potentially enumerating the calling animals for density estimations.
Sound directionality may also be determined by measuring acoustic particle motion (velocity or acceleration) [244]. As technology has improved, measurements of particle motion have become increasingly common [245,246] and, as a direct cue for fauna, it is important to test particle motion measurements of the soundscape to investigate its relationship with biodiversity and acoustic communication. In addition, the instrumentation used to measure particle motion offers an alternative method to sound source localization that requires fewer individual sensors (compared with a hydrophone array) and may improve signal detection in noise (e.g. [182,183]). Understanding variations in particle motion, as detected by receptors, facilitates a better understanding of why changes occur. However, in relating the characteristics of a soundscape to the biodiversity within it, whether the animal perceives sound pressure or particle motion is not necessarily important and sound pressure often provides sufficient information to the observer on the sounds produced. Sound pressure is also more readily measurable.
Although acoustic diversity appears to be related to taxonomic diversity, in order to be fully representative of the biodiversity of a habitat or site, better knowledge of the sound sources and their repertoires is necessary, particularly for fish, which make a large contribution to marine soundscapes. The traditional method to assess an animal's sound repertoire is to bring it into a tank and record its sounds [247]; however, there is an inherent potential bias from the effects of captivity. The alternative is labour-intensive, in situ validated acoustic observation, such as that taken on re-breather dive equipment [138]. Biologging acoustic-recording tags are increasingly used for marine mammals [248]. A combination of attached or implanted acoustic telemetry tags and three-dimensional passive acoustic localization may alternatively help assess additional sound source identities, calling rates and propagation distances in the wild.
Open data and open tools
To date, sound archives have been established for most marine mammals and some soniferous fish (e.g. Macaulay Library, Watkins Marine Mammal Sound Database and Moby Sound Archive). However, a comprehensive audio database of all soniferous marine organisms is still not available. A library of labelled fish and invertebrate signals, with metadata such as region recorded, would go a long way in sorting, comparing and analysing large datasets. Additionally, soundscapes in many marine ecosystems, such as the deep ocean, remain relatively unstudied. Under these circumstances, products using acoustic methods for assessing marine biodiversity are limited. In recent years, long-term underwater recordings, such as data obtained by cabled observatories and autonomous recorders (www.iqoe.org/systems), have been archived. These data serve as essential information to investigate the ecosystem-specific soundscape characteristics as well as the relationship between acoustic diversity and marine biodiversity.
Future developments of open data and tools are also critical for expanding the ecological dimensions of acoustic data. An open platform, which allows Internet users to freely access soundscape recordings, will facilitate the identification of biological sounds. A similar approach has proven effective in citizen science (to name a few: iNaturalist, XenoCanto, eBird, Whale FM). The establishment of an open database would not only facilitate public involvement, it could also encourage the future development of analysis toolboxes, while ensuring maintenance of the appropriate control protocols. The Bird Audio Detection challenge, for example, has resulted in a growth in the number of deep learning tools applied to avian acoustic research [228]. Utilization of signal processing and audio recognition tools is critical for information retrieval from marine soundscapes. Currently, tools such as PAMGuide, PAMGuard, Triton, Soundscape Viewer and several ecoacoustic packages in the R programming environment have been employed in various acoustic applications of marine biodiversity (table 1). However, it is necessary to develop more tools to meet the needs of different users and project objectives. Open-source tools will be important drivers promoting the future development of advanced techniques in soundscape information retrieval.
Ongoing development of processing techniques can be facilitated through convening an open forum where validated soundscape datasets (i.e. known acoustic signals with matching biodiversity, habitat, geophysical and bathymetric data) are compared by participants in a blind study. Results are then presented and compared in a transparent format to evaluate processing techniques and results, similar to that of GIS mapping and seafloor substrate and habitat characterization (e.g. Shallow Water Survey, Wellington, New Zealand, 2012) or modelling of fish distribution (e.g. Geohab, Lorne, Australia, 2014).
Given the relative ease of collecting acoustic data with modern technology, but also the enormity of many of these datasets (into the terabytes), cloud computing and leveraging powerful processing will probably be necessary to adequately exploit this information. The simplicity of soundscape data collection may also facilitate and improve observations of biodiversity, particularly in places where routine monitoring via other methods is limited. Such monitoring also requires relatively inexpensive acoustic recorders and a means to either conduct on-board processing at the recording site or a cost-effective means of relaying relatively large acoustic datasets to the computing cloud. Much of this data collection and analysis should be conveyed to the Web in near-real time.
Building from this capability to link people and data, there is a need for a global field experiment that could be designed to test some of the methods. This would allow us to address how acoustic diversity metric methods vary based on regions and habitat types (to evaluate β-diversity). Ideally, acoustic diversity methods should be generalizable, reproducible and comparable irrespective of geographical and environmental factors. Placing multi-site data or a subset of data, measuring the same parameters, in different parts of the world and in different ecosystems, into common, online repositories would support common analyses to address this need.
Summary
There is a growing call for soundscape measurements to aid biodiversity assessments and, given the diversity of sounds in aquatic environments, such concepts are promising. Yet, it is vital to also address the limitations of these methods so that they are used properly for management and research purposes. The goal of these efforts is ultimately to improve and broaden biodiversity estimates to better determine and monitor critical conservation areas. Therefore, the tools must ultimately serve the public and resource managers. Passive acoustics has an advantage over many other methods in that it can be cost-effective to observe over long timescales. In the ocean, sound cues travel efficiently, which is a benefit for detecting sounds but a challenge in that lower-amplitude and less frequent sounds, which are critical to conservation monitoring, can be masked. We have provided an assessment of this field's progress to date and a critical evaluation of the considerations for ecoacoustic assessments of biodiversity. The bioacoustic environment is immensely variable. One must try to understand what sources are influencing this variability, or at least the range of variability, so that proper recording guidelines (such as the timeframe of observation) are followed. These efforts and our recommendations may be particularly valuable when the level of biodiversity and community complexity is unclear, and on a sliding (non-binary) scale. Key needed aspects for these acoustical biodiversity assessments include: (i) replication and site-selection, (ii) recording over sufficient timescales, (iii) evaluation of detectability, (iv) evaluation of propagation-related aspects, (v) making concurrent measurements of complementary variables, (vi) collection of pilot data and making those data available, and (vii) seeking to reduce acoustic noise. Further, a suite of established and emerging passive acoustic analysis tools exists, and can potentially be incorporated into developing a metric. We see these tools as having great potential, but we suggest a cautious approach to applying current acoustic indices to assess marine biodiversity. Yet, with key preliminary data, an initial understanding of the patterns and variability of a habitat, and new analytical tools, there is substantial promise in quantifying marine bioacoustic diversity and supporting management needs.
Data accessibility. This article has no additional data.
Authors' contributions. T.A.M. provided the initial draft. All other authors revised and edited this paper equally.
Competing interests. We declare we have no competing interests.
Funding. Funding for development of this article was provided by the collaboration of the Urban Coast Institute (Monmouth University, NJ, USA), the Program for the Human Environment (The Rockefeller University, New York, USA) and the Scientific Committee on Oceanic Research. Partial support was provided to T.A.M. from the National Science Foundation grant OCE-1536782.
"year": 2020,
"sha1": "8521681e93fce161cb90b6aa932b4adf8fb9875d",
"oa_license": "CCBY",
"oa_url": "https://royalsocietypublishing.org/doi/pdf/10.1098/rsos.201287",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8521681e93fce161cb90b6aa932b4adf8fb9875d",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine",
"Environmental Science"
]
} |
Tracing water vapor and ice during dust growth
The processes that govern the evolution of dust and water (in the form of vapor or ice) in protoplanetary disks are intimately connected. We have developed a model that simulates dust coagulation, dust dynamics (settling, turbulent mixing), vapor diffusion, and condensation/sublimation of volatiles onto grains in a vertical column of a protoplanetary disk. We employ the model to study how dust growth and dynamics influence the vertical distribution of water vapor and water ice in the region just outside the radial snowline. Our main finding is that coagulation (boosted by the enhanced stickiness of icy grains) and the ensuing vertical settling of solids results in water vapor being depleted, but not totally removed, from the region above the snowline on a timescale commensurate with the vertical turbulent mixing timescale. Depending on the strength of the turbulence and the temperature, the depletion can reach factors of up to ∼50 in the disk atmosphere. In our isothermal column, this vapor depletion results in the vertical snowline moving closer to the midplane (by up to 2 gas scale heights) and the gas-phase C/O ratio above the vertical snowline increasing. Our findings illustrate the importance of dynamical effects and the need for understanding coevolutionary dynamics of gas and solids in planet-forming environments.
INTRODUCTION
Water plays a central and versatile role during planet formation. When present as vapor, it acts as a coolant for the gas and controls the chemistry of primitive materials by setting the oxidation state. In the form of ice, it represents a considerable fraction of the solid mass budget and has the potential to trap other, more volatile molecules in the form of amorphous ice. The presence of water on planets influences geological processes and is essential for creating habitable conditions. As such, understanding water's journey from the molecular cloud to the protoplanetary disk and its delivery onto planetary embryos are active areas of research (e.g., Ciesla & Cuzzi 2006;Mandell et al. 2007;Visser et al. 2011;van Dishoeck et al. 2014).
During the protoplanetary disk phase, the outcome of dust coagulation (the first stage of planet formation) is influenced by how much water is present, mainly because frozen-out ice on grain surfaces is believed to enhance their stickiness and thus promote particle growth (e.g., Supulver et al. 1997; Gundlach & Blum 2015). Models of planetesimal formation have for many years been finding that growth beyond a meter in size is hard to achieve (see Johansen et al. 2014, for a recent review). Water ice plays an important role in three recently proposed solutions for crossing the meter-size barrier. For example, highly-porous grains can potentially grow rapidly enough to overcome radial drift (Kataoka et al. 2013; Krijt et al. 2015), but this is only possible when the porous grains can survive high-velocity collisions, something only icy grains are potentially capable of accomplishing (Dominik & Tielens 1997; Wada et al. 2013; Gundlach & Blum 2015). Close to the radial snowline, condensation itself can increase the sizes of individual dust grains significantly (Ros & Johansen 2013). Finally, by freezing out onto grains, water can also increase the local solid abundance by a factor of ∼2 (Lodders 2003), which helps to create conditions for triggering the streaming instability (SI), allowing the formation of clumps that can gravitationally collapse and form planetesimals almost instantaneously (Youdin & Goodman 2005; Bai & Stone 2010a,b; Drażkowska & Dullemond 2014; Carrera et al. 2015).
At the same time, the evolution of the solids influences water vapor in a number of ways. First, by dominating the disk's opacity at UV wavelengths, small dust grains play an important role in setting the temperature structure in the disk, influencing where in the disk water is present as ice or vapor (e.g., Min et al. 2011). As such, assumptions about the dust population (its total mass, size distribution, and vertical and radial distributions) are an important part of interpreting observations of water lines and in connecting observed intensities to physical abundances (e.g., Hogerheijde et al. 2011; Antonellini et al. 2015, 2016; Blevins et al. 2016). Moreover, dust grain dynamics (e.g., radial drift, gravitational settling) are expected to alter the distribution of water in the radial (Cuzzi & Zahnle 2004; Ciesla & Cuzzi 2006; Salyk et al. 2008) and vertical (Meijerink et al. 2009) direction.
Starting with Bergin et al. (2010), Hogerheijde et al. (2011), Du et al. (2015), and now Du et al. (2016), we have increasingly clear evidence that we have a water problem: models predict too much water in the disk surface layers. In the work of Du et al. (2015), this was particularly acute in TW Hya between 5−10 AU, where the 179 micron water line was expected to shine out but was not detected. This observed under-abundance of water vapor can potentially be understood in terms of the so-called 'cold finger' effect of Stevenson & Lunine (1988), but working in the vertical direction: water vapor is mixed from the vapor-rich atmosphere down to the disk midplane and freezes out onto large, settled grains that act as a sink because they cannot be lofted back up. As a result, vapor is removed from the disk's surface layers (Meijerink et al. 2009). It is unclear, however, how far this depletion can continue. For example, the efficiency of the vapor diffusion toward the midplane depends on the details of the local dust population (Monga & Desch 2015). Moreover, the removal of volatiles from the disk atmosphere is not necessarily a one-way process, as small grains capable of transporting ice back to the disk surface are continuously being produced in destructive collisions of larger bodies (Dullemond & Dominik 2005).
With the evolution of water and dust so intimately connected, we build on the work of Krijt & Ciesla (2016) and set out to model the temporal evolution of the vertical distributions of solids, water vapor, and water ice in the region outside the radial snowline. The goal is to develop a framework in which the relevant processes can be simulated simultaneously and self-consistently, connecting the processes happening in the midplane (coagulation, fragmentation, freeze-out of volatiles) to those taking place in the disk atmosphere (sublimation and volatile release), allowing us to study the interplay between the dust and water evolution.
MODEL
For illustrative purposes, we will focus on an isothermal column in a typical protoplanetary nebula, while assuming a constant, vertically-integrated abundance of gas, water and refractory dust. Here, we describe our nebula model and summarize the physics behind the vertical motions of vapor and solids, dust coagulation, and water-dust interaction. In Sect. 3, we develop a numerical model for simulating these processes over relevant timescales.
Disk structure
Our nebula model is based on the Minimum Mass Solar Nebula (Weidenschilling 1977; Hayashi 1981), with a radial gas surface density profile that can be described as

Σ_g(r) = 1700 (r/AU)^{-3/2} g cm^{-2}. (1)

The temperature is assumed to be constant in the vertical direction, and varies with radius as

T(r) = 280 (r/AU)^{-1/2} K. (2)

For these assumptions, the vertical profile of the gas density equals

ρ_g(z) = [Σ_g/(√(2π) h_g)] exp(−z²/2h_g²), (3)

with h_g = c_s/Ω the gas pressure scale-height, c_s = (k_B T/m_g)^{1/2} the sound-speed, Ω the Keplerian frequency, and k_B the Boltzmann constant. We set m_g = 2.3 amu. A small fraction of the gas is water vapor, whose local mass density we denote as ρ_H2O. While ρ_H2O ≪ ρ_g, and neglecting advection, the diffusion equation for the water vapor is given as (Dubrulle et al. 1995)

∂ρ_H2O/∂t = ∂/∂z [ D_g ρ_g ∂(ρ_H2O/ρ_g)/∂z ], (4)

where the turbulent gas diffusivity D_g is parametrized as (Shakura & Sunyaev 1973)

D_g = α c_s h_g, (5)

and does not depend on z in the isothermal approximation. The vertical diffusivity implies a vertical mixing timescale of t_D ∼ h_g²/D_g ∼ 1/(αΩ).
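For orientation, the column structure can be encoded in a few lines. This is our own sketch in cgs units; the MMSN normalizations (1700 g cm⁻² and 280 K at 1 AU, after Hayashi 1981) stand in where the printed coefficients of Eqs. 1-2 did not survive extraction, so treat them as assumptions.

```python
import numpy as np

kB = 1.381e-16          # erg K^-1
mg = 2.3 * 1.661e-24    # g, mean molecular mass
G, Msun, AU = 6.674e-8, 1.989e33, 1.496e13

def column_structure(r_au, alpha=1e-3):
    """Radial profiles (Eqs. 1-2) and derived vertical quantities (Eqs. 3, 5)."""
    sigma_g = 1700.0 * r_au ** -1.5               # g cm^-2
    T = 280.0 * r_au ** -0.5                      # K, vertically isothermal
    cs = np.sqrt(kB * T / mg)                     # sound speed
    omega = np.sqrt(G * Msun / (r_au * AU) ** 3)  # Keplerian frequency
    hg = cs / omega                               # gas scale-height
    Dg = alpha * cs * hg                          # z-independent diffusivity
    rho_mid = sigma_g / (np.sqrt(2.0 * np.pi) * hg)
    return sigma_g, T, cs, omega, hg, Dg, rho_mid

def rho_gas(z, hg, rho_mid):
    """Gaussian vertical density profile (Eq. 3)."""
    return rho_mid * np.exp(-0.5 * (z / hg) ** 2)
```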
2.2. Dust grain model
Our initial dust population is assumed to be a monodisperse distribution of refractory 'monomers' with grain size s_• = 1 µm and internal density ρ_• = 2.6 g cm⁻³. The monomers can grow through collisions to form larger aggregates. While the (average) internal density of individual aggregates can in principle become much lower than that of the monomers themselves -- in particular when low-velocity collisions result in hit-and-stick growth (e.g., Kempf et al. 1999; Okuzumi et al. 2012) -- we will assume in this work that growth leads to compact aggregates, with a fractal dimension close to 3, as depicted in Fig. 1. In that case, an aggregate's mass m and size s are related through m = (4/3)πs³ρ_int, where we will use ρ_int ≈ ρ_•. The geometrical cross section of an aggregate is taken to equal σ_geo = πs², and the surface area that is available for surface chemistry is written as σ_chem = 4πs².
Apart from its size, we will follow the amount of water ice on the aggregate as it moves through the disk. With aggregate-aggregate collisions, sublimation, and condensation all influencing the amount of ice, following how the ice is distributed on the aggregate is highly complex. Here, we only solve for the total number of water molecules on a grain, n_ice, and assume the majority of the ice is concentrated close to the aggregate's surface. Furthermore, we assume the ice mantle does not significantly influence the grain's size, mass, or average density. The number of water molecules can be converted into an ice/rock mass ratio

f_ice = n_ice m_H2O/m, (6)

with m_H2O = 18 amu the mass of a single water molecule. The physical structure of a single dust aggregate is then fully characterized by its size (or mass) and ice fraction f_ice (or n_ice). We return to the assumptions made here and their impact on the simulations in the remainder of this work in Sect. 5.
Dynamics
Including dust grain dynamics is vital for this study because ice-coated grains can transport water molecules between different regions of the disk.

Figure 1. Schematic of the dust particle model (not to scale). Aggregates, conglomerates of µm-size monomers, are compact and have a fractal dimension close to 3, i.e., m ∼ πρ_•s³ (Sect. 2.2). Their surface has a small-scale roughness with an average curvature K ∼ 1/s_• reflecting the monomer size and exchanges water molecules with the gas via condensation and sublimation (Sect. 2.3). The fragmentation velocity of the aggregate is a function of its ice/rock ratio (Eqs. 6 and 10). Aggregate image: Seizinger et al. (2013).

An important parameter for dynamics (and coagulation) is the particle's stopping time,

t_s = ρ_• s/(v_th ρ_g) for s < 9λ_mfp/4 (Epstein), and t_s = 4ρ_• s²/(9 v_th ρ_g λ_mfp) for s ≥ 9λ_mfp/4 (Stokes), (7)

where v_th = (8/π)^{1/2} c_s is the mean thermal velocity of the gas and
λ_mfp = m_g/(σ_mol ρ_g) the molecular mean free path. It is common to define the dimensionless Stokes number St ≡ Ωt_s. We note that displacing a dust grain vertically changes its Stokes number and potentially even the drag regime the particle is in (Epstein or Stokes) as both ρ_g and λ_mfp vary with height in the disk. The vertical mixing of dust grains is then controlled by the dust particle diffusivity, related to D_g through the Schmidt number (Youdin & Lithwick 2007),

D_d = D_g/(1 + St²). (8)

For a given stopping time, an effective vertical velocity can be defined as

v_eff = −D_d z/h_g² − Ω² t_s z/(1 + Ωt_s), (9)

which can be used to calculate the trajectory of an individual grain (e.g., Ciesla 2010). The factor (1 + Ωt_s)⁻¹ in the second term on the RHS of Eq. 9 is there to ensure that the vertical velocity does not exceed the vertical component of the Kepler velocity for large grains with Ωt_s ≫ 1 (Brauer et al. 2008). In the absence of collisions, the dependence of v_eff on particle size (through the stopping time) will result in large grains settling to the midplane while small grains (those with Ωt_s ≪ α) keep a vertical distribution very similar to that of the gas (Dullemond & Dominik 2004; Ciesla 2010). This picture can change when collisions are frequent, in which case small grains, after being released in collisions of larger bodies, can become trapped in the disk midplane (Krijt & Ciesla 2016).

Depending on the velocity at which a grain-grain collision occurs, grains can stick together or be (partially) destroyed (e.g. Blum & Wurm 2008; Güttler et al. 2010). When calculating relative collision velocities, we take into account Brownian motion, turbulence (Ormel & Cuzzi 2007), and differential settling, and include the dependence of the various velocity sources on the location, z, at which the collision takes place.
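A compact rendering of the dynamics quantities of Eqs. 7-9 (our sketch; the value of the molecular cross section σ_mol is an assumed representative number):

```python
import numpy as np

kB = 1.381e-16
mg = 2.3 * 1.661e-24
sigma_mol = 2e-15  # cm^2, H2 collisional cross section (assumed value)

def stopping_time(s, rho_int, rho_g, cs):
    """Stopping time in the Epstein or Stokes regime (Eq. 7)."""
    vth = np.sqrt(8.0 / np.pi) * cs
    lmfp = mg / (sigma_mol * rho_g)
    if s < 9.0 * lmfp / 4.0:                                   # Epstein
        return rho_int * s / (vth * rho_g)
    return 4.0 * rho_int * s ** 2 / (9.0 * vth * rho_g * lmfp)  # Stokes

def v_effective(z, ts, hg, omega, Dg):
    """Effective vertical velocity (Eq. 9): diffusive advection correction
    plus the settling velocity capped at the vertical Kepler component."""
    St = omega * ts
    Dd = Dg / (1.0 + St ** 2)  # particle diffusivity, Eq. 8
    return -Dd * z / hg ** 2 - omega ** 2 * ts * z / (1.0 + St)
```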
The maximum velocity at which collisions can result in growth is denoted by the fragmentation velocity v_frag. We use a variable fragmentation threshold to capture the effect frozen-out water ice has on the stickiness of the grains, writing

v_frag = 10 m s⁻¹ for f_ice ≥ f_*, and v_frag = 1 m s⁻¹ for f_ice < f_*. (10)

The factor of 10 difference between refractory and icy grains is supported by both numerical simulations (Dominik & Tielens 1997; Wada et al. 2013) and experimental results (Gundlach & Blum 2015). The threshold value for f_* is less well constrained by theory or experiments. The sticking of microscopic spheres involves only a surface layer with thickness δ/s ∼ 10⁻² (e.g., Chokshi et al. 1993; Krijt et al. 2013), indicating a relatively small amount of water ice can be sufficient to alter the sticking properties (i.e., f_* ≪ 1). For a larger aggregate, however, the details will depend on how exactly the water ice is distributed. Here, we will use f_* = 0.1, and note that the main results are not very sensitive to this choice as long as f_* ≪ 1.
Condensation/sublimation
The saturated vapor pressure for water on a flat surface equals (Supulver & Lin 2000)

P_sat(T) = P_0 exp(−T_a/T), (12)

with the constants P_0 and T_a taken from that work. On a non-flat surface, however, this expression has to be modified to include the effect of surface curvature. Following Sirono (2011), we write

P^K_sat = P_sat exp(Kγv/k_B T), (13)

with γ = 69 erg cm⁻² the surface energy of water ice, v = 3.3 × 10⁻²³ cm³ the volume of a water molecule, and K the local curvature. With our aggregates being conglomerates of micron-size grains, we will assume that all aggregates, independent of their macroscopic size, have a characteristic (i.e., averaged over their entire surface) local curvature of K = 1/s_• (see Fig. 1). In addition, we assume that this K is not influenced by the presence of water ice. For these parameters, typically Kγv/k_B T ∼ 10⁻² and P^K_sat ≈ P_sat. The assumptions made here ignore some interesting effects, which we discuss in detail in Sect. 5.
For a given vapor pressure, the sublimation and condensation rates (in g cm⁻² s⁻¹) are given by (e.g., Supulver & Lin 2000)

R_cond = P_H2O (m_H2O/2πk_B T)^{1/2}, (14)

R_subl = P^K_sat (m_H2O/2πk_B T)^{1/2}, (15)

which we can combine to find the rate of change of the total number of water molecules on a given grain as

dn_ice/dt = (σ_chem v_th/4k_B T)(P_H2O − P^K_sat), (16)

where we have written v_th = (8k_B T/πm_H2O)^{1/2} as the thermal velocity of the water molecules. The RHS of Eq. 16 is positive when P_H2O > P^K_sat (condensation dominates) and negative when P_H2O < P^K_sat (sublimation dominates).
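Eq. 16 translates directly into code (our sketch; a sticking coefficient of unity is implicitly assumed, and P_sat_K must be supplied from Eq. 13):

```python
import numpy as np

kB = 1.381e-16             # erg K^-1
mH2O = 18.0 * 1.661e-24    # g

def dn_ice_dt(s, P_H2O, P_sat_K, T):
    """Net exchange rate of water molecules for one grain (Eq. 16):
    positive means condensation (P_H2O > P_sat_K), negative sublimation."""
    vth = np.sqrt(8.0 * kB * T / (np.pi * mH2O))
    sigma_chem = 4.0 * np.pi * s ** 2
    return sigma_chem * vth * (P_H2O - P_sat_K) / (4.0 * kB * T)
```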
NUMERICAL METHODOLOGY
In this Section we develop a framework that allows us to simultaneously model the physical processes described in Sect. 2. The model builds on the method of Krijt & Ciesla (2016), where coagulation and vertical mixing of dust were combined. Apart from the addition of the water vapor, an important difference with that work is that we now solve for the dust size distribution self-consistently, rather than assuming a coagulation/fragmentation steady state from the start.
Set-up and initial conditions
The simulated region is a single vertical column at a radius r from a sun-like star, and the vertically integrated refractory dust and water+ice contents of the column are taken to equal Σ_d/Σ_g = 5 × 10⁻³ and Σ_H2O+ice/Σ_g = 5 × 10⁻³ (e.g., Lodders 2003). The column is described by a total of N_b stacked boxes placed between z = 0 and z = 4h_g with heights L = 4h_g/N_b and volumes V = L³. Inside a single box, the temperature, gas density, and vapor density are assumed to be uniform, and the dust particles inside the box are treated as being well-mixed spatially. The dust population in the column is represented by N_p super-particles.
Every super-particle i represents N_i physical particles that together have a mass M_i. We choose to keep M_i fixed for all super-particles, so that N_i = M_i/m_i, with m_i the mass of an individual particle. The total mass of all physical particles must equal 50% of the total dust mass in the column, i.e., N_p M_i = Σ_d(r)L²/2. Gas-phase water molecules are not followed individually, but we solve for the water vapor density ρ_H2O(t) inside every box taking into account diffusion between neighboring boxes and losses/gains through condensation/sublimation, while assuming that the bulk gas density and temperature are not influenced by the water vapor and dust distributions.
At t = 0, before we start the calculation, we distribute the refractory dust and water vapor such that it is well-mixed with the gas. The N_p refractory dust particles start out as monomers with radii s_i = s_• = 1 µm and initial z_i ≥ 0 drawn from a Gaussian with half-width h_g, resulting in (ρ_d/ρ_g) ≈ Σ_d/Σ_g = 5 × 10⁻³ inside every cell. The water vapor is first distributed according to (ρ_H2O/ρ_g) = Σ_H2O/Σ_g = 5 × 10⁻³, and then (without being allowed to diffuse) given the opportunity to equilibrate by freezing out onto the present dust grains. In the region just beyond the radial midplane snowline (r ≳ 3.2 AU in our disk model), this results in two distinct vertical regions in the isothermal approximation. Due to low gas densities (ρ_H2O < ρ^K_sat), the water stays in the vapor phase at high z, resulting in a region of constant water-vapor abundance and grains with f_ice = 0. Closer to the midplane, the original well-mixed water vapor abundance exceeds ρ^K_sat and the molecules condense onto the dust grains. This results in a region where ρ_H2O = ρ^K_sat, leading to a decreasing n_H2O/n_H2 with decreasing z (since ρ^K_sat is constant) and grains covered in ice. We will refer to the boundary between these regions as the vertical snowline. The (initial) location of this line can be found by solving ρ_H2O = ρ^K_sat, resulting in

z⁰_SL = √2 h_g { ln[ Σ_H2O k_B T/(√(2π) h_g m_H2O P^K_sat) ] }^{1/2}, (17)

where P^K_sat is a function of temperature (Eq. 13). In the model outlined here (e.g., vertically isothermal), the vertical snowline will remain at z⁰_SL when vapor diffusion and vertical motions of dust grains are ignored (even if growth is taking place). As such, we can use this 'static' limit as a reference point for comparing simulation results.
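Solving ρ_H2O = ρ^K_sat for the initial snowline height is a one-liner once the saturated density is known (our sketch of Eq. 17, written in terms of the midplane gas density):

```python
import numpy as np

kB = 1.381e-16
mH2O = 18.0 * 1.661e-24

def z_snowline_initial(sig_ratio, rho_mid, P_sat_K, T, hg):
    """Height where a well-mixed vapor density sig_ratio * rho_g(z) drops
    to the saturated density (Eq. 17). sig_ratio = Sigma_H2O / Sigma_g."""
    rho_sat = P_sat_K * mH2O / (kB * T)
    arg = sig_ratio * rho_mid / rho_sat
    return hg * np.sqrt(2.0 * np.log(arg))  # only defined for arg > 1
```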
The following Sections detail how the dust and vapor are evolved in time. Specifically, we discuss how coagulation (Sect. 3.3) and gas-grain chemistry (Sect. 3.4) are calculated per box, and how dust dynamics (Sect. 3.2) and vapor diffusion (Sect. 3.5) allow dust and vapor to move from one box to another.
Dust dynamics
Grain dynamics (gravitational settling and turbulent vertical diffusion) are calculated according to Ciesla (2010), in which a particle's vertical location after some time Δt is obtained as

z_i(t + Δt) = z_i(t) + v_eff Δt + R_1 (2D_d Δt/ζ)^{1/2}, (18)

with ζ = 1/3, R_1 a random number drawn uniformly from [−1, 1], and the effective velocity given by Eq. 9, which depends on the location and size of the grain. Based on the dynamics, the global time-step Δt_global is chosen such that

|v_eff| Δt_global + (2D_d Δt_global/ζ)^{1/2} < L (19)

for all super-particles, to ensure that grains do not traverse multiple box boundaries in a single time-step. It is this global timestep that we base the rest of our calculations on.
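One Monte Carlo kick of Eq. 18 might look as follows (our sketch; reflecting the particle through z = 0 to keep it in the simulated half-column is our boundary choice):

```python
import numpy as np

rng = np.random.default_rng(42)

def advance_particle(z, dt, ts, hg, omega, Dg, zeta=1.0 / 3.0):
    """Advance a grain vertically by one step following Ciesla (2010):
    deterministic drift at v_eff plus a random kick R1 in [-1, 1]."""
    St = omega * ts
    Dd = Dg / (1.0 + St ** 2)
    v_eff = -Dd * z / hg ** 2 - omega ** 2 * ts * z / (1.0 + St)
    R1 = rng.uniform(-1.0, 1.0)
    return abs(z + v_eff * dt + R1 * np.sqrt(2.0 * Dd * dt / zeta))
```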
3.3. Dust growth and fragmentation
During a global timestep Δt_global, coagulation and fragmentation are solved per box, using the representative particle approach of Zsom & Dullemond (2008) and Zsom et al. (2011). In short, the method comes down to calculating all collision rates between pairs of particles present inside that particular box, and using random numbers to determine which particles collide and when. In this way, we calculate forward in time from collision to collision, until a period Δt_global has passed and we allow grains to move between adjacent boxes again.
The collision rate of super-particle i with a particle represented by super-particle j is

C_ij = n_j σ_ij v_rel, (20)

with n_j = N_j/V the number density of j particles, v_rel the relative velocity between those particles, and σ_ij = π(s_i + s_j)² the geometric collisional cross section. For numerical reasons, we modify the collision rate when m_j < f m_i, in which case we group j-particles together and collisions occur between a single i-particle and a total of f m_i/m_j identical j-particles (see Krijt & Ciesla 2016). The modified collision rates read

C̃_ij = (m_j/f m_i) C_ij, (21)

and we use f = 0.1. To solve coagulation in a single box, we first compute the collision rates for all super-particles inside that box, and sum over the individual contributions to find

C_i = Σ_j C_ij (22)

and

C_tot = Σ_i C_i. (23)

A random number R is drawn from a uniform distribution between 0 and 1, and the time until the next collision event is computed as

Δt_col = −ln(R)/C_tot. (24)

If Δt_col > Δt_global, no collision event takes place in this particular box during the global time-step, and we move on to the next box. Alternatively, when Δt_col ≤ Δt_global, we use two more random numbers to identify the collision partners i and j, and another random number to determine the collisional outcome given the corresponding fragmentation probability. In the case of fragmentation, we need one more random number to choose the resulting fragment size (see Eq. 13 of Krijt & Ciesla (2016) for the fragment size distribution). This process is repeated until the sum of Δt_col exceeds Δt_global; in other words, multiple collisions can occur within the same box in a given time-step (see also Zsom et al. 2011). We note that the individual collision rates (and therefore also the sums of Eqs. 22 and 23) have to be re-evaluated after every single collision event because the properties of super-particle i have been altered. We employ the collisional outcome model of Birnstiel et al. (2011), in which the fragmentation probability P_frag is 1 at collision velocities above v_frag, 0 below 0.8v_frag, and has a linear transition between these two regimes. In this collision model bouncing is neglected, so that the probability of sticking equals P_stick = 1 − P_frag.
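The waiting-time sampling of Eqs. 22-24 and the selection of collision partners can be sketched as follows (our rendering of the Zsom & Dullemond (2008) procedure; the rate matrix C is assumed precomputed, including any grouping correction of Eq. 21):

```python
import numpy as np

rng = np.random.default_rng(7)

def sample_collision(C, dt_global):
    """Draw the next collision event in a box from the rate matrix C[i, j].
    Returns None if the waiting time exceeds the global time-step."""
    C_i = C.sum(axis=1)                       # per-particle totals (Eq. 22)
    C_tot = C_i.sum()                         # box total (Eq. 23)
    dt_col = -np.log(rng.random()) / C_tot    # waiting time (Eq. 24)
    if dt_col > dt_global:
        return None
    i = rng.choice(len(C_i), p=C_i / C_tot)   # partners picked ~ their rates
    j = rng.choice(C.shape[1], p=C[i] / C_i[i])
    return dt_col, i, j
```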
Sublimation and condensation
The amount of water vapor inside a given box will change as the result of sublimation/condensation, as dust grains return/remove water molecules to/from the surrounding gas.
While ρ_H2O > ρ^K_sat inside a box, the change in vapor density following from condensation can be written as

dρ_H2O/dt = −(m_H2O/V) Σ_i (M_i/m_i) (dn_ice/dt)_i, (25)

where the sum is over all i super-particles that are present inside that particular box. The fraction M_i/m_i takes into account that super-particle i represents N_i physical particles. Inserting Eq. 16 results in

dρ_H2O/dt = −A [ρ_H2O − P^K_sat m_H2O/(k_B T)], with A ≡ (v_th/4V) Σ_i (M_i/m_i) σ_chem,i, (26)

where A does not depend on ρ_H2O. Defining

ρ^K_sat ≡ P^K_sat m_H2O/(k_B T), (27)

leading to

ρ_H2O(t + Δt) = ρ^K_sat + [ρ_H2O(t) − ρ^K_sat] exp(−AΔt). (28)

Thus, to calculate the time-dependent condensation during a global timestep Δt_global we only need to evaluate A (which requires summing over the super-particles inside box b) once, before making use of Eq. 28. After computing how much water vapor is actually lost from the gas during Δt_global, we distribute these water molecules to the present super-particles in a cross-section-weighted way, i.e., proportional to s² (see Eq. 16). Sublimation is handled slightly differently. Because grains only have a finite number of molecules to give back to the gas, Eq. 16 is solved for each super-particle in the box independently, while the change in ice molecules is limited to ensure n_ice(t + Δt) ≥ 0. The new ρ_H2O(t + Δt) is computed at the end of the timestep by adding the contributions of the relevant super-particles.
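The analytic update of Eq. 28 is then trivial to apply per box (our sketch; A is the box-wide coefficient defined in Eq. 26):

```python
import numpy as np

def relax_vapor(rho_v, rho_sat, A, dt):
    """Exponential relaxation of the vapor density toward the saturated
    value over one global time-step (Eq. 28)."""
    return rho_sat + (rho_v - rho_sat) * np.exp(-A * dt)
```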
Water vapor diffusion
Lastly, we simulate vertical vapor diffusion during a time-step Δt_global by solving Eq. 4 using the method of finite differences. For the boundary conditions, we assume that there is no vapor flux through the top of the upper box and the bottom of the lower box (the disk midplane).
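A minimal explicit finite-difference step for Eq. 4 with the stated zero-flux boundaries (our sketch; an implicit scheme may be preferable in practice when the stability limit dt < dz²/(2 D_g) becomes restrictive):

```python
import numpy as np

def diffuse_vapor(rho_v, rho_g, Dg, dz, dt):
    """One explicit step of d(rho_v)/dt = d/dz [Dg * rho_g * d(rho_v/rho_g)/dz]
    on a uniform grid, with zero flux at midplane and column top."""
    c = rho_v / rho_g                          # concentration rho_H2O/rho_g
    flux = np.zeros(len(c) + 1)                # fluxes at cell interfaces
    rho_face = 0.5 * (rho_g[1:] + rho_g[:-1])  # gas density at interfaces
    flux[1:-1] = -Dg * rho_face * (c[1:] - c[:-1]) / dz
    # flux[0] = flux[-1] = 0 enforces the no-flux boundary conditions
    return rho_v - dt * (flux[1:] - flux[:-1]) / dz
```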
Final recipe
The recipe for evolving the column then becomes: 1. Determine the global time-step Δt_global from the dust dynamics (Eq. 19). 2. Solve coagulation and fragmentation inside every box (Sect. 3.3). 3. Update the vertical positions of all super-particles (Sect. 3.2). 4. Compute sublimation and condensation inside every box (Sect. 3.4). 5. Allow water vapor to diffuse between adjacent boxes.
Only steps 2 (if a collision occurs) and 3 (if a grain moves from one box to another) require that the collision rates are updated. Because we assume that the water-ice fraction of the grains does not influence their size or Stokes number, and that the vapor distribution does not impact the bulk gas properties, the collision rates stay the same during all other computational steps.
Optical depths
While not used directly in the simulations, an interesting output of the models is the vertical profile of the dust optical depth. In general, the opacity, κ_λ, of a dust grain depends on the size, shape, and composition of the grain, as well as the wavelength λ (e.g., Kataoka et al. 2014; Cuzzi et al. 2014). For the particle sizes considered in this work, however, an adequate approximation at UV and visible wavelengths is that the optical cross section equals the geometrical cross section, resulting in κ_λ ∼ 1/(s_i ρ_•). The optical depth of a group of grains represented by a single representative particle can then be written as

τ_i = (M_i/m_i) πs_i²/L², (29)

and the cumulative vertical UV optical depth at height z, measured from above (i.e., perpendicular to the midplane), becomes

τ_UV(z) = Σ_{i: z_i > z} τ_i. (30)

An important location is the height where τ_UV(z) = µ_0 (with µ_0 ∼ h_g/r ∼ 0.05 the flaring angle of the disk), which indicates the height below which stellar photons cannot efficiently penetrate (D'Alessio et al. 1998). For example, dust grains in the region where τ_UV(z) > µ_0 will receive considerably less of the incident stellar UV radiation, protecting them from photo-desorption (see Sect. 5.2). For the conditions at t = 0 (all grains are s_• in size and the dust-to-gas ratio is constant vertically), we can obtain

z_{τUV=µ0} = √2 h_g erf⁻¹(1 − 2µ_0 s_• ρ_•/Σ_d), (31)

with erf⁻¹ the inverse error function. As coagulation and settling of dust begin to take place, the location where τ_UV(z) = µ_0 is expected to move closer to the midplane.
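Eq. 31 in code, using scipy's inverse error function (our sketch, inheriting the κ_λ ∼ 1/(s_•ρ_•) approximation made above):

```python
import numpy as np
from scipy.special import erfinv

def z_tau_uv_initial(mu0, sigma_d, s0, rho0, hg):
    """Initial height where the UV optical depth from above reaches mu0
    (Eq. 31), for monomers of size s0 well mixed with the gas."""
    return np.sqrt(2.0) * hg * erfinv(1.0 - 2.0 * mu0 * s0 * rho0 / sigma_d)
```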
3.6.2. Choosing N_p and N_b
The computational cost involved with these calculations depends heavily on the choice of the number of super-particles N_p and the number N_b of grid cells. To accurately capture the vertical variation of the dust density after grain settling, the size of a single cell should be smaller than the scale-height of the largest grains, which can become problematic if settling is very effective (see Sect. 4 of Drażkowska et al. 2013). In our case, fragmentation and turbulent mixing will limit settling, and it suffices to use 20 cells per gas scale-height (i.e., L = 0.05h_g) for the α = 10⁻³ case and twice as many for the α = 5 × 10⁻⁴ runs.
The number of super-particles also cannot be chosen to be too small (see also Zsom & Dullemond 2008). In particular, it is important that we resolve the particle size distribution and the vertical profile of the dust. Moreover, to resolve z_{τUV=µ0}, we need to make sure a single super-particle is sufficiently optically thin, i.e., τ_i < µ_0. Making use of Eq. 29, we find N_p ≳ 10⁴ for s_• = 1 µm and Σ_d = 1 g/cm². Based on these considerations, we will use 10⁴ < N_p < 10⁵ in the remainder of this work.
Conservation of total dust and water mass
Since we are looking at a closed column of gas and dust, the total mass of refractory particles and water molecules (present as ice or vapor) should be conserved. In the method outlined in this section, this is true for the mass in refractory grains, but not necessarily for the water molecules. While there is no net flux of water vapor through the bottom and top boundaries of the column, the mass in water ice can vary because of how the representative particle approach works. Specifically, when a representative grain collides, only the properties of that representative particle are changed, while the properties of the grain representing the particle it collides with are left unaltered (Zsom & Dullemond 2008; Zsom et al. 2011). This causes fluctuations in the total water ice mass when collisions occur between grains of very dissimilar ice-to-rock ratios. For the simulations presented in this paper, the fractional variations, measured between start and end of the simulations, were typically ∼10⁻³, and up to a factor of 10 larger for the simulations at r = 3.5 AU, where collisions between grains with very different f_ice are most common. Thus, small changes in the H₂O abundance are seen, but they are small compared to the effects modeled here.

RESULTS

Here we use the methodology outlined in Sect. 3 to model the evolution of dust, ice, and water vapor in isolated columns at radii between 3 and 4.5 AU. At every radial location, the gas surface density and (vertically isothermal) temperature are given by Eqs. 1 and 2. The simulations together cover both sides of the radial snowline, which is located at r ≈ 3.2 AU for our disk model. Figures 2 and 3 show the evolution of the dust and water (vapor and ice) over 10 mixing times (t_D ∼ 10³ yr for this model, see Table 1) in columns at r = 3.5 AU and 4 AU for a turbulence characterized by α = 10⁻³. The spherical symbols show the locations, size, and ice/rock ratio of the dust super-particles (Eq. 6), and the background color shows the water vapor abundance relative to H₂, calculated as

n_H2O/n_H2 ≈ (m_g/m_H2O)(ρ_H2O/ρ_g). (32)

The green marker shows the location of the vertical snowline, defined as the location where the water vapor density drops below 99% of ρ^K_sat, and the yellow and orange markers show where the cumulative optical depth, as calculated from above, reaches µ_0 and 1, respectively. The vertical extent of the plot corresponds to 4h_g. The top-left panel of both figures closely resembles the initial conditions (see Sect. 3.1): we have well-mixed water vapor and ice-poor grains above the snowline, and ice-rich grains and a water abundance relative to H₂ that decreases with decreasing z below the snowline. The initial locations of the snowlines in both figures are in good agreement with Eq. 17 when we insert Σ_H2O/Σ_g = 5 × 10⁻³ (see also Table 1).
Focusing first on the time evolution of the solid component, we see that the dust grains grow to ∼cm sizes on timescales of several thousand years and, in general, grains below the snowline are ice-rich while grains in the upper parts of the disk are ice-poor. Looking at the vertical locations of the various representative particles, it is clear that gravitational settling is an important effect for grains larger than a millimeter or so, which is approximately the size for which St ∼ α (e.g., Krijt & Ciesla 2016). In the next sections, we discuss in detail the resulting dust and ice distributions (Sects. 4.1 and 4.2) and the effect grain growth and settling has on the vapor content of the disk atmosphere (Sect. 4.3) and the location of the vertical snowline (Sect. 4.4).

4.1. Dust distributions

Figure 4 shows the mass-weighted dust size distributions in different vertical regions of the column: the region above z_τ=1, the region above z_SL, and the entire column. The area under the curves is normalized to 1, and by plotting the quantity s · m · n(s), the peak of the distribution shows where most of the solid mass is located. As in Sect. 3.1, we neglect the contribution of the ice mantles to particle mass and size. The vertically integrated distributions (bottom panel) closely resemble the coagulation/fragmentation steady-state distributions one would expect (e.g., Birnstiel et al. 2011), with most mass concentrated in the peak close to the maximum size, which, for turbulence-induced fragmentation in the Epstein regime, equals s_frag = (2/3π) (Σ_g / ρ_• α) (v_frag/c_s)² (Birnstiel et al. 2012).
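The fragmentation-limited size can be evaluated directly from this expression; the sketch below does so for the two turbulence strengths used in this work. All parameter values (Σ_g, ρ_•, c_s, v_frag) are assumptions for illustration.

```python
import numpy as np

# Fragmentation-limited maximum size in the Epstein regime, using the standard
# Birnstiel et al. (2012) expression quoted above. Parameter values are
# illustrative assumptions.
def s_frag(sigma_g, rho_mat, alpha, v_frag, c_s):
    return (2.0 / (3.0 * np.pi)) * sigma_g / (rho_mat * alpha) * (v_frag / c_s) ** 2

sigma_g = 200.0  # gas surface density [g/cm^2]
rho_mat = 1.5    # material density [g/cm^3]
c_s = 7e4        # isothermal sound speed [cm/s]

for v_frag in (1e2, 1e3):            # 1 m/s (bare grains) vs 10 m/s (icy grains)
    for alpha in (1e-3, 5e-4):
        s = s_frag(sigma_g, rho_mat, alpha, v_frag, c_s)
        print(f"v_frag = {v_frag:6.0f} cm/s, alpha = {alpha:.0e} -> s_frag = {s:7.3f} cm")
# Note the factor (10/1)^2 = 100 between the two v_frag cases, and s_frag ∝ 1/alpha.
```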
In Fig. 4C, there is a sharp transition between 3 and 3.5 AU, as we go from ice-free grains inside the snowline to sticky, ice-rich aggregates (at least in the midplane) outside of ≈3.2 AU. The difference in the maximum size is two orders of magnitude, as expected from Eqs. 10 and 33. This sharp change in the vertically integrated size distribution around the radial snowline is expected to result in an observable jump in the spectral index, detectable with ALMA (Banzatti et al. 2015; Cieza et al. 2016).
Since we have spatial information on the dust particles, we can also study the variation of the size distribution with height. Figure 4B shows the particle distribution for the grains that are located above the vertical snowline z_SL. To highlight their shape, the curves have again been normalized, but the percentages indicate the total mass (relative to the total amount of dust in the column) in these populations. As we move from r = 3.5 to 4.5 AU, the snowline moves up (e.g., Table 1), resulting in a decrease in the mass fraction of dust above this surface (from 3.5% to 0.08%) and a decrease in the size of the largest particles that make it up there. Similarly, Fig. 4A shows the distribution above z_τUV=1. This would be the distribution seen in short-wavelength observations of a nearly face-on disk. The τ_UV = 1 surface lies relatively high for the r = 3 AU case because the efficient fragmentation of ice-free grains results in a high abundance of small grains with a high surface/mass ratio (Eq. 29). As a result, the grains that are visible at higher z are smaller and less abundant than for the simulations at larger radii. In summary, depending on which vertical region one is interested in, the shape of the dust distribution can deviate significantly from the one in the midplane (essentially the distribution of Fig. 4C).
4.2. Ice/rock ratios of solids
For the cases outside of r = 3 AU, virtually all grains in the midplane are covered in water ice. Figures 2 and 3 show that the highest ice-to-rock mass ratios are found in the smallest grains, with sizes between 1−10 µm. This is to be expected, because the timescale for doubling one's mass purely by capturing water molecules scales as s³/(dn_ice/dt) ∝ s (see Eq. 16) and is shorter for smaller particles. Figure 5 shows the mean value and standard deviation of the ice/rock ratio as a function of particle size at different locations in the disk. In all cases, f_ice ∼ 1 for the largest grains in the midplane, reflecting the fact that there is about as much water as dust in our column, i.e., Σ_H2O = Σ_d. The spread in the ice/rock ratio of the largest bodies is also small, which we attribute to the many collisions these particles undergo, which act to average out ice/rock ratios if particles stay in the same region. As we focus on smaller particles, the spread in f_ice becomes larger as these grains can cross the vertical snowline more readily.

[Fig. 5. Average ice-to-rock ratio as a function of dust particle size at the end of the simulations using α = 10^-3. The shaded areas indicate deviations from the average by 1 and 2σ, respectively, and the dotted horizontal line corresponds to ...]
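The s-scaling of the mass-doubling time can be made concrete with a short calculation. This sketch assumes a kinetic influx of vapor onto the geometric cross section with sticking probability 1; all numerical values are illustrative assumptions, not the paper's Eq. 16.

```python
import numpy as np

# Why small grains become ice-rich: the mass-doubling time under condensation.
# Condensation rate is taken proportional to surface area (kinetic influx,
# sticking probability 1). All values are illustrative assumptions.
def t_double(s, rho_mat=1.5, v_th=5e4, rho_vap=1e-14):
    m = (4.0 / 3.0) * np.pi * rho_mat * s**3   # grain mass
    dmdt = np.pi * s**2 * v_th * rho_vap       # condensation rate ~ s^2
    return m / dmdt                            # = (4/3) rho_mat s / (v_th rho_vap), i.e. ∝ s

for s in (1e-4, 1e-2, 1.0):                    # 1 micron, 100 micron, 1 cm
    print(f"s = {s:.0e} cm -> t_double = {t_double(s) / 3.15e7:.2e} yr")
```

Micron-sized grains thus double their mass orders of magnitude faster than cm-sized pebbles, which is why they carry the highest ice fractions.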
4.3. Vapor depletion in disk atmosphere
Over the course of the simulations shown in Figs. 2 and 3, the water abundance in the upper layers of the disk drops by a factor of ∼2.5 and ∼11 (i.e., 1/∆_atm in Table 1), respectively, reaching a steady-state value at the end of the 10 mixing times that were simulated. At 4.5 AU the depletion is as high as a factor of ∼30. The mechanism behind this depletion can be understood in terms of the 'vertical cold finger' effect (Meijerink et al. 2009), and is related to the transport and dynamics of water vapor and ice. Because of condensation there is always less water vapor (relative to H2) in the midplane, resulting in a net flux of water vapor from the atmosphere down to the midplane. At the same time, there is turbulent mixing of ice-covered grains from the midplane back to the atmosphere, where these grains release their volatiles and replenish the water vapor. When all the grains are small, these two fluxes (vapor diffusing down & ice being mixed back up) are balanced, and the water abundance above z_SL does not change. However, when grains start to grow to sizes where settling becomes important, the flux of water ice being mixed from the midplane to z > z_SL diminishes. Then, the diffusion of water vapor wins, and drains the atmosphere of its water vapor on a timescale comparable to the mixing timescale t_D. The cold finger effect is most efficient when there are not a lot of small grains around, in which case it is easier for water molecules to travel close to the midplane before freezing out (see Monga & Desch 2015 and Sect. 5.3). However, even if water molecules preferentially freeze out onto the smallest dust grains, water ice can still be transported to the largest, settled grains through sticking collisions. While the water vapor abundance can drop by more than an order of magnitude locally above the snowline, the change in the vertically integrated water vapor to gas ratio (i.e., 2∫₀^∞ ρ_H2O(z) dz / Σ_g) is less dramatic, because a significant portion of the water mass is in the region below the snowline where ρ_H2O = ρ_sat^K at all times. Table 1 shows that the changes between the initial and final integrated water-to-gas ratio Σ_H2O/Σ_g are of the order of 10−30%.
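The insensitivity of the column-integrated ratio can be illustrated by evaluating the integral for a simple two-zone vapor profile (saturated below z_SL, well-mixed above). The parameter values below are assumptions matching the earlier snowline sketch, not the simulation outputs.

```python
import numpy as np

# Column-integrated water-vapor-to-gas ratio, 2 * Int_0^inf rho_H2O dz / Sigma_g,
# for a vapor profile that is saturated (rho_sat) below z_SL and well-mixed
# (f * rho_g) above it. All parameter values are illustrative assumptions.
h_g, z_SL = 1.0, 1.35                       # arbitrary length units
rho_g_mid, rho_sat, f = 1e-11, 2e-14, 5e-3  # assumed densities and mixing ratio

z = np.linspace(0.0, 6.0 * h_g, 4001)
rho_g = rho_g_mid * np.exp(-z**2 / (2.0 * h_g**2))
rho_vap = np.where(z < z_SL, rho_sat, f * rho_g)

def trapz(y, x):
    """Simple trapezoidal rule (avoids NumPy version differences)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

ratio = trapz(rho_vap, z) / trapz(rho_g, z)
print(f"Sigma_H2O,vap / Sigma_g = {ratio:.2e}")
# Because rho_H2O is pinned at the saturated value below z_SL, even a large
# drop in f above the snowline changes this integrated ratio only modestly.
```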
4.4. Location of the vertical snowline
The removal of water vapor from the disk atmosphere discussed in Sect. 4.3 has consequences for the location of the vertical snowline. Basically, the snowline is located at the height where the partial pressure of water equals the saturated pressure, P_H2O = P_sat^K, where P_sat^K is constant in the isothermal column. When the water mixing ratio drops, the snowline will shift to a region where the total gas pressure is higher, i.e., closer to the midplane. In Figs. 2 and 3, the snowline moves down by 0.12 and 0.22 AU, respectively (Table 1), corresponding to about 1 scale-height at these locations. Fig. 6 shows the time evolution of the location of the vertical snowline between r = 3−4.5 AU. Initially, all simulations start with the snowline at z_SL^0 (Eq. 17), but after the growth and settling of solids, a few mixing times are enough to lower z_SL by one, and in some cases almost two, scale-heights.

[Table 1. Model parameters and initial conditions. Note. - The uncertainties, estimated by looking at the variation over the last 2 t_D of the simulations, are ∼3% for the maximum grain size, ∼1% for the dust-to-gas ratio and atmospheric and integrated water abundance, and ∼0.01 AU and ∼0.03 AU for the snowline and τ_UV = µ_0 surface locations, respectively. (a) The water vapor abundance in the well-mixed region above the snowline. (b) The atmospheric depletion factor ∆_atm is defined as the ratio of the final to initial water vapor abundance in the disk atmosphere. (c) The water vapor surface density relative to Σ_g (see text). The initial value is calculated by first assuming that all the water is vapor and well-mixed with the gas, and then allowing it to freeze out beneath the vertical snowline (see Sect. 3.1). (d) The calculated initial snowline and τ_UV = µ_0 surface locations are consistent with Eqs. 17 and 31.]
It is important to note that the vapor distribution above the snowline reaches a finite steady-state value, rather than continued depletion to n_H2O = 0. This can again be understood by thinking about the transport in terms of vapor diffusing down and ice-rich solids mixing up (specifically, through the location of z_SL). First, as more water freezes out onto grains in the isolated column, the ice-to-rock ratio of the solids increases. Second, as the vertical snowline moves down, it reaches regions of increasing dust-to-gas ratio. These two effects result in the ice flux (essentially the product of the flux of dust particles times their ice-to-rock ratios) increasing as z_SL approaches the midplane, resulting in the behavior observed in Fig. 6.
Lastly, Fig. 6 reveals that the snowline location reaches a minimum after a couple of mixing times, before increasing slightly to reach the steady-state value. This minimum corresponds to the period in the dust evolution just before small grains start to be replenished by collisions at v_rel > v_frag, when the abundance of small grains (the most efficient transporters of icy material to the atmosphere) is at an all-time low (this phase is visible in the fourth panel of Fig. 3).
4.5. Effect of varying α
Here we briefly discuss how the effects described in the previous sections change when the turbulence strength is reduced to α = 5 × 10^-4. Figure 7 shows a simulation with (nearly) identical initial conditions to Fig. 3, but assuming a weaker turbulence. Several things stand out: First, the timescales are longer (the mixing time scales as t_D ∝ 1/α). Second, settling is more effective at keeping grains confined to the midplane. Finally, dust particles grow to larger sizes when turbulence is weaker because the relative collision velocity is decreased. Comparing the maximum sizes that are listed in Table 1, we see that the lower turbulence results in maximum grain sizes that are a factor of ∼2 larger. This is in agreement with Eq. 33, which indicates s_frag ∝ α^-1.
Generally, the fact that particles can grow larger and settle more readily results in the vapor being depleted more effectively in the α = 5 × 10^-4 case. Comparing the resulting water vapor abundances in Table 1, we see indeed that the water abundance in the atmosphere is between 10−50% lower at the end of the simulations with a weaker turbulence. As a result, the snowline also moves further down, as illustrated in Fig. 6, where z_SL(t) is plotted for both values of α. However, while the steady-state atmospheric vapor abundance and snowline location are lower in the α = 5 × 10^-4 simulations, the time it takes the column to reach that steady state is longer because of the increased mixing timescale (listed in Table 1) as well as the fact that coagulation is slower for reduced relative collision velocities.
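The two relevant scalings can be checked quickly. The sketch below assumes a 1 M_sun star, so that the orbital period near 4 AU is about 8 yr; the prefactor of the mixing time (t_D ~ 1/(α Ω), from t = h_g²/D with D = α c_s h_g) is a standard estimate and an assumption on our part.

```python
import numpy as np

# How the key timescale and size scale with alpha. Omega at ~4 AU is assumed,
# using a 1 M_sun star so that P ~ 8 yr.
Omega = 2.0 * np.pi / (8.0 * 3.15e7)   # orbital frequency [1/s]
for alpha in (1e-3, 5e-4):
    t_mix = 1.0 / (alpha * Omega)      # t_D ~ h_g^2 / D with D = alpha c_s h_g
    print(f"alpha = {alpha:.0e}: t_D ~ {t_mix / 3.15e7:6.0f} yr (and s_frag ∝ 1/alpha)")
```

This reproduces the t_D ∼ 10^3 yr quoted earlier for α = 10^-3 and its doubling for the weaker-turbulence runs.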
Simulating columns with a much weaker turbulence is challenging because of the very small relative scale-height of the largest grains, which results in a very small timestep in Eq. 19. In addition, a weaker turbulence results in a broader size distribution and therefore needs more representative particles to accurately resolve it. Several elegant methods have been proposed to alleviate the computational costs in such cases (e.g., Charnoz & Taillifet 2012; Drażkowska et al. 2013), though it may not be straightforward to use them in the context of simultaneous dust and vapor evolution.
4.6. Impact on gas-phase C/O ratio

The gas-phase C/O ratio heavily influences the chemistry taking place inside protoplanetary disks, and the effect of the radial snowlines of H2O, CO, and CO2 on the C/O ratio in the gas and solids in the midplane has been investigated by Öberg et al. (2011) and recently, in the context of radial drift, by Piso et al. (2015). The vertically resolved models introduced in this work allow for the study of the vertical and temporal variations of the gas-phase C/O ratio across the vertical snowline. Generally, the removal of oxygen-bearing molecules (water in this case) from the disk atmosphere will increase the C/O ratio in these regions, as most carbon is in forms that are more volatile than water. By making some basic assumptions about the carbon abundances in the gas and refractory solids, we can estimate the magnitude of this effect.
With our models being located inside the radial snowline of CO, we assume the gas-phase carbon is dominated by carbon monoxide, and assume a vertically constant abundance of n_CO/n_H2 = 4 × 10^-4 (e.g., Öberg et al. 2011). For the initial conditions in our simulations, we can calculate the initial C/O ratio above the snowline to be (C/O)_atm = 0.4. Grain growth and settling alone result in (C/O)_atm = 0.9 at the end of the simulation of Fig. 3, and for the largest depletions found in Table 1 (α = 5 × 10^-4 at 4.5 AU) we find (C/O)_atm ≈ 1. If a considerable fraction of the available gas-phase carbon is present in the form of oxygen-poor molecules (e.g., CH4, atomic carbon), C/O > 1 could well be reached above the vertical snowline. For the solids in the midplane we expect the opposite effect, as the extra water that is added to the dust grains in the midplane will decrease the C/O ratio of these particles. As the ice-to-rock ratio varies with height, but also with particle size (e.g., Fig. 5), we expect the C/O ratio in the solids to vary significantly even at a single disk location.
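The arithmetic behind these numbers is easy to verify. The sketch below assumes that the only atmospheric oxygen carriers are CO and water vapor, with the initial water abundance set so that the starting ratio matches the quoted (C/O)_atm = 0.4.

```python
# Reproducing the (C/O)_atm estimate: all gas-phase carbon in CO
# (n_CO/n_H2 = 4e-4), and atmospheric oxygen carried by CO and water vapor.
# The initial water abundance is chosen so the initial ratio matches the
# quoted (C/O)_atm = 0.4; the depletion factors are taken from the text.
n_CO = 4e-4
n_H2O_0 = n_CO * (1.0 / 0.4 - 1.0)   # = 6e-4 from C/O = n_CO / (n_CO + n_H2O)

for depletion in (1.0, 11.0, 30.0):  # initial, Fig. 3 run, strongest case
    n_H2O = n_H2O_0 / depletion
    print(f"H2O depleted {depletion:4.0f}x -> (C/O)_atm = {n_CO / (n_CO + n_H2O):.2f}")
```

Running this gives 0.40, 0.88, and 0.95, consistent with the values of 0.4, 0.9, and ≈1 quoted above.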
In conclusion, our vertically resolved models indicate that both the gas-phase and solid-phase C/O ratios display significant variations at a single disk radius. At larger disk radii, outside the radial snowlines of important carbon carriers like CO, this picture becomes more complex. Observational studies are starting to pin down volatile abundances in these parts (Du et al. 2015; Kama et al. 2016; Bergin et al. 2016), finding volatiles to be depleted from the gas and possibly pointing toward a mechanism similar to the one described in Sect. 4.3. To study these regions through numerical simulations, however, the radial drift of solids and photo-desorption (see Sect. 5.2) will have to be included (Piso et al. 2015; Cleeves 2016).
5. DISCUSSION
The goal of this work has been to show that, even for the relatively simple case of an isolated, isothermal column and standard assumptions for the dust evolution, dynamical effects play an important role in setting the vertical abundances of dust, ice, and water vapor. In that light, we have neglected several complex and more subtle processes that can influence the observed behavior. Here, we briefly discuss some of those effects.

5.1. Dust particle model

Throughout this work we have assumed that the dust grains are well described as compact and (roughly) spherical aggregates. In reality, coagulation is expected to form aggregates with a complex internal structure, whose (average) porosity is the result of their growth history and potentially non-collisional compaction mechanisms (e.g., Okuzumi et al. 2012; Kataoka et al. 2013; Krijt et al. 2015). A high porosity is expected to influence collisional outcomes (Dominik & Tielens 1997; Wada et al. 2011), typically increasing the fragmentation velocity as porous grains are better at dissipating collisional energy. Porous aggregates are often described as fractal-like structures, with a mass-radius relation m ∝ s^x with 2 < x < 3 (e.g., Suyama et al. 2012). Thus, the surface-to-mass ratio does not decrease as rapidly with increasing mass, changing the collisional cross sections and aerodynamic properties of growing aggregates. Specifically, the physical particle sizes at which settling and fragmentation start to dominate will increase, while the maximum Stokes number that can be reached in fragmentation-limited growth is independent of the particle's internal density (Birnstiel et al. 2012).
Another assumption of our model is that the refractory core of the aggregate stays intact when the ice mantle is lost. This is not necessarily the case, in particular when the dust particles are better described as porous aggregates of individual ice-covered monomers, in which case crossing a snowline can result in the aggregate disintegrating (Saito & Sirono 2011). Such non-collisional disruption could lead to an increase in the abundance of small refractory fragments just above the snowline, possibly changing the optical depth locally.
In deriving Eq. 16 we have assumed that the condensation and sublimation of water molecules are processes that do not depend on the size or composition of the grain. In reality, the desorption energies for water molecules also depend on the molecular composition of the grain surface. Compared to a water ice surface, the desorption energy is lower for a carbon surface and higher for a silicate surface (Papoular 2005; Cuppen & Herbst 2007; Goumans et al. 2009). Thus, depending on the composition of the refractory cores in our model, the formation of the first monolayer of water ice can be somewhat faster (for silicate grains) or slower (for carbonaceous grains) than what we have assumed here. With most grains in our simulations covered in many monolayers, however, the impact of this effect is small.
Because we picture the aggregates as collections of microscopic monomers (Fig. 1), we assume there are no differences in surface curvature between aggregates of different sizes. While K ∼ 1/s_• is a good approximation for the area-averaged curvature, the small regions where monomers are in contact with each other will have a negative radius of curvature. Vapor will preferentially freeze out around these contacts, causing the inter-monomer bonds to harden and the aggregate to become more brittle. This process, known as sintering, can influence the aggregate's collisional behavior (Sirono 1999), in particular in regions where the sintering timescale is short compared to the collision timescale (Sirono 2011; Okuzumi et al. 2016).
Developing a more physical model for the evolution of the internal structure (i.e., porosity) of the aggregates in the presence of grain-grain collisions, sintering, and condensation/sublimation will be the subject of future work. While including these effects is expected to impact the shape of the dust and ice distributions, the loss of water vapor from the disk atmosphere as described in Sect. 4.3 is inevitable once grain growth produces aggregates that experience some degree of settling and are capable of accumulating an ice mantle. Furthermore, the formation of larger solids that are able to overcome the meter-size barrier via, e.g., the streaming instability or dust trapping will lock up ices; depending on the size of these bodies, they might retain ices even when environmental temperatures are above the sublimation temperature of water ice. Thus, volatile depletion is an inevitable consequence of grain growth and planetesimal formation.
5.2. Photo-desorption
In our calculations, we have ignored the process of photo-desorption, in which incident UV photons remove water molecules from grain surfaces. The resulting loss rate of water molecules (in g cm^-2 s^-1) can be written as F_PD = m_H2O F_0 Y e^(-τ_UV/µ_0) (e.g., D'Alessio et al. 1998; Ciesla 2014), with Y = 10^-3 the typical yield per UV photon (Öberg et al. 2009) and F_0 the stellar UV flux at this specific radial location. Since the snowline is usually well shielded from UV photons in our simulations (i.e., z_SL < z_τUV=µ0), it is safe to neglect photo-desorption. For example, near the snowline location at the end of the simulation shown in Fig. 3, and assuming an F_0 of 10^4 times the interstellar radiation field of G_0 = 10^8 cm^-2 s^-1 (Habing 1968), we obtain F_sub ≈ F_con ≈ 10^12 F_PD, justifying our choice of neglecting photo-desorption in Eq. 16. At larger radii however (r > 20 AU), where gas densities and temperatures are lower, photo-desorption can play an important role (van Dishoeck et al. 2014), and will have to be included. With the vertical profile of the optical depth readily available in our models, we will be able to include photo-desorption self-consistently in future calculations.
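The photo-desorption flux is straightforward to evaluate from the expression above. In the sketch below the quoted Y, G_0, and F_0 are used, while the shielding τ_UV/µ_0 at the snowline is an assumed value chosen to illustrate how quickly F_PD becomes negligible.

```python
import numpy as np

# Photo-desorption mass flux F_PD = m_H2O * F_0 * Y * exp(-tau_UV / mu_0), with
# the numbers quoted in the text; the shielding at the snowline is assumed.
m_H2O = 18.0 * 1.66e-24        # water molecule mass [g]
F_0 = 1e4 * 1e8                # 10^4 x interstellar field G_0 [photons cm^-2 s^-1]
Y = 1e-3                       # desorption yield per UV photon
tau_over_mu0 = 20.0            # assumed optical depth to the snowline

F_PD = m_H2O * F_0 * Y * np.exp(-tau_over_mu0)
print(f"unshielded: {m_H2O * F_0 * Y:.1e} g cm^-2 s^-1, shielded: {F_PD:.1e}")
# Even this modest shielding (e^-20 ~ 2e-9) suppresses F_PD enormously,
# consistent with neglecting it next to the condensation/sublimation fluxes.
```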
5.3. Diffusion and condensation beneath the snowline
Our method assumes the gas-phase water is well-mixed within a volume element L³. If the condensation timescale for the water molecules that are moving down and crossing the snowline is very short, however, this assumption can break down when these water molecules freeze out in a region just below z_SL, as the thickness | 2016-10-20T15:42:59.000Z | 2016-10-20T00:00:00.000 | {
"year": 2016,
"sha1": "a5dc7c2449afe1ef1cafab0ea72d018a7f616d97",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1610.06463",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "a5dc7c2449afe1ef1cafab0ea72d018a7f616d97",
"s2fieldsofstudy": [
"Physics",
"Geology"
],
"extfieldsofstudy": [
"Environmental Science",
"Physics"
]
} |
2573319 | pes2o/s2orc | v3-fos-license | Disruption of androgen regulation in the prostate by the environmental contaminant hexachlorobenzene.
Hexachlorobenzene (HCB) is a persistent environmental contaminant that has the potential to interfere with steroid hormone regulation. The prostate requires precise control by androgens to regulate its growth and function. To determine whether HCB impacts androgen action in the prostate, we used complementary in vitro and in vivo approaches. Our in vitro cell-culture-based assay used a firefly luciferase reporter gene driven by an androgen-responsive promoter. In the presence of dihydrotestosterone, low concentrations (0.5-5 nM) of HCB increased the androgen-responsive production of firefly luciferase, and high concentrations of HCB (>10 microM) suppressed this transcriptional activity. Results from a binding assay showed no evidence of affinity between HCB and the androgen receptor. We also tested HCB for in vivo effects using transgenic mice in which the transgene was a prostate-specific, androgen-responsive promoter upstream of a chloramphenicol acetyl transferase (CAT) reporter gene. In 4-week-old mice, the proportion of dilated prostate acini, a marker of sexual maturity, increased in the low HCB dose group and decreased in the high HCB dose mice. In the 8-week-old mice, there was a significant decrease in both CAT activity and prostate weight upon exposure to 20 mg/kg/day HCB. Therefore, in vitro and in vivo data suggest that HCB weakly agonizes androgen action; consequently, low levels of HCB enhanced androgen action but high levels of HCB interfered. Environmental contaminants have been implicated in the rising incidence of prostate cancer, and insight into the mechanisms of endocrine disruption will help to clarify their role.
Many chemicals have been released into the environment without prior evaluation of their endocrine activity at a molecular level, although the carcinogenic potential of these compounds is evaluated by routine mutagenicity testing. However, the concentration necessary to disrupt endocrine regulation may be lower than the carcinogenic level. Lifelong intake of even very low levels of these compounds may disturb the delicate hormone balance and compromise the reproductive fitness and health of many species. Likewise, the estrogenic potential of many environmental contaminants has been evaluated, but their impact on androgen-regulated parameters has traditionally been overlooked. The hormone sensitivity of the prostate and the rising incidence of prostate cancer identify it as a possible target organ for endocrine-disrupting chemicals.
Some environmental contaminants, such as hexachlorobenzene (HCB), have the potential to interfere with hormone regulation by mimicking or interfering with natural hormones. HCB is very persistent in the environment and impairs ovarian function in laboratory animals, decreases fertility, and decreases the weights of both seminal vesicles and ventral prostates (Elissalde and Clark 1979; Foster et al. 1996; Muller et al. 1978). In humans, tissue samples of young boys with cryptorchidism (undescended testes) demonstrated elevated HCB levels compared with control surgical patients (Hosie et al. 2000). These observations of developmental abnormalities are consistent with those seen following disruption of hormone regulation.
Prostate cancer has become the most commonly diagnosed malignancy and the second leading cause of cancer-related death in North American men. However, other than its correlation to industrialization and the high incidence of prostate cancer in farmers, very little is known concerning the environmental factors that may facilitate the development of this disease or augment its progression (Sharma-Wagner et al. 2000; Weston et al. 2000).
The steroid hormones responsible for development of the male reproductive tract are the androgens testosterone and dihydrotestosterone (DHT). In specific tissues, testosterone is reduced to the more potent, slower-dissociating androgen DHT (Zhou et al. 1995). DHT induces development of the prostate and male external genitalia by binding to the androgen receptor (AR). The AR is a member of the nuclear steroid receptor superfamily and is expressed in prostate cells and various tissues throughout the body. Upon androgen binding, it will translocate to the nucleus to form a homodimer, bind to androgen-responsive DNA elements, and initiate the transcription of androgen-regulated genes (Wong et al. 1995). The AR will bind a wide variety of ligand structures. In the case of antiandrogens, such as casodex, binding to the AR is thought to induce a receptor conformation that differs from that imposed by agonist binding, altering its ability to activate transcription (Eckert and Katzenellenbogen 1982; Hansen and Gorski 1986). Hormone antagonists may bind the receptor but prevent DNA binding and transcriptional activation, or they may promote receptor and DNA binding but nevertheless fail to initiate transcription (Truss and Beato 1993). Activation of the AR by various polyaromatic hydrocarbons has been shown to have detrimental effects on the development and function of the male reproductive system (Gray et al. 1999).
HCB is a known endocrine disruptor (Gocmen et al. 1989; Smith et al. 1987) that bioconcentrates in the fat of living organisms. HCB can persist in the environment for years, with an estimated half-life in soil of 23 years. It was used as a fungicide to protect onions, wheat seed, and other grains. Its production as an end point has been restricted in North America since 1971, but it is still formed as a major by-product in the manufacture of chemicals such as solvents, chlorine-containing compounds, and pesticides [Agency for Toxic Substances and Disease Registry (ATSDR) 1996]. U.S. Environmental Protection Agency studies have shown that detectable levels of HCB are found in the tissues of over 95% of the population (Robinson et al. 1990). Thousands of people were exposed to HCB in Kurdistan in eastern Turkey from 1955 to 1961. Retrospective analysis suggests an estimated intake of 2.6-4.1 mg/kg/day (Cripps et al. 1984; Gocmen et al. 1989).
In this article we demonstrate that HCB partially agonizes androgen action using a highly sensitive reporter gene assay. Cotransfection of the AR and an androgen-regulated luciferase reporter construct in a well-differentiated prostate cell line measured the ability of HCB to influence AR action. Results from a binding assay using thioredoxin-fused AR ligand binding domain demonstrated that HCB did not influence binding of androgen to its receptor. Our novel and highly sensitive in vivo approach used two strains of transgenic mice to test the impact of HCB on hormone action. The mE-RABP-CAT and LPB-CAT mice express the chloramphenicol acetyl transferase (CAT) reporter gene under the direction of a highly androgen-sensitive promoter in the epididymis and prostate, respectively (Lareyre et al. 1999, 2000; Yan et al. 1997). HCB decreased CAT expression and prostate weights in both strains of mice.
Materials and Methods
Plasmids. The full-length rat AR cDNA cloned into pRc-CMV (Rennie et al. 1993), a rat glucocorticoid receptor (GR) plasmid, and a human estrogen receptor (ER) plasmid (Smith et al. 1993) were used in the transfection-based assay. The luciferase reporter gene construct responsive to androgens and glucocorticoids (ARR3-luc) consisted of three rat probasin androgen-response elements upstream of the firefly luciferase vector (Snoek et al. 1996). The estrogen-responsive luciferase reporter gene construct (ERE-luc) plasmid consisted of a single vitellogenin estrogen response element cloned into a firefly luciferase vector (Portigal CL. Unpublished data). The pRL-tk vector obtained from Promega (Madison, WI, USA) contained a thymidine kinase promoter upstream of renilla luciferase cDNA. The pPSA-luc contains prostate-specific antigen (PSA) 5´-flanking DNA as described previously (Sato et al. 1997). All plasmid DNA was propagated in JM109 Escherichia coli and prepared using a QIAGEN Maxiprep Kit (QIAGEN, Mississauga, Ontario, Canada).
Cell culture transfections. HCB (Aldrich Chemical Company, Milwaukee, WI, USA) was prepared as a 10-mM stock in ether. PC3 prostate cancer cells were plated in Dulbecco's Modified Eagle's Medium (DMEM) (GibcoBRL, Burlington, Ontario, Canada) supplemented with 5% dextran-coated charcoal-stripped fetal bovine serum. Plasmids were transiently transfected into the cells using Lipofectin Reagent (GibcoBRL). Each plate received 1.5 µg of the AR, ER, or GR plasmid, 1 µg of the corresponding reporter construct (ARR3-luc or ERE-luc), and 0.01 µg of pRL-tk. HCB and the appropriate hormone [2.5 nM DHT, 1 nM dexamethasone (DEX), or 1 nM estradiol] were added after a 6-hr incubation. Cells were harvested 24 hr later and lysed using Passive Lysis Buffer (Promega). Luciferase assays were performed using the Dual-Luciferase Reporter Assay System (Promega) and the EG&G Berthold Microplate Luminometer LB 96V (Berthold Technologies, Bad Wildbad, Germany). The transfection experiments were also repeated in a similar manner using LNCaP prostate cancer cells cultured in RPMI 1640 defined medium (GibcoBRL). The cytotoxicity of HCB was assessed by staining samples with trypan blue and counting viable cells.
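The analysis step implied by this setup is the standard dual-luciferase normalization: the androgen-responsive firefly signal is divided by the constitutive renilla signal and expressed as fold-induction over vehicle. The sketch below illustrates that calculation with hypothetical triplicate readings, not data from this study.

```python
import numpy as np

# Dual-luciferase normalization: firefly / renilla, then fold-induction over
# the vehicle control. Readings are hypothetical triplicates.
firefly = {"vehicle": [1200, 1100, 1300], "DHT": [9500, 10200, 9900],
           "DHT + low HCB": [18500, 20100, 19000]}
renilla = {"vehicle": [5000, 4800, 5200], "DHT": [5100, 4900, 5000],
           "DHT + low HCB": [4950, 5050, 5100]}

baseline = np.mean(np.array(firefly["vehicle"]) / np.array(renilla["vehicle"]))
for group in firefly:
    ratio = np.array(firefly[group]) / np.array(renilla[group])
    print(f"{group:15s} fold-induction = {ratio.mean() / baseline:.1f}"
          f" +/- {ratio.std() / baseline:.1f}")
```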
Cell culture ligand displacement assay. Recombinant rat thioredoxin-fused AR ligand binding domain (Trx-ARLBD) was obtained from PanVera (Madison, WI, USA) and diluted in binding buffer (50 mM Tris pH 7.5, 10% glycerol, 0.8 M NaCl, 1 mg/mL BSA, and 2 mM dithiothreitol). A stock of assay mix was prepared by combining 20 nM 3H-R1881 (NEN Life Science Products, Inc., Boston, MA, USA) and 2% ethanol in binding buffer. A range of test compound concentrations was added to the diluted Trx-ARLBD and assay mix. After overnight incubation at 4°C, we added 33% hydroxylapatite (HAP) slurry [Fast Flow Hydroxylapatite (Calbiochem, San Diego, CA, USA) in 10 mM Tris pH 8.0 and 1 mM EDTA]. The HAP pellets were incubated on ice for 10 min, then washed three times with wash buffer (40 mM Tris pH 7.5, 100 mM KCl, 1 mM EDTA, and 1 mM EGTA). The HAP pellet was resuspended in ethanol and transferred to a scintillation vial. Scintillation counting was completed using a Beckman LS 6500 scintillation counter (Beckman, Mississauga, Ontario, Canada), and results were expressed as mean ± SEM.
In vivo experiments. All animal studies were conducted in accordance with the principles and procedures outlined by the Canadian Council on Animal Care and approved by institutional ethics committees. Animals were kept under standard conditions on a 12-hr light:12-hr dark cycle in the animal care facilities of the Jack Bell Research Centre (Vancouver, British Columbia, Canada). The mice were fed a standard laboratory mouse chow and water ad libitum. The doses used in the experiment were chosen after a review of relevant literature (Cripps et al. 1984; Geyer et al. 1986; Gocmen et al. 1989). We tried to approximate human exposure while factoring in differences in the size, bioconcentration factors, and life span of mice. Bioconcentration factors (wet weight basis) of chemicals are between 3 and 47 times higher in humans than in rats (Geyer et al. 1986).
The mice used in the first study were CD1 strain homozygous transgenic for an androgen-sensitive reporter gene. This consisted of a large section of the 5´-flanking sequence of the rat probasin gene linked to the bacterial CAT reporter gene (designated LPB-CAT). The transgene has been shown to be androgen sensitive and prostate specific (Yan et al. 1997). Five female mice were used in each of the five dose groups. All were fed 0.1 mL canola oil per day. The doses were as follows: control, canola oil alone; positive control, 0.1 mg/kg/day testosterone undecanoate in canola oil; low dose, 5 mg/kg/day HCB in canola oil; medium dose, 10 mg/kg/day HCB in canola oil; and high dose, 20 mg/kg/day HCB in canola oil. Females were dosed for a minimum of 1 month and then paired with males, one pair per cage. The exception was the testosterone-dosed dams, which could only be treated after conception; prepregnancy treatment with testosterone led to transient infertility. The dams were dosed throughout gestation (3 weeks) and lactation (3 weeks) until weaning. Male offspring were then treated for an additional 1 or 5 weeks until they reached 4 or 8 weeks of age, respectively.
In the second study we used a mouse strain designated mE-RABP-CAT. In this transgenic strain, the epididymis expresses a CAT reporter gene placed downstream of the mouse epididymal retinoic acid-binding protein (mE-RABP) promoter. The gene is specifically expressed in the mouse mid/distal caput epididymis under the control of androgen (Lareyre et al. 1999, 2000). Seven females were dosed with 0.1 mL/day corn oil, and seven females were dosed with 10 mg/kg/day HCB in corn oil. The mice were paired and dosed as described above. Treated male mice were sacrificed by cardiac exsanguination under methoxyfluorane anesthesia. Anogenital distance (AGD) and body weights were measured. The liver, left testis, left epididymis, and ventral prostate were removed, weighed, and frozen. The ventral prostate and liver were first subdivided; half was frozen and the other half was fixed in 10% neutral buffered formalin for histologic analysis. The right testis, right epididymis, and heart were also fixed for histologic examination. Serum testosterone and thyroxine (T4) levels were tested. Blood from each animal was allowed to clot at room temperature for at least 15 min, and serum was obtained by centrifugation at 14,000 × g for 10 min. Serum was then immediately frozen at -20°C until it was examined for serum testosterone (Tumor Marker Lab, British Columbia Cancer Agency, Vancouver, British Columbia, Canada) and thyroid hormone levels (T4 ELISA kit; Monobind, Costa Mesa, CA, USA). For the CAT assay, the organ of interest was homogenized and lysed. The protein level was measured using a Pierce BCA assay (Pierce Biotechnology, Rockford, IL, USA). One gram of protein in lysis buffer was heated to 65°C and added to a chloramphenicol, Tris, and 3H-acetyl coenzyme A (CoA) solution. ScintiLene scintillation fluid (Fisher Scientific, Nepean, Ontario, Canada) was layered on top, and then scintillation counts were taken over time. The slope of disintegrations per minute over time corresponds to CAT activity.
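Extracting CAT activity from such a time course is a simple linear fit. The sketch below uses hypothetical counts to show the slope calculation described above.

```python
import numpy as np

# CAT activity taken as the slope of disintegrations per minute (DPM) versus
# time, estimated by least squares. The counts below are hypothetical.
time_min = np.array([0, 10, 20, 30, 40, 50, 60], dtype=float)
dpm = np.array([150, 910, 1680, 2450, 3190, 3980, 4720], dtype=float)

slope, intercept = np.polyfit(time_min, dpm, 1)
print(f"CAT activity ~ {slope:.1f} DPM/min for the amount of protein loaded")
```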
Histopathologic analysis. Slides containing sections of prostate, liver, heart, or epididymis were stained with hematoxylin-eosin. The same pathologist examined each section in a randomized manner using a light microscope.
Statistical analysis. All data sets were tested for uniform distribution. Statistical significance (α = 0.05) among the various parameters assessed was established by a Student's t-test when a single treatment was compared with the control, or by analysis of variance when the comparison was between several groups. Upon demonstration of statistical significance, Dunnett's multiple comparison test indicated which groups were significantly different from the control group. Statistical analyses were performed using JMP statistical software (SAS Institute, Cary, NC, USA).
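The same workflow (one-way ANOVA followed by Dunnett's comparisons against the control) can be reproduced in Python; the sketch below uses simulated organ weights, not data from this study, and requires SciPy 1.11 or later for stats.dunnett.

```python
import numpy as np
from scipy import stats

# One-way ANOVA across groups, then Dunnett's test against the control.
# Requires SciPy >= 1.11 for stats.dunnett; weights below are simulated.
rng = np.random.default_rng(0)
control = rng.normal(20.0, 2.0, size=11)
testosterone = rng.normal(25.0, 2.0, size=9)
high_hcb = rng.normal(16.0, 2.0, size=5)

f_stat, p_anova = stats.f_oneway(control, testosterone, high_hcb)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4g}")
if p_anova < 0.05:
    res = stats.dunnett(testosterone, high_hcb, control=control)
    print("Dunnett p-values vs control:", np.round(res.pvalue, 4))
```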
Results

Hexachlorobenzene modulates androgen activity in vitro.
Based on the observation that HCB is highly persistent and has the potential to interfere with hormone regulation, we wanted to test if HCB would mimic or inhibit steroid hormones important in the prostate using a tissue cell-culture-based assay. Our approach used DNA constructs containing hormone-responsive binding sites connected to a firefly luciferase reporter gene. The amount of luciferase produced was proportional to the degree of transcriptional activity induced by the ligand-bound receptor. The pRL-tk was cotransfected as a transfection control and was constitutively expressed.
Increasing concentrations of DHT, a positive control, induced rising firefly luciferase production in PC3 cells transfected with the ARR3-luc reporter plasmid and the full-length rat AR (Figure 1A). After this standardization of our assay, we assessed the impact of HCB. In the presence of half-maximal DHT, low levels of HCB enhanced the androgen-responsive transcription of the luciferase reporter gene up to 2-fold higher than DHT alone. High levels of HCB suppressed this androgen-mediated activity (Figure 1B). This HCB-induced modification of transcriptional activity was dependent upon the presence of the AR (data not shown), and it appeared to be specific to the AR. In the presence of GR or ER and their respective hormones, HCB did not interfere with hormone-inducible transcription (Figure 1B). These experiments were repeated in two different prostate cell lines, PC3 and LNCaP, with different promoters (ERE-luc, ARR3-luc, and PSA-luc). The same trend was seen in both cell lines regardless of whether PSA-luc or ARR3-luc was used. In LNCaP cells, HCB will act to agonize androgen action in the presence of the endogenous mutated AR, but this impact is insignificant when compared with the response elicited by the transfected receptors. Each set of test conditions was reproduced in triplicate, and each experiment was repeated a minimum of three times. Based on our observations that HCB impacts in vitro AR action specifically, the next step was to determine if HCB binds directly to the AR.
Hexachlorobenzene was not an AR ligand in vitro. Tritiated R1881, a synthetic androgen, was added to the AR ligand binding domain. Nonradioactive ligand (DHT, DEX, or HCB) was added and displaced the tritiated R1881 if its affinity was greater. The pellets were washed and assessed for levels of radioactivity. If binding to the AR occurred, there was a decrease in scintillation counts. Results from this assay indicated that the DHT control was a ligand of the AR, but HCB and DEX were not (Figure 2).
Hexachlorobenzene alters androgen-responsive parameters in vivo. Male mice were treated throughout gestation and lactation via maternal exposure. Three weeks after the date of birth, the male offspring were weaned and fed individually for an additional 1 or 5 weeks, corresponding to a sacrifice age of 4 or 8 weeks, respectively. Two different sacrifice points were used because some effects, such as delayed or precocious puberty, are seen at the immature/prepubescent time point of 4 weeks of age. Other effects can only be observed when the animal is mature, such as abnormal development, or upon sufficient accumulation of the chemical to produce detectable effects.
High-dose HCB decreased CAT activity in adult LPB-CAT male mice. CAT reporter activity in the LPB-CAT mice corresponded to the degree of androgen activity in the prostate (Figure 3). In the 4-week-old mice, the androgen-responsive CAT activity of the testosterone-dosed mice was significantly higher (p < 0.01). In the 8-week-old mice, the CAT activity of mice treated with the medium and high HCB doses was significantly (p < 0.05) lower. We used a second strain of mice to confirm that the observations were not unique to the LPB-CAT mice. mE-RABP-CAT mice also expressed an androgen-responsive CAT reporter. However, several differences exist: the promoter was based on a fragment from the epididymal retinoic acid-binding protein gene rather than the probasin gene, the transgene was expressed specifically in the epididymis rather than the prostate, and these mice have a C57BL/6 rather than a CD1 background. No significant change in CAT activity was observed between the 10 mg/kg/day HCB-treated and control mice at 4 weeks of age, but there was a significant (p = 0.05) decrease in the androgen-responsive, epididymis-specific CAT activity of 8-week-old mice dosed with 10 mg/kg/day HCB compared with controls (Figure 4). Thus, it was demonstrated that HCB could impact androgen-regulated transcriptional activity in both strains of mice.
The weight of the prostate is also a measure of androgen activity. The prostate is responsive to androgen levels, as treatment with antiandrogens or castration will decrease the prostate weight (Iguer-Ouada and Verstegen 1997; Schroder 1994). Administration of exogenous testosterone to immature males accelerates prostatic growth so that maximal prostatic size is achieved precociously (Cunha et al. 1987). Prostate development is induced by DHT, and although we used an oral form of testosterone (testosterone undecanoate), its conversion to DHT has been reported (Horst and Erdmann 1980). The average prostate weight of 4-week-old LPB-CAT mice treated with either testosterone or the low dose of HCB was significantly higher than the average for the controls (Figure 5). In the 8-week-old LPB-CAT mice, treatment with the high dose of HCB significantly lowered the average prostate weight compared with the control average (p < 0.01). A decrease in prostate weight implied a delay in sexual maturity and was previously observed following exposure to vinclozolin (Gray et al. 1994), genistein (Delclos et al. 2001), or soy phytoestrogens (Weber et al. 2001). Prostatic acini convert from a nondilated to a dilated form around the time of puberty and could therefore be used as a marker of sexual maturity. Exposure to the low dose of HCB in 4-week-old mice caused a significant increase in the percentage of cases where dilated prostate acini were observed. However, dilated prostatic acini were not observed in any of the high HCB dose samples. Dilated acini were observed in every 8-week-old mouse prostate (Figure 6). These data suggest that HCB agonized androgen action at low doses, but antagonized it at high concentrations.

[Figure 2. Effects of nontritiated potential ligand added to thioredoxin-fused AR ligand binding domain (Trx-AR-LBD). Trx-AR-LBD aliquots were incubated in the presence of 20 nM 3H-R1881, and then DHT, HCB, or DEX was added. DHT displaced R1881 and bound the AR, whereas DEX and HCB did not. Vertical axis: percent 3H-R1881 bound to AR-LBD.]
Other androgen-sensitive organs include the epididymis and the testis; AGD is also affected by androgen. Treatment with testosterone, low-dose HCB, and medium-dose HCB in the 4-week-old (Table 1), and low-dose HCB in the 8-week-old (Table 2), LPB-CAT males induced a higher average epididymis weight. Thus, low-dose HCB treatment increased the weight of the epididymis, an androgen-sensitive organ, at both time points. Exposure to testosterone propionate (Orgebin-Crist et al. 1983) increases the weight of the epididymis, whereas ethinylestradiol (Kinomoto et al. 2000) or flutamide (Toyoda et al. 2000) exposure decreases it. The AGD is a developmental marker and is larger in males than females. In this study, the medium dose of HCB significantly increased the average AGD compared with the control group (Table 2). Exposure to antiandrogens has been shown to decrease the AGD (Gray et al. 2001; Hib and Ponzio 1995; McIntyre et al. 2001). In 4-week-old LPB-CAT mice (Table 1), the average testis weights of the mice treated with testosterone or the low dose of HCB were significantly higher than the control (p < 0.05). In the current study, the impact of HCB on the testes was not as pronounced as on other organs of the male reproductive tract. This may be because the mammalian testis accumulates lower levels of organochlorine chemicals compared with other tissues and affords the germ cells some protection from potentially toxic compounds (Cooke et al. 2001).
The serum testosterone levels of both the 4-week-old and 8-week-old LPB-CAT mice were measured by the British Columbia Cancer Agency Tumor Marker Lab; because of the temporal nature of hormone release, the values were highly variable. No statistically significant changes were observed, but the average testosterone level of the 4-week-old testosterone-dosed mice was about 3-4 times higher than the control average (data not shown). In a previous study, mice dosed with 250 mg HCB/kg for 21 days demonstrated an increase in the in vitro metabolism of [3H]testosterone, a decrease in serum concentrations of testosterone, and a decrease in the weights of seminal vesicles and ventral prostates (Elissalde and Clark 1979).
The T4 thyroid hormone levels of 4- and 8-week-old mice were tested, and no visible or statistically significant changes or trends were observed (data not shown). Thyroid hormone levels were tested because previous studies showed that HCB induced hypothyroidism, albeit at higher doses (Foster et al. 1993; van Raaij et al. 1994).
Acute toxicity was not observed. Chronic ingestion of HCB is associated with hepatomegaly, loss of body weight, wasting of skeletal muscles, leukocytosis, and an enlarged thyroid (Hayes and Laws 1991). We measured heart, liver, and body weight to assess signs of toxicity in the treated mice. In this study, we focused on endocrine-specific effects and attempted to keep the exposure levels below those that are overtly toxic. Loss of body weight was not observed in test mice upon exposure to HCB. The average body weight was significantly (p < 0.01) higher in the groups treated with the low and medium doses of HCB (Tables 1 and 2). The average liver weights significantly increased upon exposure to HCB, but lesions were not observed (Tables 1 and 2). The increase in liver weight may have been due to altered glycogen storage, as there was a dose-dependent increase in centrilobular hepatocellular vacuolization (data not shown). Multifocal hepatic necrosis and mild inflammation of periportal spaces were observed in a few of the samples (data not shown). These alterations are common and were not associated with a specific treatment group. Therefore, although the livers of the high-dose HCB group were enlarged and slightly altered glycogen storage was observed, no severe liver toxicity was linked with the doses of HCB used. Heart weight was increased in the highest HCB dose group (Tables 1 and 2). The hearts examined had a normal histological architecture, and no lesion was detected in the sections (data not shown).

[Figure 6. Percentage of cases in which the presence of dilated acini was observed in the prostates of LPB-CAT mice at 4 weeks of age (prepubescent) and 8 weeks of age (sexually mature). Abbreviations: C, control; H, high-dose HCB; L, low-dose HCB; M, medium-dose HCB; T, testosterone. Progression from nondilated to dilated acini was a marker for sexual maturity. *Significantly different from the control (p < 0.05).]
Discussion
The endocrine system acts as a communication network, and the signals are transmitted via hormones. The levels of individual hormones are tightly controlled for effective endocrine regulation and production of the appropriate biological response at the desired time. Endocrine disruptors are compounds that are able to mimic or block this natural biological process. Because of a certain degree of promiscuity of steroid hormone receptors, they are able to bind a variety of distinct molecules, albeit with different binding affinities. The endocrine and reproductive effects of environmental contaminants are believed to result from a) mimicking endogenous hormones such as estrogens and androgens, b) antagonizing normal endogenous hormones, c) altering the pattern of synthesis and metabolism of natural hormones, d) modifying hormone receptor levels, or e) interfering with steroid-binding proteins or steroid transport. Many of the endocrine-disruptor studies have traditionally focused on the impact of environmental estrogens. Novel data have emerged that demonstrate the impact of endocrine disruptors on the androgen axis. Vinclozolin and p,p´-DDE [1,1-dichloro-2,2-bis(p-chlorophenyl)ethylene] act as AR ligands and antagonize transcriptional activity in vitro. Exposure to p,p´-DDE and vinclozolin causes malformations of the male reproductive tract in neonates, including a decrease in prostate weights, suppression of androgen-regulated genes, reduced AGD, retained nipples, and reduced ventral prostate weights (Gray et al. 1994; Kelce et al. 1997). Although it is not an AR ligand, similar results are seen upon exposure to TCDD (2,3,7,8-tetrachlorodibenzo-p-dioxin) (Mably et al. 1992). Compounds such as HCB and TCDD may influence androgen action via the aryl hydrocarbon receptor (AhR). The AhR is expressed in prostate epithelial cells, and stimulation of the AhR by various polyaromatic hydrocarbons has detrimental effects on the male reproductive system. The AR is also expressed in prostate cells, and crosstalk between the AR and AhR has been shown in vitro. TCDD is the most potent inducer of AhR activity. Testosterone inhibited TCDD-induced CYP1A1 activity in a dose-dependent manner. Reciprocally, testosterone-dependent transcriptional activity and testosterone-regulated PSA expression in the prostate cell line LNCaP were inhibited by TCDD (Jana et al. 1999). Ligand-independent activation of the AR via the estrogen-induced growth factor pathway has also been demonstrated in prostate organ culture (Gupta 2000). In some circumstances, crosstalk may be a normal part of signaling in the prostate, but inappropriate activation or suppression will disrupt the function of the cell and organ development.
HCB was introduced in 1945 as a fungicide that interferes with amine and thiol metabolism to slow the growth rate and sporulation of fungi. It is still formed as a major by-product and contaminant in the manufacture of chemicals such as solvents, chlorine-containing compounds, and pesticides (ATSDR 1996). Total daily intake from air, food, and soil by adults in the general North American population is about 3 ng/kg/day (Newhook and Meek 1994). Average intake of HCB by breast-feeding infants in the general population is much higher (Uhnak and Szokolay 1983), and lactational transfer of HCB has been demonstrated (Nakashima et al. 1997). Childhood exposure to HCB may influence reproduction or physical and mental development. A previous study demonstrated elevated levels of HCB in young males with undescended testes (Hosie et al. 2000). HCB has a potent capacity for accumulation from the environment into the fatty tissues of living organisms (Ernst 1986; Hayes and Laws 1991). Many of the environmental contaminants are lipophilic, and androgens have been shown to stimulate the accumulation of lipids in LNCaP prostate cancer cells (Swinnen et al. 1996). Environmental contaminants have been implicated in the rising incidence of prostate cancer, and insight into the mechanisms of endocrine disruption will help to clarify their role.
In this study, mice were orally dosed with HCB at levels similar to those of highly exposed individuals, but elevated compared with average human exposure levels. At low doses, HCB acted as an androgen agonist in vitro and in vivo: it increased the production of the androgen-responsive reporter firefly luciferase in PC3 cells and the presence of dilated acini in the prostates of sexually immature LPB-CAT mice. However, higher doses of HCB apparently antagonized androgen action, as the production of firefly luciferase was reduced. HCB did not bind directly to the AR and may influence androgen action by altering steroid transport or via receptor crosstalk. In LPB-CAT mice, the high dose of HCB downregulated CAT activity in 8-week-old mice, decreased the presence of dilated prostatic acini, and lowered the average prostate weight. In mE-RABP-CAT mice, HCB decreased the CAT activity of 8-week-old mice. These data provide conclusive evidence of HCB acting as an endocrine disruptor in mice and demonstrate its potential to impact the human androgen axis. HCB can interfere with the transcriptional activity of androgen-regulated genes and their downstream effects, thus amplifying its potential endocrine-disrupting impact. The fact that HCB may affect the androgen-signaling pathway in a different manner depending on the dose should reinforce the concept that environmental xenobiotics, though present at low doses, may pose a threat to human health.

[Table. Organ weights by treatment group: Control (n = 11), Testosterone (n = 9), Low HCB (n = 6), Medium HCB (n = 8), High HCB (n = 5).]
"year": 2003,
"sha1": "5126a28c8a0bb1f4968a8440048bfe9c41504dae",
"oa_license": "pd",
"oa_url": "https://doi.org/10.1289/ehp.5919",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5126a28c8a0bb1f4968a8440048bfe9c41504dae",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
222260162 | pes2o/s2orc | v3-fos-license | Asymptomatic trocar site hernias: An underestimated complication of laparoscopy
Objective: To estimate the exact incidence of trocar site hernia (TSH) through sonographic examination and to evaluate the predisposing risk factors of TSH. Materials and Methods: Three hundred patients who underwent laparoscopic surgery for benign gynecologic indications were included in this study and called back for a follow-up visit. All patients underwent an ultrasound evaluation for the detection of TSH. Risk factors for TSH formation were investigated. Results: Twenty-five (8.3%) TSHs were diagnosed among 300 postoperative laparoscopies. The highest rate of TSH development among the surgeries was found in tubal ligation cases, with 19%. Parity ≥3 [odds ratio (OR): 3.13; 95% confidence interval (CI): 1.21-8.09; p=0.018] and not closing the fascia (OR: 6.74; 95% CI: 2.72-16.70; p<0.001) were statistically significant risk factors for the development of TSH in multivariate analysis. Conclusion: The prevalence of TSH is higher than previously reported, and ultrasonographic examination is adequate for detecting subclinical forms of this complication.
Introduction
Trocar site hernia (TSH) is defined as an incisional hernia (IH) occurring after laparoscopic procedures at the trocar incision site (1). TSH is a rare surgical complication, with an estimated incidence ranging between 0.6% and 5.2% (2-5). However, because available data are based only on symptomatic patients and clinically diagnosed cases, the actual incidence is probably underestimated. The overall incidence might be higher if asymptomatic patients were routinely screened using a more effective diagnostic tool such as ultrasonography (USG). Most TSHs are asymptomatic, but they can occasionally lead to severe morbidity and mortality, such as bowel strangulation and necrosis (5). Thus, early detection of subclinical TSH before the occurrence of severe complications is essential. Many factors have been implicated as predisposing to TSH formation. Advanced age, obesity, diabetes mellitus, and wound infection have been identified as patient-related risk factors (1,5-7). Moreover, trocar size, design, and insertion technique; port location; duration of surgery; fascial incision enlargement; and fascial closure have been suggested as technical factors for the occurrence of TSH (1,8-10). The literature on the prevalence and risk factors of TSH, especially in the female population, remains limited and contradictory. The primary objective of this study was to estimate the exact incidence of TSH by sonographic examination and to evaluate the predisposing risk factors of TSH.
Materials and Methods
This study was conducted in University of Health Sciences Turkey, Bursa Yüksek İhtisas Training and Research Hospital, Clinic of Obstetrics and Gynecology. After receiving approval of the ethics committee, medical records of 491 patients who underwent laparoscopic surgery between May 2016 and July 2018 in our clinic were retrospectively scanned. All patients undergoing a laparoscopic surgical procedure for benign gynecologic indications during the study period with a minimum follow-up of 12 months were included in the study. The exclusion criteria of the study were determined as follows: age <18 years old, previous history of midline laparotomy, pregnancy after the procedure, a subsequent abdominal surgical procedure except for IH repairs, use of the open technique or Palmer's point for entering the abdomen, conversion to laparotomy, malignancy, and refusal to give informed consent. Also, we excluded any patients who underwent laparoscopic surgery before or after the index procedure. Finally, 300 patients were included in the study and called back for a follow-up visit. The average time between the surgery and the patient's sonographic evaluation with USG was 16 (range, 12-29) months. Written informed consent was obtained from all participants. All patients underwent a clinical examination. Furthermore, all participants underwent an ultrasonographic evaluation for the detection of TSH. The ultrasonographic diagnosis of TSH was defined as any discontinuation of the fascial layer (Figure 1). Clinical and sonographic examinations were performed by a single physician and a single radiologist. A 9L-D linear broad-spectrum transducer with the GE Health Care Logic S7 expert model USG was used. Laparoscopic surgeries were performed by different surgeons working in our institution. Three or four port sites were generally used for the procedures, one of which was a 10/12-mm cutting trocar for the camera, and two or three were lateral 5-mm cutting trocars. Initially, a 10 or 12 mm trocar was inserted with the closed technique at the umbilical location. Lateral trocars were placed approximately 6-8 cm from the midline and 4-5 cm above the symphysis. At our institution, the decisions for the type of abdominal entry (direct trocar or Veress needle), closure of the umbilical port site (only skin, or fascia and skin), and the placement of a drain are based on the clinical judgment of the operating surgeon. If a drain was inserted, the aim was to remove it within 1-2 days in uncomplicated cases. All patients received antibiotic prophylaxis (1 to 2 g cefazolin was administered intravenously 15 to 60 minutes prior to skin incision). Skin sutures were removed at day 7-10. For each patient enrolled in the study, the hospital and follow-up records were reviewed for the following data: pre-operative age in years, weight in kilograms and height in meters, body mass index (BMI) (calculated as weight in kilograms divided by height in meters squared), gravida, parity, type II diabetes [defined as blood glycemia >126 mg/dL and/or glycated hemoglobin (HbA1c) >7% and/or use of oral hypoglycemic agents and/or insulin], presence of chronic constipation, smoking status (defined as positive when actively smoking), type of abdominal access, fascia enlargement to remove material, closure of umbilical 10/12 mm trocar fascia, surgical duration, drain placement, and the presence of wound infection (defined as a positive culture and/or presence of infection according to the physician's opinion).
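For readers who want to derive the variables above from a chart-review extract, a minimal sketch follows (Python; the data frame and column names are hypothetical, not from the study), encoding the BMI formula and the stated operational definition of type II diabetes.

```python
import pandas as pd

# Hypothetical chart-review extract; column names are illustrative only.
records = pd.DataFrame({
    "weight_kg": [72.0, 85.5],
    "height_m": [1.62, 1.70],
    "fasting_glucose_mg_dl": [98, 131],
    "hba1c_pct": [5.4, 7.8],
    "on_glucose_lowering_drugs": [False, False],
})

# BMI = weight (kg) / height (m) squared, as defined in the Methods.
records["bmi"] = records["weight_kg"] / records["height_m"] ** 2

# Type II diabetes per the stated definition:
# glycemia > 126 mg/dL and/or HbA1c > 7% and/or hypoglycemic agents or insulin.
records["type2_diabetes"] = (
    (records["fasting_glucose_mg_dl"] > 126)
    | (records["hba1c_pct"] > 7.0)
    | records["on_glucose_lowering_drugs"]
)

print(records[["bmi", "type2_diabetes"]])
```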
Statistical Analysis
Data were analyzed using the IBM SPSS V23 (SPSS, Inc., Chicago, IL). The Shapiro-Wilk test was used to examine the compatibility of data to normal distribution. The independent samples t-test was used to compare the parameters according to the presence of TSH. The chi-square test was used to evaluate the correlation between categorical data and TSH. Independent risk factors for the development of TSH were analyzed using univariate and multivariate logistic regression analysis. A p-value of <0.05 was considered as statistically significant.
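A minimal sketch of this analysis plan, assuming a hypothetical per-patient data frame (variable names are illustrative and the values below are random placeholders, not study data): a Shapiro-Wilk normality check, a group comparison, a chi-square test, and crude versus multivariate logistic regression whose exponentiated coefficients give odds ratios.

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
# Hypothetical stand-in for the study data set (n = 300).
df = pd.DataFrame({
    "tsh": rng.integers(0, 2, 300),          # 1 = trocar site hernia
    "age": rng.normal(42, 10, 300),
    "parity_ge3": rng.integers(0, 2, 300),
    "fascia_closed": rng.integers(0, 2, 300),
})

# Normality check, then group comparison of a continuous variable by TSH status.
print(stats.shapiro(df["age"]))
print(stats.ttest_ind(df.loc[df.tsh == 1, "age"], df.loc[df.tsh == 0, "age"]))

# Chi-square test for a categorical factor vs TSH.
print(stats.chi2_contingency(pd.crosstab(df["parity_ge3"], df["tsh"])))

# Crude (univariate) and adjusted (multivariate) logistic regression;
# exponentiated coefficients correspond to the reported odds ratios.
crude = smf.logit("tsh ~ fascia_closed", data=df).fit(disp=0)
adjusted = smf.logit("tsh ~ fascia_closed + parity_ge3 + age", data=df).fit(disp=0)
print(np.exp(crude.params))
print(np.exp(adjusted.conf_int()))
```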
Results
There were 25 (8.3%) TSHs among 300 patients who underwent laparoscopic surgery. Of the 25 patients with TSH, 23 had herniation at the umbilicus and two had herniation at extraumbilical sites (Figure 1 shows an ultrasound image of omentum herniating through the defect at a 10-mm trocar site). The TSHs were asymptomatic in 23 patients (92% of all TSHs). Five were detected by physical examination and confirmed by ultrasound, whereas 18 patients had normal abdominal examination findings and were diagnosed as having TSH with USG. Two patients had already undergone hernia repair surgery for symptomatic TSH, in the 8th and 10th postoperative months, respectively. The first of these patients was a 31-year-old who underwent bilateral tubal ligation and developed a wound infection in the postoperative period. She was admitted to the hospital eight months after surgery due to a TSH containing omentum tissue in the umbilical region. The second case was a 38-year-old patient who underwent cystectomy. In this case, the cyst material was removed from the left lower quadrant, but the fascia was not enlarged, and a drain was inserted at the same site. Ten months after the surgery, she was admitted to hospital with gastrointestinal symptoms and was immediately taken to the operating room after intestinal herniation through the trocar site in the left lower quadrant was detected. The most frequent surgeries performed during the study period were cystectomy (33%), hysterectomy (25.3%), and tubal ligation (19.3%). The highest rate of TSH development among the procedures was found in tubal ligation cases (19%). The rate of patients with a parity number of three or more was highest in the group undergoing tubal ligation (75.9%). Patient data and surgery types are presented in Table 1.
The gravida and parity of patients who developed hernia were significantly higher (3.4±1.58 vs 2.53±1.64; p=0.011 and 3±1.38 vs 2.15±1.53; p=0.008, respectively). The fascia closure rate was significantly lower in patients with TSH compared with those without TSH (40% vs 82.2%; p<0.001). Characteristics of patients with and without TSH are presented in Table 2. There were no statistically significant differences between patients with and without TSH regarding age (42.12±10.15 years; Table 2). Likewise, in terms of hernia development, no statistically significant difference was found for the following parameters: presence of previous surgery, smoking status, trocar entry technique, and fascial incision enlargement (p>0.05). Parity ≥3 and the absence of fascia closure were statistically significant risk factors for the development of TSH in both univariate and multivariate analyses (Table 3). Wound infection was found to have a significant effect on TSH formation in multivariate analysis. There was no significant association between not placing a drain and TSH formation in the multivariate analysis.
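As a rough arithmetic check (not part of the original analysis), the crude odds ratio for leaving the fascia open can be reconstructed from the reported group sizes and percentages; the counts below are therefore approximate, and the result lands close to the multivariate OR of 6.74 (2.72-16.70) reported for not closing the fascia.

```python
import math

# Counts reconstructed from the reported rates (25 TSH, 275 no TSH;
# fascia closed in 40% of TSH cases vs 82.2% of non-TSH cases) -- approximate.
tsh_open, tsh_closed = 15, 10          # 60% / 40% of 25
no_tsh_open, no_tsh_closed = 49, 226   # ~17.8% / ~82.2% of 275

or_crude = (tsh_open * no_tsh_closed) / (tsh_closed * no_tsh_open)

# 95% CI via the usual log-odds-ratio standard error.
se = math.sqrt(1/tsh_open + 1/tsh_closed + 1/no_tsh_open + 1/no_tsh_closed)
lo, hi = (math.exp(math.log(or_crude) + s * 1.96 * se) for s in (-1, 1))

print(f"crude OR for open fascia: {or_crude:.2f} (95% CI {lo:.2f}-{hi:.2f})")
# ~6.9 (2.9-16.3), in the same range as the adjusted OR of 6.74 (2.72-16.70).
```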
Discussion
Data on the frequency of TSHs show a wide distribution in the relevant literature. The lack of consensus regarding the definition of TSH may partly explain this variation. TSH is a type of IH that occurs after laparoscopic surgery at the trocar incision site. IH was defined as any abdominal wall gap or defect in the proximity of the postoperative scar by most of the studies (11)(12)(13) . However, some of these studies included a protrusion of abdominal contents in the definition (14) . In this study, we defined hernia as any defect in the fascia layers at the trocar entry site because we aimed to detect all asymptomatic cases. Tonouchi et al. (1) first classified three types of TSHs according to the cause and the onset time. The early-onset type of hernia occurs by the dehiscence of the anterior and posterior fascial planes and peritoneum in the early postoperative period. The late-onset type of hernia occurs by the dehiscence of the anterior and posterior fascial planes, with the peritoneum providing the hernia sac; these hernias appear several months after surgery. Small intestinal obstruction is not seen, and the hernia manifests as an asymptomatic swelling. The third, special type of hernia is due to the dehiscence of the whole abdominal wall immediately after surgery, with intestine and/or omentum protruding without a sac (1) . All our cases except the two symptomatic ones were thought to represent the late-onset type. In many studies, the frequency of TSH is reported as between 0.6% and 5.2% (2)(3)(4)(5) . Compared with these publications, the frequency of TSH of 8.3% in our study is markedly high. However, most of these studies focused on hernia cases that were symptomatic or required surgical repair. Also, in studies involving asymptomatic cases, hernia diagnosis was made primarily with physical examination findings. In studies where additional imaging modalities were used for diagnosis other than physical examination, higher hernia identification rates were reported (10,15,16) . In the study of Christie et al. (10) evaluating TSH development after robot-assisted urologic surgery, all patients underwent radiologic imaging (computed tomography), and the incidence of TSH was reported as 7.7%. In the literature, there are three studies that evaluated patients who underwent bariatric surgery with an imaging modality, and the incidence of TSH was approximately 24.5% (17)(18)(19) . In our study, the patients were invited to our clinic for re-evaluation after a minimum of 12 months and a maximum of 29 months after surgery. The incidence of TSH detection is significantly higher when the follow-up period is longer than 12 months and when an imaging modality is used (7) . Generally, there is worldwide variability in abdominal entry techniques as well as port site closures, and our institution is no different. Some surgeons prefer the Veress needle, whereas others perform the direct trocar technique. Likewise, there is no consensus among surgeons regarding the closure of trocar entry points. In many studies, it is emphasized that leaving the fascia open is the most important factor in the development of TSH, and closure of the fascia is recommended in trocar site incisions of 10 mm and over (1,5,20,21) . In the first systematic review of TSH, Tonouchi et al. (1) indicated that surgical technique-related factors rather than patient-related factors were of primary importance in the formation of TSH, and reported that large trocar diameter, open fascial defects, and stretching of port sites were closely related to TSH formation.
They suggested the closure of the fascial defects of umbilical or extraumbilical areas where trocar diameters of 10 mm and over were used, as well as the closure of the fascial defect in the case of active manipulation from the 5-mm port during lengthy procedures.
In a systematic review compiled by Helgstrand et al. (6) , 96% of TSHs were reported to occur at trocar locations with 10 mm and larger diameter trocars, and 82% were in the umbilicus; it was suggested to close the fascial defects of trocars with a diameter of 10 mm and over. Also, in many studies, it has been reported that 12-mm cutting trocars caused higher rates of hernia formation compared with 10-mm bladeless trocars (22,23) . In our study, 92% of the detected hernias occurred at the port entry points of 10-12 mm trocars. Moreover, in more than half (60%) of our patients who developed hernias, fascial closure was not performed. Our findings also support the literature suggesting the closure of the trocar sites over 10 mm. In the systematic review published by Karampinis et al. (7) in 2019, contrary to expectations, a higher incidence of TSH was found in studies that routinely performed fascial closure in patients who underwent laparoscopic bariatric procedures. However, this result did not reach statistical significance. In our study, we also observed that fascial closure had been performed in 40% of hernia cases. This result may be due to the fact that the surgeries were performed by different surgeons with different experience and skills; therefore, fascia closure may not have been performed effectively. The role of sex in the development of TSH is conflicting in the literature (5,6,8) . It is evident that conditions that may lead to laxity and fascial defects, especially in the anterior abdominal wall, such as pregnancy and labor, may be important risk factors for hernia development. The data on the effect of parity on hernia development in the literature are minimal. It is known that relaxation and damage occur in the abdominal wall and fascial structures due to childbirth (8) . This damage in the umbilical region may cause fascial defects in the following years. In a study involving 2,100 cases, 18% of patients had a fascial defect in the anterior abdominal wall during laparoscopy (24) . Considering factors such as age, BMI, and surgical time, tubal ligation cases were in a low-risk group for hernia development. However, the TSH development rate per surgery was found to be significantly higher in these patients in our study. This finding cannot be explained only by the low fascia closure rate. We believe that the high parity number in tubal ligation cases indicates that parity is an important risk factor for TSH development.
To the best of our knowledge, there are no studies examining the relationship between drain insertion and TSH development in the literature. In this study, we aimed to evaluate whether drain insertion decreased the development of TSH by reducing intraabdominal pressure. Although drain insertion resulted in fewer cases of TSH in our study, this finding did not reach statistical significance. On the other hand, drain use may contribute to the formation of hernia by causing infection. Evidence regarding the relationship between closed suction drains (CSD) and surgical site infection (SSI) in the obstetrics and gynecology literature is conflicting (25) . A few studies suggested an increased risk of SSI associated with drain placement, but usually associated with open drainage and not the use of CSD (25) . We also believe that short-term closed drainage does not play an important role in the development of infection. The data of both this study and current studies are insufficient to determine the effect of drain insertion on TSH development. We believe that prospective randomized studies on this subject are necessary to clarify this issue. Although many studies have reported that age and BMI might be risk factors for the development of TSH, such a relationship has not been reported in other publications (5)(6)(7)(8)(26) . Especially in studies evaluating patients who underwent bariatric surgery, obesity has been reported to be an important risk factor for TSH due to the high intraabdominal pressure and the full thickness of the preperitoneal area, which causes a challenge for closure and an increase in wound infection (17,18) . Likewise, being aged over 60 or 70 years has been reported to be associated with increased TSH (8,15,27) . Our study population was relatively younger and had a lower average weight than the populations in the published literature reporting increased risk, and no relationship was found between age or BMI and TSH development. This could be explained by the fact that our study group did not have the extreme values stated in the literature regarding the mentioned factors. Contrary to previous studies reporting an association between surgical duration and TSH, surgical duration did not differ between patients with and without TSH in our study (8,27) . Moreover, the highest TSH rate was observed in tubal ligation cases, which had the shortest surgical duration. The fact that tubal ligation cases had higher values in terms of parity, which was determined as an important risk factor for TSH also in this study, may have masked the effect of surgical time.
In a recently published study, excessive manipulation of the trocar site to remove specimens during surgery was reported to be an important risk factor for TSH formation (28) . The authors also suggested avoiding conditions that increased abdominal pressure such as coughing within 2 weeks after surgery. In our study, the port site for specimen removal was not mentioned in the surgical notes. Likewise, we did not have any data on the exposure of patients to conditions that might increase intraabdominal pressure in the early postoperative period.
Study Limitations
Our study has limitations. Missing data in the medical records are a structural limitation of our study; we were unable to control or evaluate many factors that could affect hernia development, such as the use of different brands and sizes (10 or 12 mm) of trocars. The performed surgeries have technical variations because different surgeons performed the procedures. Differences in entering the abdomen (e.g., trocar insertion angle, excessive manipulations) and closure techniques may have influenced the outcomes. Also, the structure and the size of our sample may not be sufficient to determine the effect of individual factors, such as age, BMI, wound infection, and comorbidities.
The strength of our study is the diagnostic method we use in diagnosing TSH. Regardless of the physical examination findings, the evaluation of all patients using USG enabled us to diagnose all asymptomatic or subclinical cases. Thus, we believe that the frequency of TSH detected in our study reflects the true prevalence of this complication. Also, contrary to the relevant literature, which mostly includes general surgery and urology patients, the data of our study, which consists of only female cases, can provide predictions about the risk of TSH in common basic gynecologic procedures. Another important finding of our study is the high rate of TSH in young women with high parity. We believe that the high parity, which has not been sufficiently evaluated in studies published to date, should be considered as an important risk factor for hernia development regardless of the type of surgery.
Conclusion
Our findings suggest that the prevalence of TSH is higher than previously reported, and an ultrasonographic examination is sufficient for identifying subclinical types of this complication. The authors declared that this study had received no financial support. | 2020-10-11T00:37:24.803Z | 2020-09-01T00:00:00.000 | {
"year": 2020,
"sha1": "ff6781669d04357b8319da572f0b357ec4092513",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.4274/tjod.galenos.2020.70952",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ff6781669d04357b8319da572f0b357ec4092513",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
220746843 | pes2o/s2orc | v3-fos-license | Anxiety and Avoidance in Adults and Childhood Trauma Are Associated with Negative Religious Coping
Religion as a coping strategy is mostly connected with positive health outcomes. Yet, negative religious coping (NRC) has been associated with rather negative outcomes that affect one’s health. The aim of this study was to explore whether insecure adult attachment and childhood trauma are associated with higher NRC. A sample of Czech adults (n = 531, 51.1 ± 17.2 years; 43.5% men) participated in a survey. As measures, the NRC subscale of the Brief RCOPE, the Experiences in Close Relationships-Revised questionnaire, and the Childhood Trauma Questionnaire-Short Form (CTQ-SF) were used. From the whole sample, 23.7% of respondents reported higher NRC. Respondents with higher anxiety in close relationships were more likely to use negative coping strategies, with an odds ratio (OR) of 1.27 (95% confidence interval 1.01–1.59). Similarly, avoidance was associated with negative coping, with an OR of 1.41 (1.13–1.75). Moreover, each subscale of the CTQ-SF revealed a significant association with high summary NRC. Respondents who reported physical neglect scored highest on summary NRC, with an OR of 1.50 (1.23–1.83) after controlling for sociodemographic variables, but also for anxiety and depression. Our findings support the idea that childhood trauma experience and adult attachment style are associated with higher use of NRC strategies.
Introduction
Religion belongs among well-documented coping strategies, through which one can understand and deal with stressors [1]. When assessing religious coping, two forms can be distinguished: positive religious coping (PRC) and negative religious coping (NRC) [2]. PRC strategies reflect a secure relationship with God, spiritual connectedness, and meaning in life. On the contrary, NRC is characterized by spiritual tension, and conflicts and struggles with God and others in one's religious community [3].
As a multidimensional construct, religious coping has both positive and negative associations with health [2]. PRC has been associated with increased physical [4] and mental health [5], lower levels of depression [6], and a higher quality of life [7] compared with people who used NRC strategies. Regarding NRC, researchers reported mostly negative health outcomes and poorer psychological adjustment [3,6]. NRC strategies were associated with higher levels of depression [5,8], somatization or disordered eating pathology [5,9], worse quality of life, and lower life satisfaction [8,10] than in people using PRC strategies. Similarly, NRC strategies predicted worse physical functioning [11] and a decline in health [12,13], and were significantly associated with lower comprehension of one's illness and distrust of treatment efficacy [10]. These strategies were also related to higher suicidal risk [10,14] and a higher risk of mortality [15]. Minimizing the negative outcomes of NRC is thus very vital. Therefore, it is important to understand why individuals use NRC.
The first reason people use NRC may lie in their attachment strategies. One's beliefs about and relationship towards God have been found to be similar to human attachment relationships [16,17]. For example, avoidant attachment to a person was positively associated with avoidant attachment to God [17,18] and the desire to keep God at a distance [19,20]. Similarly, anxious attachment to a person was associated with anxiety in attachment to God [17] and thus may be related to a tendency to feel abandoned by God or church and even feel punished by God [19].
The second explanation could be that the inclination to draw on PRC or NRC strategies in crises could be associated with one's image of God [21]. Whereas individual's God concept (i.e., explicit image) can be influenced by many factors, including family, religious community or education and is usually expressed in verbal descriptions of God [22,23], one's implicit image of God may be seen as the way one interacts with God at an emotional, relational and nonverbal level [24]. The development of the God image is closely connected to the attachment theory and relationship with a caregiver and thus one's image of God might be strongly affected by childhood trauma, the experience of maltreatment, or insecure attachment to parents during childhood [25,26]. Many childhood abuse victims tend to view God in rather negative terms, such as unloving, distant, or controlling [26,27]. Victims of traumatic events also reported a negative impact on their religiosity [26]. Nevertheless, in some cases, different traumas were found to be related to an increase in spirituality, because of a person's effort to understand why this had happened [28,29].
As a robust predictor of poor health-related outcomes, NRC has been separately assessed in some studies [9,30,31]. According to these studies, the prevalence of NRC varies from 7 to 50% in various populations [30]. This variation might be explained by the variability of criteria employed to determine the presence of NRC [32][33][34]. Other explanations could involve differences in the cultural context and situational or clinical factors. Thus far, most studies on religious coping and its associations with adult attachment or childhood trauma have been conducted outside of Europe [17,19,26,28,35]. Few studies have been carried out within a European context [36][37][38]. Thus, this study from the Czech Republic, which according to the Pew Research Centre [39] is the country with the highest percentage of religiously unaffiliated people in the world, could contribute to studies on NRC in very secular countries.
Therefore, the aim of this study is to explore the association of adult attachment and childhood trauma with NRC in a highly secular environment. We wanted to assess NRC, using both a total score and a more detailed analysis of individual items, to see which of these items showed the strongest association with our observed variables.
Participants and Procedure
The sample in our research was created by selecting from the original representative sample only the respondents who identified themselves as religious. The original sample of the Czech population aged fifteen years and older was obtained by using a two-step procedure. In the first step, the questionnaire and all further procedures were piloted among 206 participants. This led to the final version of the survey. In the second step, another 2184 participants were randomly chosen with the help of quota sampling and asked to participate in a study on health, life experiences, attitudes, and lifestyle. Quota sampling is a technique often used in research to imitate the known characteristics of the population in the sample, allowing relationships between subgroups to be observed. In this case, criteria that allowed the construction of a representative sample corresponding to the adult Czech population were used. Of these respondents, 384 (17.6%) refused to participate, mainly due to lack of time or no interest in the topic. The remaining sample consisted of 1800 respondents, among whom only some reported themselves as religious; therefore, the final sample consisted of 531 participants.
Data were collected by professionally trained administrators in September and October 2016 during a standardized face-to-face interview with the respondents. Participation in the survey was anonymous and voluntary, and respondents did not receive compensation for their participation. Participants signed an informed consent form prior to the study; this stressed the possibility of leaving the study at any time without giving reasons. The study design was approved by the Ethics Committee of the Olomouc University Social Health Institute, Palacky University in Olomouc (No. 03/2016).
Measures
All instruments were available in the Czech language. Religious background was obtained using self-developed questions on religiosity: 'At present, would you call yourself a believer?' with possible answers: yes, I am a member of a church or religious society; yes, but I am not a member of a church or religious society; no; no, I am a convinced atheist. The question assessed whether respondents consider themselves religious and whether they are affiliated to a specific religion or religious practice.
Religious attendance was measured as the frequency of attending church or religious sessions using the question: "How often do you go to church or to religious sessions?" Possible answers were: never, occasionally; often, but not every week; once a week; more than once a week. Those who reported attending religious sessions at least once a week were considered attending.
Prayer frequency was assessed by the question: "How much time do you devote to personal prayer (excluding religious gatherings)?" with possible answers: at least half an hour a day; approximately 10 min every day; approximately 10 min together per week; I pray only occasionally, I don't pray.
Religious coping was assessed using the negative religious coping subscale (NRC) of the Brief RCOPE [3]. It is composed of 7 items rated on a four-point scale with possible answers ranging from 'not at all' (1) to 'a great deal' (4), and the total score ranges from 7 to 28. NRC items reflect a religious struggle that grows out of a more tenuous relationship with God. In the analyses, NRC was assessed as a dependent variable. For the purpose of dichotomisation, the approach of Fitchett et al. [32] was followed for the further categorization of responses. Each of the item scores was dichotomized. Scores of 1 or 2 were recoded to '0' (did not use NRC) and scores of 3 or 4 recoded to '1' (used NRC).
To determine the NRC sum, a dichotomous variable was created with a value of '1' if any of the seven NRC items had a value of '1' [30]. Cronbach's alpha was 0.84 in our sample.
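A minimal sketch of this scoring rule (the responses below are illustrative; the thresholds follow the dichotomization described above, with the summary flag set if any item is positive):

```python
# Raw Brief RCOPE NRC responses for one hypothetical respondent (7 items, 1-4).
nrc_items = [1, 2, 3, 1, 1, 2, 1]

# Dichotomize each item: 1-2 -> 0 (did not use NRC), 3-4 -> 1 (used NRC).
nrc_binary = [1 if score >= 3 else 0 for score in nrc_items]

# Summary NRC indicator: 1 if any item was used.
nrc_summary = int(any(nrc_binary))

print(nrc_binary, nrc_summary)  # [0, 0, 1, 0, 0, 0, 0] 1
```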
Experience in close relationships was assessed using the shortened version of the Experiences in Close Relationships-Revised (ECR-R-16) questionnaire [40], which was validated for the Czech environment [41]. It is composed of 16 items rated on a seven-point scale, with possible answers ranging from 'totally disagree' (1) to 'totally agree' (7), and measures two dimensions of attachment-related experience. Each subscale consists of eight items. The Anxiety subscale measures the extent to which people are insecure about the availability and responsiveness of a partner or a close relation, while the Avoidance subscale measures the extent to which people feel uncomfortable being close to others. In the main analyses, both subscales were assessed as a binary variable created by dichotomizing the score with the subscale's upper quartile as the cut-off point. Cronbach's alpha was 0.70 in our sample for both subscales.
To assess childhood trauma, the Childhood Trauma Questionnaire-Short Form (CTQ-SF) [42] was used. It is a standardized 28-item self-report inventory developed to measure the severity of five types of abuse and neglect in childhood or adolescence by the following subscales: Emotional Abuse, Physical Abuse, Sexual Abuse, Emotional Neglect, and Physical Neglect. Each subscale contains five items with a 5-point Likert-type scale ranging from 'never' (1) to 'very often' (5), leading to scores from 5 to 25 for each subscale. Besides these, the CTQ-SF also has a three-item minimisation/denial validity scale that was developed to detect the underreporting of maltreatment [42]. The CTQ-SF measure was introduced by the statement "The following questions are related to some of your childhood or adolescent experiences" in order to be sure that the trauma occurred in childhood/adolescence. Cronbach's alpha for the CTQ-SF subscales in our sample ranges from 0.62 to 0.89.
Anxiety and depression were assessed by the Anxiety and Depression subscales of the Brief Symptom Inventory (BSI-53) [43,44]. The introductory instruction was: "How much have the following symptoms distressed or bothered you during the past month?" It was followed by items rated on a five-point scale with possible answers ranging from "not at all" (0) to "extremely" (4). In the main analyses, both subscales were assessed as binary variables created by dichotomizing the scores at the subscale's upper quartile. Cronbach's alpha was 0.83 for the Anxiety subscale and 0.88 for the Depression subscale. Gender, age, education, and marital status data were obtained through the questionnaire.
Statistical Analyses
In the first step, we described the background characteristics of the sample and the distribution of NRC item responses. Nonparametric methods were used to compare different sociodemographic groups. The Wilcoxon sign-rank test was used to compare gender; in other cases, when more than two groups were compared, we used the Kruskal-Wallis test. We then assessed the associations of the two attachment dimensions, anxiety and avoidance, and the five types of childhood trauma experiences with negative religious coping (in total and each of the seven items separately) using binary logistic regression models: first crude (Model 1), then adjusted for gender, age, marital status, and education (Model 2). Finally, to establish whether the positive relationships between negative coping and recollected trauma or attachment insecurity were not merely a spurious effect of general anxiety and/or depression, a third model (Model 3) was additionally adjusted for background levels of depression and general anxiety, to account for respondents already showing general negativism. Each of the independent variables was assessed in a separate model. All analyses were performed using the statistical software package IBM SPSS version 21 (IBM Corp., Armonk, NY, USA).
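A compact sketch of the three nested models, assuming a hypothetical analysis data set with illustrative variable names (the study itself used SPSS; the snippet below only mirrors the model structure): each exposure is entered in a separate model, and exponentiated coefficients give the odds ratios with 95% confidence intervals reported in Tables 2 and 3.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
# Hypothetical analysis data set (one row per respondent); values are random placeholders.
df = pd.DataFrame({
    "nrc": rng.integers(0, 2, 531),              # summary NRC (0/1)
    "physical_neglect_z": rng.normal(0, 1, 531), # CTQ subscale, z-scored
    "female": rng.integers(0, 2, 531),
    "age": rng.normal(51, 17, 531),
    "married": rng.integers(0, 2, 531),
    "university": rng.integers(0, 2, 531),
    "anxiety_high": rng.integers(0, 2, 531),
    "depression_high": rng.integers(0, 2, 531),
})

formulas = {
    "Model 1 (crude)": "nrc ~ physical_neglect_z",
    "Model 2 (+ sociodemographics)": "nrc ~ physical_neglect_z + female + age + married + university",
    "Model 3 (+ anxiety/depression)": "nrc ~ physical_neglect_z + female + age + married + university"
                                      " + anxiety_high + depression_high",
}
for name, formula in formulas.items():
    fit = smf.logit(formula, data=df).fit(disp=0)
    # Exponentiate the coefficient and its confidence bounds to get OR (95% CI).
    or_ci = np.exp(pd.concat([fit.params, fit.conf_int()], axis=1))
    print(name, "\n", or_ci.loc["physical_neglect_z"].round(2))
```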
Description of the Population
The background characteristics of the sample (mean age 51.1; SD = 17.2; 43.5% men) are presented in Table 1. Of all respondents, 23.7% reported NRC. Elderly respondents scored higher in NRC than younger respondents (p = 0.012). However, a comparison of the groups according to religious practice (church membership, attendance at church services, prayer frequency) did not reveal any significant differences. Table 2 shows the results of the binary logistic regression aimed at assessing the associations of adult attachment (anxiety and avoidance) with NRC. The results of the crude and adjusted models were slightly different; in most cases the figures in Model 3 (adjusted for general anxiety and depression) were lower than in Model 1 (the crude one) and Model 2 (adjusted only for sociodemographic variables). Item NRC-7 was significant only for anxiety in a close relationship adjusted for sociodemographic variables; however, after controlling for general anxiety and depression this association was not found. Both anxiety and avoidance in close relationships were associated with a significantly higher summary NRC, with an odds ratio (OR) of 1.27 (95% confidence interval (CI) 1.01-1.59) for anxiety and an OR of 1.41 (1.13-1.75) for avoidance, after controlling for sociodemographic variables and for general anxiety and depression.
Negative Religious Coping and Childhood Trauma Experience
The results of the binary logistic regression assessing the associations of childhood trauma with negative religious coping and its separate items, crude and adjusted (Models 1-3), are presented in Table 3. The results obtained from the regression models showed that each of the CTQ subscales was associated with higher NRC even after controlling for the spurious effect of general anxiety and depression. Physical neglect was associated with the highest risk of NRC, with OR = 1.50 (1.23-1.83). Moreover, physical neglect was associated with a higher risk of NRC in each item separately. Physical neglect was also the only subscale that showed a significant association with the statement 'I wonder whether God had abandoned me' (NRC item 1).
Table 2. Associations of experience in a close relationship (avoidance and anxiety) with negative religious coping and its items standardized to z-scores, crude, adjusted for age, gender, marital status, and education, plus adjusted for raw anxiety and depression: results of binary logistic regression models leading to odds ratios with 95% confidence intervals.
Discussion
The aim of this study was to assess the associations of adult attachment and childhood trauma with negative religious coping. We found that almost a quarter of religious population showed signs of NRC and we also observed higher NRC within the group of elderly respondents. Furthermore, we found that NRC was associated with both anxiety and avoidance in close relationship and with all five types of childhood trauma experience.
The finding of higher NRC among older respondents is in line with the results of other studies, e.g., [45], and might be explained by the greater use of active forms of coping among the young. Because of the higher demands of active forms of coping and their increased physiological vulnerability, older people are more likely to use passive forms such as religious coping [46].
We also found that respondents who reported anxiety in adult relationships were more likely to report higher NRC. These findings are consistent with those of other studies [19,35]. An explanation could be that when individuals worry about whether their partner is available and reliable, they can transmit their feelings to God and thus use NRC strategies more often. Therefore, we could expect that although individuals with high attachment anxiety may seek help from God or their religious community [37], they might find these sources inadequate. However, the cross-sectional design of this study does not allow us to draw any conclusions on the direction of causality. There may be a mutual influence, as Fitchett [32] and Gall [47] stressed the possibility that a negative perception of God is associated with increased levels of anxiety and distress. Therefore, one's views of God may affect relationships with other people and contribute to a problematic attachment to them. Moreover, it is also possible that individuals with NRC might be less likely to experience a safe relationship with God or with their religious community, which may consequently strengthen their insecure attachment style. Moreover, these participants might further feel abandoned or punished by God as a projection of their personal attachment style [19].
Additionally, we found that attachment avoidance was associated with NRC, which corresponds to the findings of Schottenbauer et al. [20], who reported attachment avoidance qualities as a predictor of NRC. However, our results diverge from Pollard et al. [19], who found no interaction between NRC and attachment avoidance. An explanation for this difference could be that the respondents who reported high attachment avoidance do not apply NRC strategies in a consistent way [19], therefore, the results in various studies might vary. Our findings might be supported by the idea that attachment anxiety and avoidance can be seen as a continuous state of insecurity [37] which could be distressing and may represent a negative impact on individual's life. In a continuous state of distress or in a long-term exposure to negative events, NRC strategies are used more frequently [6], thus positive association between avoidance and NRC can occur. Our results consequently seem to support the correspondence theory, which suggests that for insecurely attached individuals, their relationship to God corresponds to their human relationships [17,35]. Individuals can therefore also transfer their human relationship difficulties to their relationship with God.
Furthermore, we found that all subscales of the CTQ were associated with NRC. These results are consistent with the findings of other studies which have reported a negative impact of childhood trauma on religiosity [26][27][28]. Verbal, physical, and sexual mistreatment are related to difficulties in one's attachment to God and may lead to a tendency to view God as less loving, and more distant and controlling [26]. Moreover, when the CTQ subscales were assessed in their association with individual NRC items, physical neglect was found to be associated with each NRC item. Surprisingly, physical neglect was also the only subscale associated with the item focusing on abandonment by God. Thus, these results contrast with Granqvist's compensation theory [48], according to which individuals who experienced a difficult childhood may develop a positive relationship with a higher power that serves as a substitute and provides a secure base, so that they do not feel abandoned. Moreover, as respondents also reported other forms of NRC (i.e., feeling punished or questioning God's love and power) associated with childhood mistreatment, this rather supports the correspondence model [17], in which children neglected by their parents may more often transmit their feelings to God and feel that God does not care for them and punishes them.
In addition, we found no significant association between emotional and sexual abuse and some NRC items. Although respondents wondered what they had done that God would punish them, questioned God's love, or felt abandoned by the religious community, they did not feel abandoned or punished by God for their lack of devotion. These findings contrast to those of other authors, who found strong associations between sexual and physical mistreatment and a concept of God as distant [27] and an association of feelings of distance from God with emotional neglect [49]. Nevertheless, as our respondents reported that childhood sexual abuse played no role in feeling abandoned by God, our results are consistent with a concept of God as a protective factor and a source of more positive forms of coping [27,28]. However, in these cases, the identity of the abuser seems to play an important role in the further perception of the trauma [27] and therefore should be considered in surveys while assessing the consequences for an individual's relationship to God [48] and the tendency to use NRC strategies. The other explanation could be a social desirability bias in the survey that reflects the effort to report religious coping strategies in accordance with social expectations where negative attitudes to God could be considered morally unacceptable [50].
Finally, the comparison of the three models showed the differences between crude and adjusted data. Whereas the difference between crude model and model adjusted for age, gender, marital status and education was only slight, comparing these models to the model adjusted also for background levels of general anxiety and depression revealed differences. After checking for a spurious effect of general negativism in Model 3, the results showed no associations between anxiety in close relationship and NRC items except for the feelings of punishment from God and NRC summary. The comparison of groups in this model showed that association between NRC and childhood trauma and attachment avoidance and anxiety can be related to general negativism. Moreover, it is possible that adverse childhood experiences and attachment insecurity can be associated with higher adult anxiety and/or depression in general, which can consequently negatively influence one's religious coping.
Our findings of an association between the feelings of being punished by God and negative religious coping support the idea of Pollard [19] that insecure attached individuals can feel punished by God as a projection of their attachment style. Moreover, Model 3 revealed similar results for associations between childhood trauma and negative religious coping. We found associations between physical neglect and all NRC items. This seems to be in line with the findings of other authors [17,27], that difficulties and experience of neglect in childhood may be reflected in the later perception of God and thus lead to increased usage of negative religious coping strategies.
Strengths and Limitations
This study has several important strengths. The most important is its response rate. It is also one of the few studies that assesses the associations of negative religious coping with adult attachment and childhood trauma experience in a secular environment. However, the high rate of religiously unaffiliated respondents in the original sample limited the sample size for this study. Another limitation is the cross-sectional design of the study, which does not allow us to make causal inferences. The third limitation may involve cultural awareness, as our study does not reflect a particular cultural context. Furthermore, the last limitation concerns information bias, as our data were based on self-reports of respondents, which might be influenced by social desirability as religiously affiliated respondents might have responded according to their images of God and religiosity. These limitations should be included in a follow-up study in order to achieve a better and more precise understanding of underlying processes that affect the tendency to use maladaptive religious coping.
Implications
Our findings suggest that attachment avoidance and anxiety as well as childhood experience of maltreatment may affect NRC. Framed within a multidisciplinary approach toward dealing with the history of childhood trauma or with the attachment insecurity, NRC might be worth considering for professional counselling interventions in the area of spirituality aimed at lowering the use of NRC. The counsellor or spiritual guide can obtain information about patient's religious background or whether the patient uses religion to cope with his or her trauma. This can contribute to the culturally sensitive awareness of a counsellor.
At the same time, using NRC strategies can serve as a sign of attachment insecurity and distress, and could therefore be informative for professionals in other areas. Further research is needed to explore the role of religiosity in both one's partner and one's parents in the development of individual religiosity and one's image of God. The role of a perpetrator of violence should be further considered. Moreover, further research should focus on unravelling the causal pathways.
Conclusions
Our findings suggest that adult attachment and childhood trauma are associated with negative religious coping. Attachment anxiety and avoidance may be transmitted to the relationship to God and lead to increased use of NRC strategies. Similarly, individuals who suffered any form of childhood trauma may tend to view God as rather distant and unloving, and they might be more likely to use NRC. Thus, this study offers a deeper understanding of the factors that might contribute to the use of maladaptive NRC. | 2020-07-23T09:06:36.669Z | 2020-07-01T00:00:00.000 | {
"year": 2020,
"sha1": "5dde0210f33994a8e7e7cd94bbb13dc21a949767",
"oa_license": "CCBY",
"oa_url": "https://res.mdpi.com/d_attachment/ijerph/ijerph-17-05147/article_deploy/ijerph-17-05147-v2.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "727ff8abb9460f0b17720fd81464e60b3788cf38",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine",
"Psychology"
]
} |
136795611 | pes2o/s2orc | v3-fos-license | A model of material removal and post process surface topography for copper CMP
Increasing systemic error during copper CMP (Chemical Mechanical Planarization) is due to the uneven surface topography generated during the process. A mechanistic model based on a fundamental understanding of the process constituents was proposed to predict material removal rates and the post CMP topography. Two synergistic mechanisms were proposed: 1) chemically dominant behavior is explained by the repetitive removal and formation of a protective layer on copper surface and chemical dissolution during the process, 2) mechanically dominant removal mechanism is due to the material behavior of copper at the nano-scale and subsequent oxidation and removal of the plastically deformed copper. As a step forward to optimize the process and the manufacturing system, this model was extended to explain pattern dependent variability during copper CMP. © 2010 Published by Elsevier Ltd.
Introduction
Copper CMP (Chemical Mechanical Planarization) is a technique used to planarize uneven wafer surfaces during semiconductor manufacturing processes. It has been a key enabling technology for copper multilevel metallization of semiconductor interconnects. With scaling down of the semiconductor devices and interconnects, and enlargement of the wafer size, the process requirements for copper CMP have become more challenging (i.e. requiring less variation in height (step height) of the polished surface and less defect level, etc) [1]. Moreover, major yield loss during manufacturing of the semiconductor devices is increasingly related to the design of the chips (i.e. systemic errors) rather than to the processes (i.e. random errors) as the devices are scaled down. Design for Manufacturability (DfM) has been adopted to address this systemic error of the manufacturing system. However, currently available DfM tools utilize an empirical process model (based on Preston's equation [2]) of copper CMP, requiring a significant amount of calibration through experiment. Therefore, a new CMP model that reduces the burden of the calibration is required.
Although many CMP models have been developed by different researchers [2][3][4][5], most of them were insufficient at explaining the material removal mechanism during copper CMP. A synergistic model was proposed by Tripathi et al. for copper CMP [6]. They assumed that the passivation layer formed is thick enough that the abrasives or pad asperities remove only some fraction of the passivation layer at the top. They also neglected the direct removal of copper by the action of abrasives or asperities. However, it is evident from experimental data [7][8][9] that mechanical action alone (without any chemicals) can remove some copper. Also, the time between consecutive asperity and copper interactions is too short for forming a thick layer of the protective (passivation) material. Therefore, both mechanical and chemical aspects and their synergism must be considered to account for the material removal mechanism during copper CMP. In this study, a quantitative and mechanistic model of copper CMP that predicts MRR is proposed and extended to explain pattern dependent variability during the process.
Nomenclature: A_as — average contact area between an asperity and copper.
Modeling of Copper CMP to Predict MRR
A quantitative and mechanistic model for copper CMP is proposed, partly based on the model by Tripathi et al. [6]. To overcome the limitations of the previous models, a wide range of input parameters is considered and the synergism between chemical and mechanical aspects is taken into account. The model can be illustrated as follows: A fraction of the copper surface is occupied by protective material formed by slurry chemistry. During the interaction of copper and a CMP pad asperity/abrasives a fraction of the protective material is removed, then reformed before the next asperity/abrasives interaction. During the time period between consecutive asperity and copper interactions, active chemical dissolution of copper occurs at both protected and unprotected sites, but with different rates. The removal of copper by the dissolution and by the protective material removal is termed chemically dominant material removal. In addition, the force applied by abrasives trapped between pad asperities and copper will cause plastic deformation of the copper if the shear stress induced by the abrasives overcomes the shear strength of copper. This deformation is restricted to the regions where defects in the crystal structure are accumulated (such as at grain boundaries). Deformed material will be piled up along the trenches created by the sliding path of the abrasives, then successively oxidized and removed by subsequent asperity/abrasive interactions. This type of material removal is termed mechanically dominant material removal. The overall MRR during copper CMP (MRR_total) is estimated by adding the chemically dominant (MRR_chem) and mechanically dominant (MRR_mech) components.
Chemically Dominant Material Removal Mechanism
The chemically dominant mechanism can be explained using Figure 1. Once the force exerted on copper by pad asperities is determined (Figure 1(a)), the fraction of the protective material that is removed during the interaction of an asperity and copper can be estimated (Figure 1(b)). This is defined as the removal efficiency. The frequency of asperity/copper interactions and the removal efficiency determine the characteristic time t_0, which dictates how much area of copper is occupied by the protective material; this is termed the coverage ratio (Figure 1(c)). The coverage ratio right after the interaction with an asperity is the value at t_0, and right before the next interaction with an asperity it is the value at t_0 + t_as. During copper CMP the coverage ratio changes cyclically between these two values, and the amount of copper oxidized (by dissolution and by forming the protective material) can be calculated by evaluating the area under the current density curve between t_0 and t_0 + t_as. Thus, the chemically dominant removal of copper can be calculated using Faraday's law as shown in equation (1). The three components of this mechanism are explained as follows. The frequency of asperity/copper interactions can be determined by evaluating the real contact area ratio and the average area of each contact. These data were obtained from optical images of the contact interface [10]. Based on these data, the interval between consecutive asperity/copper contacts was determined to be 1-10 ms and the duration of each contact was 10 µs.
The relationship between the removal efficiency and the force applied on copper by an asperity was determined by comparing the area swept by an asperity and the area swept by the abrasives trapped between the asperity and copper during the interaction of an asperity and copper. It was assumed that the sweeping action of abrasives embedded in an asperity generates linear trenches on the copper surface. The determined relationship between the removal efficiency and the number of trapped abrasives for typical CMP conditions is shown in Figure 2(a). The contact area between an asperity and copper was assumed to be 100 µm², and the abrasives were 100 nm in diameter.
The adsorption kinetics of the protective material on copper in pH 4 aqueous solution containing 0.01M BTA and 0.01M glycine were theoretically analyzed [11], supplying direct input to the model. The chemically dominant material removal rate during copper CMP can be determined by considering all of these inputs.
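Equation (1) itself is not reproduced in this text; as an illustration only, the sketch below applies the generic Faraday's-law conversion from a cycle-averaged anodic current density to a removal rate, using placeholder numbers rather than the measured current densities — it is not the authors' exact formulation.

```python
# Illustrative Faraday's-law estimate of the chemically dominant removal rate.
# Numbers are placeholders for a typical copper CMP condition, not measured data.
M_CU = 63.5e-3        # kg/mol, molar mass of copper
RHO_CU = 8960.0       # kg/m^3, density of copper
F = 96485.0           # C/mol, Faraday constant
N_E = 2               # electrons per Cu -> Cu(2+) oxidation

def chem_mrr(i_avg_A_per_m2: float) -> float:
    """Removal rate (m/s) from a cycle-averaged anodic current density.

    i_avg should already reflect the coverage-ratio cycle between t_0
    (just after an asperity pass) and t_0 + t_as (just before the next one).
    """
    return i_avg_A_per_m2 * M_CU / (N_E * F * RHO_CU)

i_avg = 50.0  # A/m^2, placeholder cycle-averaged current density
rate_m_per_s = chem_mrr(i_avg)
print(f"{rate_m_per_s * 1e9 * 60:.1f} nm/min")  # ~110 nm/min for this placeholder input
```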
Mechanically Dominant Material Removal Mechanism
Previous analysis of the mechanical response to the force applied by abrasives to copper was based on mechanical material properties obtained at the macro-scale [3]. The predicted MRR [3] was three orders of magnitude higher than the experimental values [7][8][9], suggesting the need for a new approach. Figure 2(b) shows the interaction of an abrasive and copper during copper CMP. The force exerted on copper by an abrasive was evaluated to be 0.1-1 µN for a typical copper CMP process. Since the size of the grains of copper (about 1 µm in diameter) is much larger than the contact area of an abrasive (about 10 nm in diameter), the plastic deformation of copper would have to be induced by homogeneous nucleation of dislocations in the copper crystal. Therefore, nano-scale material behavior is highly relevant to this case. The resolved shear strength of copper measured by nano-indentation inside a grain was 10.6 GPa [12], which is nearly three orders of magnitude larger than the value at the macro-scale (42 MPa). This value is in fact much larger than the shear stress induced by the abrasives during copper CMP, 1.4-2.9 GPa (estimated by assuming 1000 abrasive particles of 100 nm diameter are trapped in 100 µm² of contact area between copper and an asperity), implying that the deformation of copper is elastic when an abrasive interacts with the inside of a copper grain. Therefore, it is suggested that plastic deformation only occurs at sites with high concentrations of defects, such as grain boundaries and the vicinity of plastically deformed regions. Once the copper is plastically deformed, copper will be piled up along the trench formed by the indentation (Figure 2(b)). The surface of the piled-up material will be preferentially oxidized by the oxidant and dissolved oxygen in the slurry because of the high density of defects and the large surface-area-to-volume ratio. The oxidized copper will be more brittle than pure copper, and thus the oxidized layer will be removed by subsequent interactions of asperities or abrasives. Therefore, the mechanically dominant removal rate can be calculated from the volume of deformed and subsequently oxidized copper, as given by equation (2).
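The stress estimate quoted above can be checked with simple contact arithmetic; the sketch below uses the per-abrasive force and ~10 nm contact diameter stated in the text (order-of-magnitude assumptions, not the authors' equation (2)) and compares the resulting stress with the nano-scale and macro-scale strength figures.

```python
import math

# Per-abrasive force and contact size as stated in the text (order of magnitude).
force_N = 0.2e-6          # ~0.1-1 uN transmitted to a single trapped abrasive
contact_diam_m = 10e-9    # ~10 nm contact diameter on the copper surface

contact_area = math.pi * (contact_diam_m / 2) ** 2
stress_GPa = force_N / contact_area / 1e9
print(f"contact stress ~ {stress_GPa:.1f} GPa")  # ~2.5 GPa, within the quoted 1.4-2.9 GPa range

# Compare with the strength values quoted in the text.
NANO_SHEAR_STRENGTH_GPA = 10.6    # inside a grain (nano-indentation)
MACRO_SHEAR_STRENGTH_GPA = 0.042  # macro-scale value (42 MPa)
print(stress_GPa < NANO_SHEAR_STRENGTH_GPA)   # True: elastic inside a defect-free grain
print(stress_GPa > MACRO_SHEAR_STRENGTH_GPA)  # True: yields only at defect-rich sites
```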
Predictions of the Proposed Model
The material removal rate of a 4" copper wafer in a pH 4 slurry containing 0.01 M BTA, 0.01 M glycine and 5 wt% alumina abrasives (100 nm diameter) where potential of copper was controlled to 0.6 V (vs. SCE) by external voltage was estimated to validate the proposed analysis. Here we assume that the average area of contact between the copper and an asperity is independent of the applied pressure, but that the number of asperities that contact the copper increases with increasing pressure. Thus the time between consecutive asperity-copper interactions at a given site is inversely proportional to the real contact area ratio and the sliding velocity: The real contact area ratio Ar% is a linear function of the applied pressure because it captures the number of asperities contacting the copper along with their area [10]. Hence, the time between consecutive asperity-copper interactions is inversely proportional to the applied pressure. Note that with a fixed average area of contact between a given asperity and copper, the number of abrasives embedded in a given asperity and the force transmitted to a single abrasive particle are independent of pressure unless the concentration of abrasives changes. Thus, neglecting any influence of pressure on sliding velocity one would expect the efficiency of a given asperity in removing copper or protective material to be independent of pressure. Finally, when evaluating the force applied to an abrasive particle, it was assumed that only the abrasive particles embedded between an asperity and copper transmit the force applied by the asperity (i.e. the asperity itself does not deform enough by supporting abrasives to contact the surface of copper).
The chemically dominant material removal rate of copper was estimated using equation (1) for current densities measured experimentally in our previous work using glycine solutions containing BTA and externally controlled potentials to induce oxidation [11]. Copper was assumed to oxidize to the Cu²⁺ oxidation state. t₀ was determined from the kinetics of adsorption of BTA and equation (3). The estimated chemically dominant removal rate of copper in this environment is shown in Figure 3. The mechanically dominant removal rate of copper, estimated using equation (2) for the same conditions, is also shown in Figure 3. Three different values of the fraction of the copper surface undergoing plastic deformation by the abrasives (K₁), namely 0.05%, 0.1% and 0.2%, were assumed. It was also assumed that all the deformed copper forms ridges that are readily oxidized and then removed by subsequent abrasives (see Figure 2(b)). The total removal rate of copper during CMP under these conditions shows that the material removal behavior does follow Preston's equation, supporting the proposed analysis. Also, the magnitude of the MRR is on the order of that observed during conventional copper CMP.
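The conversion behind equation (1) is not reproduced in this extraction; a generic way to turn a measured anodic current density into a copper thickness-loss rate is Faraday's law, sketched below under the stated Cu → Cu²⁺ assumption. The example current densities are placeholders, not the measured values of [11].

```python
# Hedged sketch: a standard Faraday's-law conversion from an anodic current density to a
# copper thickness-loss rate, assuming Cu -> Cu2+ (two electrons) as stated in the text.
# This is a generic conversion, not a reproduction of the paper's equation (1).
F_CONST = 96485.0      # C/mol, Faraday constant
M_CU    = 63.55        # g/mol, molar mass of copper
RHO_CU  = 8.96         # g/cm^3, density of copper
N_E     = 2            # electrons per Cu atom for Cu -> Cu2+

def removal_rate_nm_per_min(current_density_mA_cm2: float) -> float:
    i = current_density_mA_cm2 * 1e-3                       # A/cm^2
    cm_per_s = i * M_CU / (N_E * F_CONST * RHO_CU)          # cm of Cu dissolved per second
    return cm_per_s * 1e7 * 60.0                            # nm/min

for i_mA in (0.1, 0.5, 2.0):                                # placeholder current densities
    print(f"{i_mA:4.1f} mA/cm^2  ->  {removal_rate_nm_per_min(i_mA):6.1f} nm/min")
```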
Modeling to Explain Pattern Dependent Variability
Utilizing the proposed mechanistic CMP model, a modeling framework for the pattern dependent variability is proposed, as shown in Figure 4. The success of this approach depends on the evaluation of the input parameters to the proposed MRR model. The influence of the wafer topography on the input parameters was qualitatively investigated. Assuming that the length and size of the pad asperities follow a probability distribution function (such as a Gaussian distribution), only asperities that are narrower than the trench and long enough can reach the bottom of the trench. In addition, large asperities can deform to reach the bottom of a smaller trench. These two mechanisms affect the number and size of the asperity-copper contacts, ultimately influencing the time between consecutive asperity/copper interactions, the removal efficiency and the mechanically dominant removal of copper. As the metal line width increases, the number of asperities that can reach inside the metal line increases, resulting in a decreased time interval between consecutive copper/asperity interactions. This frequent interaction will remove more protective material on the copper surface, allowing more copper to be dissolved. Also, more copper will be deformed and subsequently oxidized by the frequent abrasion by abrasives trapped by the asperities. The expected output is more dishing for wider metal lines, as experimentally confirmed [13]. Also, the fact that the grain size of copper is smaller at narrow features on the wafer than at the wider features [14] suggests that copper at narrow lines will be more susceptible to mechanically dominant removal, resulting in dishing. Once the input parameters of the proposed MRR model are evaluated by considering these effects, the local MRR and thus eventually the post-CMP topography can be determined if the MRRs of the dielectric and barrier materials are known.
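A toy version of the geometric screening argument above is sketched below; the Gaussian parameters for the asperity radius and protrusion length, as well as the trench depth, are invented for illustration and are not taken from the paper.

```python
# Illustrative sketch of the screening argument: with asperity radius and length drawn from
# assumed Gaussian distributions (placeholder parameters), count the fraction of asperities
# that can reach the bottom of a trench of a given width and depth.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
radius = rng.normal(25.0, 10.0, n).clip(min=1.0)    # um, asperity tip radius (assumed)
length = rng.normal(3.0, 1.5, n).clip(min=0.1)      # um, asperity protrusion length (assumed)

trench_depth = 0.5                                   # um (assumed)
for line_width in (1, 5, 20, 100):                   # um
    reaches = (2 * radius < line_width) & (length > trench_depth)
    print(f"line width {line_width:4d} um: {reaches.mean():.1%} of asperities reach the trench bottom")
```

The fraction of asperities that can interact with the copper inside the line grows with line width, which is the mechanism invoked above for the increased dishing of wider lines.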
Conclusion
A quantitative and mechanistic model of copper CMP was proposed considering the synergism between the chemical and mechanical aspects of the process. The proposed MRR model was extended to explain pattern dependent variability by considering the influence of the topography on the input parameters. | 2019-04-28T13:14:10.387Z | 2011-11-26T00:00:00.000 | {
"year": 2011,
"sha1": "3a52d613b0ee8ee6d3625c63f8a26c0a64d342ee",
"oa_license": null,
"oa_url": "https://doi.org/10.1016/j.proeng.2011.11.082",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "ff94760705fabbbc9da82610364040d981423db1",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
233619388 | pes2o/s2orc | v3-fos-license | Topoclimatic zoning of continental Chile
ABSTRACT In this study, the topoclimates of continental Chile are mapped. The mapping involves the identification of homogeneous zones based on the relationships between the climatic variables that characterize a location and the topography that influences the spatial behavior of these variables. The climatic and topographical zoning of the study area is conducted using a statistical methodology based on a combination of principal component analysis and cluster analysis. The climate, topography, and topoclimatic zoning yield 20, 8, and 96 clusters, respectively. Maximum topoclimatic variability is identified in sectors with mountain ranges and intermediate depression (especially in valley areas), and minimum variability is detected in the coastal sector. Furthermore, only one of the topoclimatic units has an area larger than 50,000 km², whereas 46.8% of the units have surface areas below 2,000 km².
Introduction
The climatic conditions of a location must be known for the development of various functions and applications, especially in areas such as agriculture, forestry, ecosystems, energy, and management of water resources (OMM, 2011). These conditions are influenced by atmospheric patterns and the topographic characteristics of a given location (Garreaud, 2011;Giorgi et al., 2003;Leung et al., 2003).
Chile has a wide variety of climates due to the effects of ocean-atmospheric and topographic factors such as the Humboldt Current, Andes, Cordillera de la Costa, Pacific Anticyclone, and Western Wind System. Globally, the country has an arid climate from its northern border at 17°S to 32°30′S, a temperate climate from 32°30′S to 43°S, and a cold climate extending from 43°S to the southern tip of the country at 56°S (Sarricolea et al., 2017).
Locally, the climate distribution varies depending on the landforms (Gil & Olcina, 2017; Romero & Vinagre, 1985), which form a barrier to oceanic influences in the west and to precipitation from the Amazon in the east, modify the temperature depending on the altitude, and cast shadows on the valleys, changing the daily and annual temperature regimes (Antonioletti et al., 1972; Juliá et al., 2008). Only 20% of Chile's surface is flat (Gaete et al., 2006), with altitudes ranging from 0 to 6,800 m (Errázuriz et al., 1998). Topographically, four macroforms are distributed in continental Chile, which from east to west are: the Andes Mountain Range, which is the main orographic unit of the country, the Intermediate Depression, the Coast Range, and the Coastal Plains.
Climate studies require the provision of accurate and extensive meteorological information that is available on demand (OMM, 2011;UNGRD, 2014). Chile's National Agroclimatic Network (RAN), consisting of public and private entities, includes 322 meteorological stations (Automatic Weather Stations, AWS; Caroca et al., 2015). However, the homogeneity of parameters and spatial representativeness were not considered during the installations, making it difficult to conduct climate studies more precisely than in the past (Garreaud, 2011;Meza, 2014;Uribe et al., 2012).
While researchers have recently established distinct climate zones for Chile, the methodologies differed in their degree of detail and classification strategy. Such studies include the edaphoclimatic zoning of Coquimbo (Morales et al., 2006), bioclimatic zoning of Chile (Uribe et al., 2012), agroclimatic zoning of Chile (Santibáñez et al., 2017), and an update of the Köppen-Geiger climate classification for continental Chile (Sarricolea et al., 2017). In this study, a geographical information system (GIS) technology was integrated with statistical methods.
We developed homogeneous zone maps for defining areas with relationships between the climatic variables and topography (Romero & Vinagre, 1985) based on independent climate and topographic classifications (Juliá et al., 2008). The methodology of this study can be applied to other countries using climate information layers that have already been developed; therefore, it is less time-consuming and less computationally intensive than other climate studies. Based on this method, high-quality national climatic information maps can be generated, which can be used by researchers working on related topics and decision-makers involved in territorial planning (Canu et al., 2015; Canu et al., 2006) and the preparation of public policies aimed at the economic and social development of the country.
Data
The 26 bioclimatic layers elaborated in the Bioclimatic Atlas of Chile (Uribe et al., 2012, Supplementary Material 1) were used in raster format at a scale of 1:250,000 with a 90-m spatial resolution. These variables correspond to water deficit (WD), growing degree days (GDD), annual degree days (ADD), potential evapotranspiration in January and July (ETP_jan and ETP_jul), water surplus (WS), relative humidity in January and July (RH_jan and RH_jul), chilling hours (CH), aridity index (AI), humidity index for January and July (HI_jan and HI_jul), wet period (WP), dry period (DP), frost-free period (FFP), mean annual precipitation (MAP), solar radiation in January and July (SR_jan and SR_jul), summer severity (sum_SEV), winter severity (win_SEV), average temperature in January and July (TM_jan and TM_jul), maximum temperature in January and July (TX_jan and TX_jul), and minimum temperature in January and July (TN_jan and TN_jul).
A 1-km geographic grid was used for the statistical analysis (Bell et al., 2007; Caroca et al., 2015; Vicente-Serrano et al., 2016). For the climatic variables, the average value of the data contained in each grid cell was obtained. For the topographic variables, the mean altitude and the maximum slope were calculated to capture the maximum changes in the topography (Felicísimo, 1994), and the standard deviations of the convexity and roughness were calculated to assess the topographic heterogeneity (Beguería & Lorente, 1999).
Climatic and topographic zoning
Based on the data and the study reported by Cortez et al. (2020), climatic and topographic zoning was independently conducted using principal component analysis (PCA) and cluster analysis (CA).
Principal component analysis
PCA was used to reduce the dimensionality of highly correlated (>0.6) data. During PCA, linear combinations of original variables are generated, and components that explain less of the data are eliminated (Pino & Mulsow, 1983). Thus, the repetition of information can be avoided, and the processing can be simplified (Demey et al., 1994;Vyas & Kumaranayake, 2006). PCA creates new variables or components that contain the most important information from the original data sample in terms of variation (Morales et al., 2008;Peña, 2002).
Cluster analysis
CA was used to form groups with the same characteristics based on similarities or dissimilarities between the data resulting from the PCA (Núñez-Colín & Escobedo, 2011; Rueda et al., 2016), yielding maximum homogeneity within each group and maximum heterogeneity between them (De la Fuente, 2011; Vilá et al., 2014). A non-hierarchical grouping method with a K-means algorithm (Hartigan & Wong, 1979) and Euclidean distance (Castro et al., 2012; Morales et al., 2006; Rodríguez & Menzonet, 2004; Wagstaff et al., 2001) was used because it is more appropriate for large data matrices. One hundred iterations of this process were performed to achieve greater consistency in the formation of clusters. The data clusters were then spatialized to generate topographic and climatic zoning maps.
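The authors carried out these steps in R; as a purely illustrative restatement of the pipeline (standardisation, PCA, then K-means with 100 restarts and Euclidean distance), a scikit-learn sketch is given below, with a placeholder data matrix standing in for the 1-km grid of bioclimatic variables.

```python
# Minimal sketch of the PCA + K-means workflow described above; the array `climate_grid`
# (one row per 1-km cell, one column per bioclimatic variable) is a placeholder, not the
# actual dataset, and R's default K-means (Hartigan-Wong) is replaced by scikit-learn's Lloyd.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
climate_grid = rng.normal(size=(10_000, 26))        # placeholder: 10,000 cells x 26 variables

X = StandardScaler().fit_transform(climate_grid)    # put variables on comparable scales
pca = PCA(n_components=0.95).fit(X)                 # keep components explaining 95% of the variance
scores = pca.transform(X)

labels = KMeans(n_clusters=20, n_init=100, random_state=0).fit_predict(scores)  # 20 climate clusters
print(pca.n_components_, "components retained; cluster sizes:", np.bincount(labels))
```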
Topoclimatic zoning
The topoclimates were identified by combining the climatic and topographic cartographies, assigning a specific value to each unique intersection of zones. In addition, areas accounting for less than 0.2% of the national surface area were eliminated because they were considered negligible at the scale of this study.
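A minimal sketch of this crossing-and-filtering step is given below; the two label rasters are placeholders, and only the 0.2% area threshold is taken from the text.

```python
# Illustrative sketch of the zone-crossing step: combine a climate-cluster raster and a
# topography-cluster raster into unique topoclimatic classes, then drop classes covering
# less than 0.2% of the territory. The arrays stand in for the real 1-km rasters.
import numpy as np

rng = np.random.default_rng(1)
climate = rng.integers(0, 20, size=(500, 500))      # 20 climate clusters (placeholder raster)
topo = rng.integers(0, 8, size=(500, 500))          # 8 topographic clusters (placeholder raster)

topoclimate = climate * 8 + topo                    # unique id for each climate/topography pair
ids, counts = np.unique(topoclimate, return_counts=True)
keep = ids[counts / topoclimate.size >= 0.002]      # retain classes covering >= 0.2% of cells
print(f"{len(ids)} raw crossings, {len(keep)} retained after the 0.2% area threshold")
```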
In addition, a nomenclature was developed for each topoclimate based on three thermal-type and two hydric-type variables, which describe and differentiate topoclimatic variations throughout Chile when considered together. The thermal variables considered were the ADD, used to evaluate the accumulation of temperature between the northern and southern regions of the country, and the summer and winter thermal amplitudes (sum_TA and win_TA), which are indicators of the degree of continentality of a topoclimate (due to the moderating effect of the ocean). Regarding the hydric variables, the AI was considered because it reflects the relationship between water availability and evapotranspiration, and the DP was considered to differentiate the water concentration regimes in the country. In the event that these descriptors were not sufficient to differentiate the topoclimates, altitude (h), latitude (l), and distance to the coast (c) were also considered in the nomenclature.
The classification of the descriptive variables and the associated nomenclature was adapted from Uribe et al. (2012) and is shown in Table 1.
The four topographic variables are represented by two principal components, which account for 94.26% of the accumulated variance (Table 2). The third and fourth components were not considered because of their low contributions to the variance (Figure 2b). The variables that contribute the most to the first and second components are convexity, slope, and roughness; and altitude, respectively.
Cluster analysis
Twenty climate clusters were identified (Figure 3), representing the climatic variability on a national scale. A marked longitudinal distribution of the climates can be observed, which is affected by the altitudinal gradient in the east-west direction. In addition, a prominent coastal strip is observed, which is associated with the coastal plains, extending from the Arica and Parinacota regions to the Araucanía region. Because the Cordillera de la Costa is less developed in the Araucanía region, the coastal plains extend toward the interior of the region (except for the Cordillera de Nahuelbuta sector). Another trait that can be identified is the presence of well-defined central and transverse valleys (the transverse valleys are mainly associated with the semi-arid sector of Norte Chico, while the central valleys exhibit the Mediterranean climate of the central zone), along with an Andean strip of high mountain climates, which in the Norte Grande corresponds to the highest altitudes in the country. In addition, a transitional climatic strip can be observed before the high mountain range sector, which corresponds to the Andean Highlands and extends into the Atacama Desert (cold desert climate).
Based on the topographic zoning, eight clusters can be identified (Figure 4), which can be used to delimit the main relief formations at the national level and topographic features, such as the central and transverse valleys associated with the Intermediate Depression and poorly developed coastal plains in the Norte Grande, due to the presence of the Farellón Costero (coastal cliff in Northern Chile). However, these coastal plains can be viewed in sectors containing the main cities of the extreme north of the country (Arica, Iquique, and Antofagasta). The Andean Plateau can also be delimited as well as the high peaks of the Norte Grande, Patagonia, and Eastern Plateau in the Magallanes Region. The latter can be differentiated from the mountain range sector and coastal area with irregular geography (numerous fjords, channels, and archipelagos). These topographic features are directly related to the climate in these areas.
Topoclimatic zoning
A total of 96 topoclimatic areas were obtained nationwide (Main Map). Longitudinally, the greatest topoclimatic variability was identified in the foothills and mountain ranges of the country and toward inland areas, where the effect of ridge lines and mountain ranges creates scenarios of greater variability for the local topoclimates. This variability differs from that of areas near the coast, which exhibit homogeneous topoclimates due to the moderating effect of the seawater on the temperature regime. At the latitudinal level, the greatest dispersion of topoclimatic units occurred in the extreme south of the country, especially in the Aysén and Magallanes regions (with the exception of the Magellanic Pampa), due to the rugged topography, which leads to highly variable topoclimatic phenomena.
In the Norte Grande and central zone of the country, the topoclimatic units are uniform with respect to both extension and distribution. It has been estimated that the total areas of 37 of the 96 homogeneous units (46.8%) are below 2,000 km² (Table 3) and 22 of those units (22.9%) are below 1,000 km². Only one topoclimatic unit, the Magellanic Pampa, has a total surface area above 50,000 km², accounting for 6.6% of the total surface area of continental Chile. In contrast, the smallest topoclimatic unit (170.5 km²) was observed in the northeastern mountain range sector of the Santiago Metropolitan Region.
In northern Chile (Table 3), the topoclimates of the high mountain ranges (those with altitudes above 1800 m) had the lowest temperature accumulation, followed by the foothill topoclimates, while the intermediate plains areas had the highest temperature accumulation. In addition, the northern zone mainly presented an arid climate, with the exception of the Andean plateau zone and a coastal strip less than 20 km in width, which presented higher humidity.
In central Chile, mountain range topoclimates (those at altitudes higher than 1200 m) also presented lower temperature accumulation; temperature accumulation increased in the coastal area and reached its maximum values in the central valley zone. Regarding the hydric conditions of the area, mountain range topoclimates presented humid conditions, while the interior zone was humid to sub-humid, and the coastal zone was semi-humid. In southern Chile, topoclimates with low temperatures (microthermal) predominated, and a high contrast was observed between rainfall in the western and eastern areas of this region, which ranged from hyper-humid to semi-humid regimes.
Discussion
The methodology used for zoning was suitable for identifying the topoclimates of continental Chile. PCA has been used to address complex problems in climatology (Reusch et al., 2005;Uddin et al., 2019). It allows the volume of information to be reduced by obtaining new uncorrelated variables. These variables retain the information that explains the maximum variability of the sample, which is fundamental for the application of CA (Rueda et al., 2016;Vyas & Kumaranayake, 2006). CA has been used to determine and map climatic zones (Mahlstein & Knutti, 2010; Nojarov, 2017). In this study, it was used to identify groups of continuous climatic and topographic data based on a grid with 1 km resolution, yielding climatic and topographic particularities at the national level. Both statistical methods are useful for the classification of climates in small areas (Cortez et al., 2020); however, they also provide consistent results in more extensive and complex territories in terms of the climate variability and relief such as in the case of Chile. Based on the generated cartographies, Chile's climate zoning (Figure 3) obtained in this study is consistent with the Köppen climate classification presented by Rioseco and Tesser (2006). A spatial conformation of the climate was observed, with marked latitudinal and altitudinal influence and a generic distribution in several interior areas of the Norte Grande, central valleys, and Magellanic Pampa. However, the results of this study differ from those based on the Köppen climate classification with respect to the level of detail; in the present study, the zoning has a higher spatial resolution.
The updated Köppen-Geiger climate classification for Chile (Sarricolea et al., 2017) contains 25 different climatic denominations, which is broadly consistent with the 20 homogeneous areas obtained in this study. Regarding the distribution, similarities were observed with respect to the identification of climatic convexity in the mountain range and foothills sectors, mainly in the Norte Grande and Norte Chico, the Andean plateau sectors, the transverse valleys of Norte Chico, and the central valleys extending from the Metropolitan region of Santiago to the Araucanía region. The main differences were observed in the extreme south of the country because the climate in the present zoning has a significant longitudinal distribution.
The zoning of continental Chile (Figure 4) yields topographic patterns similar to those obtained by Riedemann et al. (2010). However, Riedemann et al. (2010) only considered altitude as a topographic parameter. In addition, the topographic maps were compared with the geomorphological regions of Chile developed by Börgel (1983). Both studies yield a longitudinal distribution of the relief types; however, Börgel's (1983) study is more detailed in the north of Chile, whereas it is more generalized in the south. The materials and processes associated with the formation of these geomorphological regions were not considered in this study because the zoning is solely based on the relief parameters, without considering their origin.
The topoclimatic zones (Main Map) were compared with those of the Agroclimatic Atlas of Chile by Santibáñez et al. (2017), who identified agroclimatic districts at the national level using PCA and CA methodologies applied only to climatic variables, with the topographic component incorporated through the modelling of those variables. This implies that the distribution of the topoclimatic areas of our study differs from that of the agroclimatic districts. The delimitation of the agroclimatic districts is considerably affected by altitude levels because the climatic variables were modeled based on a DEM. In addition, compared with the aforementioned study, the topoclimatic areas exhibit a more discontinuous behavior because they were intended to reflect the variability of the relief while considering additional parameters.
Mapping the climatic types of a location is important for decision-makers; it facilitates the analysis of spatial information and provides a basis for decision-making (Carver, 1991; Malczewski, 2006) regarding issues that affect the country, such as development, water scarcity, and the conservation of natural resources. In addition, the methodology can be used for zoning global climate change scenarios, as Chile is a country with high vulnerability (Araya-Osses et al., 2020).
Conclusions
Ninety-six homogeneous topoclimatic areas were identified in continental Chile. The knowledge of their spatial distribution contributes to various areas of interest and supports the zoning, planning, and management of the territory considering the current difficulties and threats that can impact natural systems and the development of productive activities. The mapping of topoclimatic zones can be used for the analysis of other climatic processes such as the identification of zones of influence for meteorological stations, proposals of new stations in areas with little or no coverage, strengthening climatic prediction and phytosanitary alert models, and phenological crop forecasting.
Software
The maps were created using ESRI ArcGIS 10.4 software and the climate surfaces provided by Uribe et al. (2012). Statistical procedures were performed using the R-Studio 3.4.3 software.
Data availability
The datasets (digital layer of the spatial information in shapefile format) generated for this study can be found in the following repository: https://doi.org/10.34691/FK2/ULMNHV.
Open Scholarship
This article has earned the Center for Open Science badge for Open Data. The data are openly accessible at https://doi.org/10.34691/FK2/ULMNHV.
Disclosure statement
No potential conflict of interest was reported by the author(s). | 2021-05-05T00:09:51.663Z | 2021-03-10T00:00:00.000 | {
"year": 2021,
"sha1": "56a0e30c6ccd0d88f5ff61d0a9e727556914af68",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/17445647.2021.1886188?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "8a07460a71d8e0d6e951ddd960aae7b31bd79024",
"s2fieldsofstudy": [
"Environmental Science",
"Geography"
],
"extfieldsofstudy": [
"Geology"
]
} |
73562830 | pes2o/s2orc | v3-fos-license | Development of Wavelength Shifters for the ArDM Argon Dark Matter Detector
Wavelength shifting organic fluors such as tetraphenyl butadiene (TPB) can shift VUV and UV light into the visible blue high quantum efficiency region of low cost borosilicate windowed bialkali photomultiplier tubes (PMTs). Various thicknesses of TPB were deposited by spraying and vacuum evaporation onto both specular 3M™ foil and diffuse Tetratex™ (TTX) reflectors. 128 nm VUV light generated in 1 bar argon gas by a 5.4 MeV α source was detected by a 3-inch bialkali borosilicate PMT within a 1 m long tube lined internally with a TPB coated reflector. The light collection was recorded as a function of the separation between source and PMT for each combination of coating and reflector, for distances up to 1 m. Finally, the PMT window was coated in order to shift direct VUV light, and the relative efficiencies of each application were compared. The optimum coating and reflector combination was TPB evaporated on TTX. Measurements with coating thicknesses of 0.2 mg/cm² and 1.0 mg/cm² yielded similar performance. The best PMT window coating is obtained by TPB evaporation of 0.05 mg/cm².
Introduction
Tonne scale liquid argon (LAr) targets such as that used in the Argon Dark Matter (ArDM) experiment [1,2] typically require in excess of ten large area PMTs for acceptable light readout. The argon scintillation light due to neutral or charged particle excitation is in the vacuum ultraviolet (VUV), centered at about 128 nm [3,4,5]. Currently, large MgF₂ windowed PMTs that can detect 128 nm light are commercially not available. A common alternative technique is to apply wavelength shifting chemicals on borosilicate windowed PMTs, thus shifting VUV argon scintillation into the visible spectrum; the typical quantum efficiency (QE) of a borosilicate windowed PMT is 10 to 15% at approximately 430 nm. Tetraphenyl butadiene (TPB) powder has been used, which has an above average Stokes shift and can absorb 128 nm light, emitting in the required visible PMT region [6,7,8,9]. The argon scintillation light is characterised by two distinct decay times: a slow component, τ₂ (triplet excimer), and a fast component, τ₁ (singlet excimer) [4,10]. Determination of the time constants is through a multi-parameter nonlinear least-squares fit with additional degrees of freedom related to the height, start time and baseline of the pulses. The decay time of the slow component, τ₂, increases with the increase of argon purity and therefore can be used as a measure of the purity of the argon. The observed effect of the purity on the slow component decay time has been hypothesized to be due to water impurities colliding with the long-lived triplet state [11]. From the literature, the purest gaseous argon has a τ₂ of 3200 ± 300 ns [10]. The analysis of the measurements described in subsections 3.1 and 3.2 was based around this property of argon scintillation. Section 2 describes TPB coating methods and sample preparations. Section 3 details two experiments aiming at reflector selection and optimisation of TPB coating thickness and deposition technique, and presents the results obtained.
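As an illustration of the fitting procedure described above (and not the collaboration's actual analysis code), the sketch below fits a two-exponential scintillation model to a synthetic waveform; the fast-component lifetime and noise level are nominal assumed values, and the pulse start time is held fixed here although it is floated in the real fit.

```python
import numpy as np
from scipy.optimize import curve_fit

def pulse(t, a_fast, a_slow, tau_fast, tau_slow, baseline, t0=500.0):
    """Two-exponential scintillation model; t0 is fixed here for simplicity,
    whereas the real analysis also floats the pulse start time."""
    dt = np.clip(t - t0, 0.0, None)
    on = (t >= t0).astype(float)
    return baseline + on * (a_fast * np.exp(-dt / tau_fast) + a_slow * np.exp(-dt / tau_slow))

t = np.arange(0.0, 10_000.0, 1.0)                                          # ns
rng = np.random.default_rng(0)
y = pulse(t, 1.0, 0.3, 7.0, 3200.0, 0.01) + rng.normal(0, 0.005, t.size)   # toy waveform

popt, _ = curve_fit(pulse, t, y, p0=[0.5, 0.5, 10.0, 2000.0, 0.0])
print(f"fitted slow-component lifetime tau2 ~ {popt[3]:.0f} ns (purity estimator)")
```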
Reflector type
Our research focussed on two materials: ESR (Vikuiti™ Enhanced Specular Reflector foil) from the company 3M, and Tetratex™ (TTX) from the company Donaldson Membranes. 3M foil is a multilayer specular reflecting polymer film and as such is likely to be of high radio-purity. Its appearance is that of a polished metal, although the material is non-conducting by its nature. It has a specular reflection coefficient of approximately 100% in a large region of the optical spectrum. TTX is an aligned polytetrafluoroethylene (PTFE) fibrous cloth and is nearly a 100% diffuse Lambertian reflector. The TTX cloth used during these measurements was of 254 µm thickness, similar to the types previously used to wrap NaI crystals.
TPB deposition techniques on substrates and PMT windows
TPB powder can be applied to a reflector or PMT window by vacuum evaporation, spraying, or by dissolving it in a polymer matrix [6,7]. Vacuum evaporation was performed in an Edwards model E308 evaporation chamber. TPB powder, which has a melting point of 207 °C, was heated electrically in the vacuum chamber by applying a 24 A current to a molybdenum sample holder containing up to 3 g of powder. The reflector/PMT window was placed above the TPB powder at a fixed distance, and the coating thickness was controlled by varying this distance and the weight of the powder. Sprayed coatings were prepared by dissolving TPB in toluene in a ratio of 1 to 40. This solution was then airbrushed onto the substrate using 1.2 bar argon gas. The polymer matrix coatings were prepared using long-chain paraloid or polystyrene plastic fragments dissolved in toluene. An amount of TPB was added and dissolved isotropically. A known amount of liquid was then syringed onto the substrate. The TPB concentration within the solution was varied, as was the amount of liquid applied to the substrate. The solution was left for three hours to allow the toluene to evaporate, forming clear TPB-impregnated plastic.

A schematic illustration of the argon gas apparatus which was used is shown in Figure 1. This apparatus consisted of a sealed polyvinyl chloride (PVC) tube containing a 3 inch uncoated PMT (Electron Tubes type 9302KB). An α source located at the centre of a TPB coated reflector disk was placed within the tube at a fixed distance from the PMT (the total deposition of alpha energy occurs within 4 cm in 1 bar of gaseous argon). Samples of either 3M foil or TTX cloth coated with TPB were placed around the interior walls of the tube. A delivery tube was inserted into the PVC chamber and 99.9999% pure argon gas at 1 bar flowed throughout the apparatus. The argon flow rate was used to control the argon purity. Measurements were taken for varied TPB thicknesses between 0.2 mg/cm² and 4.0 mg/cm², which were deposited both via evaporation and spraying. Additionally, the distance between the α source and the PMT was altered in order to investigate the effect of both the attenuation of light following multiple reflections and the reduction in direct VUV light incident on the PMT. The number of photoelectrons collected at the PMT for each separation (defined as the total area of the light pulse) was then plotted against the slow component decay time (τ₂) for various distances d.
Global efficiency of wavelength shifting and reflection with distance
The results of the analysis are presented in Figure 2. The reduction in the total light collection with increasing distance was found, within errors, to be independent of TPB thickness and substrate. Evaporated coatings on 3M foil consistently underperformed, irrespective of coating thickness, compared to TTX cloth. Thicker coatings on 3M foil yielded higher light collection, whereas light collection from TPB coated on TTX substrates was found to be almost independent of thickness. The 0.2 mg/cm² TPB on TTX yielded, within errors, an identical result to the 1.0 mg/cm² coating.
Figure 2: Total photoelectrons for 3200 ns purity against separation from the alpha source to the PMT, for the TPB-coated reflector-walled tube.
Wavelength shifting direct light incident on the PMT
The argon gas apparatus for direct light measurements was constructed with a similar aspect ratio to the full scale ArDM target (Figure 3a). The experiment consisted of a sealed PVC tube containing a 3 inch coated PMT. The PMT window was coated with TPB powder with thicknesses ranging from 0.02 mg/cm² to 2 mg/cm² via evaporation, spraying and application of a polymer matrix containing TPB. The sides and base of the PVC tube were covered with 3M foil reflector coated with 1 mg/cm² TPB powder by evaporation. TPB coated reflector walls were used because the ability of the window coating to shift VUV light is equally as important as its ability to allow shifted visible light from the walls to penetrate. An α source was positioned 10 cm away from the PMT window and argon gas was flowed continually. The effect of various PMT window coatings on the total light collection was then recorded by plotting the slow component decay time (τ₂) against the total number of photoelectrons collected at the PMT.
Figure 3b presents the results for the optimum PMT coatings. The optimum thickness was 0.05 mg/cm² by evaporation, improving the total light collection by 28% ± 0.8% at 1000 ns purity compared to that collected with no PMT coating. To avoid TPB crystallisation, deposition by spraying must be slow, allowing evaporation of the toluene. For polymer matrix coatings, crystallisation of TPB could be avoided with the addition of a plasticiser, which cross-links paraloid/polystyrene chains with TPB, thus forming a rigid lattice while the solvent evaporates. Although the polystyrene matrices perform better than the paraloid ones, they may be unsuitable for use in the ArDM target as PMTs are a source of radioactivity and polystyrene is a weak scintillator.
Figure 1: A schematic illustration of the argon gas apparatus used to determine the wavelength shifting and reflection efficiency with distance.
Figure 3: a) A schematic illustration of the argon gas apparatus used to determine the wavelength shifting efficiency of direct light incident on a TPB coated PMT. b) Results for the best PMT window coatings (polystyrene and paraloid matrices, evaporation and spray). | 2018-12-26T23:33:09.584Z | 2009-08-24T00:00:00.000 | {
"year": 2009,
"sha1": "86793c10dda439abea2c105851592a52fdf1d4b8",
"oa_license": "CCBYNCSA",
"oa_url": "https://pos.sissa.it/064/099/pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "86793c10dda439abea2c105851592a52fdf1d4b8",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
118645810 | pes2o/s2orc | v3-fos-license | Non-relativistic limit of Randall-Sundrum model: solutions, applications and constraints
In the Randall-Sundrum model with one brane, we found the approximate and exact solutions for gravitational potentials and accelerations of test bodies in these potentials for different geometrical configurations. We applied these formulas for calculation of the gravitational interaction between two spheres and found the approximate and exact expressions for the relative force corrections to the Newton's gravitational force. We demonstrated that the difference between relative force corrections for the approximate and exact cases increases with the parameter $l$ (for the fixed distance $r$ between centers of the spheres). On the other hand, this difference increases with decreasing of the distance between the centers of the spheres (for the fixed curvature scale parameter $l$). We got the upper limit for the curvature scale parameter $l\lesssim 10\, \mu$m. For these values of $l$, the difference between the approximate and exact solutions is negligible.
Introduction
The idea of the multidimensionality of our Universe has been attracting continuous interest for many years. It takes its origin from the pioneering papers by Th. Kaluza and O. Klein [1], and now the most self-consistent modern theories of unification such as superstrings, supergravity and M-theory are constructed in spacetime with extra dimensions [2]. In Kaluza-Klein models, our spacetime is effectively four-dimensional due to the compactness and smallness of the extra dimensions (internal spaces). The size of the extra dimensions is restricted by the electroweak scale, ~10⁻¹⁷ cm. However, our spacetime can be effectively four-dimensional even in the case of infinite extra dimensions. This interesting scenario is realized in recently proposed brane world models (see, e.g., the reviews [3,4]). Here, matter fields from the Standard Model are trapped to a three-dimensional submanifold (brane) embedded in the fundamental multidimensional space (bulk), but gravity may move in the bulk. Localization of massless gravitons on a brane results in effective four-dimensional Einstein gravity in the low energy limit. Certainly, large and infinite extra dimensions are potentially detectable. This was one of the main reasons for the great interest in this scenario. Therefore, it is very important to suggest experiments which can reveal such extra dimensions.
In our paper, we consider the scenario that was first proposed in [5]. Here, the brane is embedded in the five-dimensional anti-DeSitter spacetime, which allows the extra dimension to be infinite. A negative bulk cosmological constant Λ₅ and a brane tension σ are fine tuned to each other. Clearly, this is a very simplified scenario. However, it gives a possibility to reveal some general features of the brane world models, in particular, the localization of the massless graviton on the brane that restores the Newtonian limit on the brane at large distances from the gravitating matter source. It was shown [5] that at distances greater than the curvature scale of the anti-DeSitter spacetime, r ≫ l ~ |Λ₅|^(−1/2), the gravitational potential takes an approximate form with an additive correction ~1/r³ to the usual Newtonian potential ~1/r. This approximate solution is much simpler than the exact one, which makes the investigation of the effects of the extra dimension much easier. In some papers (see, e.g., [6,7]) this approximation was used to calculate the gravitational interaction between gravitating test bodies of different geometrical form. But we should analyze the difference between the approximate and exact solutions to find out where the application of the approximate solution is appropriate. This is one of the main motivations of this work. To perform such an analysis, we obtain two types of solutions (approximate and exact) for gravitational potentials and accelerations of test bodies in these potentials for different geometrical configurations. Then, we apply these formulas to the case most interesting for experiments: the gravitational interaction between two massive spheres. We calculate approximate and exact corrections to the Newton's gravitational force and show that the difference between relative force corrections for the approximate and exact cases increases with the parameter l (for the fixed distance r between the centers of the spheres). On the other hand, this difference increases with decreasing distance between the centers of the spheres (for the fixed curvature scale parameter l). The relative force corrections also allow us to get the experimental constraint on the curvature scale parameter: l ≲ 10 µm. To get it, we use the results of the table-top inverse square law experiments for the measurements of the Newton's gravitational constant. This is one of the main results of our paper.
The paper is structured as follows. In section 2 we describe briefly the Randall-Sundrum model with one brane. Here, we consider the non-relativistic limit of this model and present approximate and exact solutions for the gravitational potential on the brane. These formulas are applied to some practical problems in section 3 to get approximate and exact expressions for the gravitational potential and acceleration of a point mass for these problems. In section 4 we investigate the gravitational interaction of two spherical shells. Then, in section 5 we compare the relative corrections to the gravitational force between two spheres in the approximate and exact cases. Here, we also get the constraint on the curvature scale parameter in the Randall-Sundrum model. A brief discussion of the obtained results is presented in the concluding section 6.
Non-relativistic limit of Randall-Sundrum model
The one-brane Randall-Sundrum metric is [5] where η_µν is the flat four-dimensional spacetime metric and the parameter l is defined via the 5-dimensional cosmological constant: i.e. l is the curvature scale of the 5-dimensional anti-DeSitter spacetime. The brane is embedded in this spacetime at ξ = 0 and has the fine tuned tension where G₅ is the 5-dimensional gravitational constant. In the one-brane Randall-Sundrum scenario, the extra dimension is infinite: ξ ∈ (−∞, +∞). Now, we want to probe this model with the help of gravitational terrestrial experiments, e.g., the inverse square law experiments. Certainly, this is the case of the non-relativistic limit of the model. In this limit, we need to get the gravitational potential ϕ(r) on the brane. Following, e.g., the calculations in [4], we obtain where r is the magnitude of the radius vector on the brane and where J and Y are Bessel functions of the first and second kinds, respectively. In the short and long distance limits the equation (2.4) reads, respectively: where we have introduced the Newton's gravitational constant and the parameter α.‡ Obviously, (2.6) corresponds to a strong deviation from the Newtonian gravity, but the formula (2.7) describes the smooth transition to the Newtonian limit. The exact expression (2.4) (the solid line) and its asymptotes (2.6) and (2.7) (the short-dashed and long-dashed lines, respectively) are depicted in figure 1. Here, we introduce the dimensionless distance argument η = r/l and the dimensionless potentials ϕ(η) = ϕ(r)/(G_N m/l).
Applications
Now, we want to apply the obtained formulas to terrestrial gravitational experiments. Obviously, the gravitational field on the Earth should not considerably differ from the Newtonian one. Therefore, we should use either the exact expression (2.4) or the approximate formula (2.7). Therefore, we shall get two classes of solutions: exact and approximate, respectively. Obviously, the approximate formula (2.7) looks much more simple. However, it is necessary to check the deviation of expressions based on it from the exact ones for real gravitational experiments. This is one of the main aims of the paper.
Infinitesimally thin shell
Let us consider first an infinitesimally thin shell of mass m = 4πR²σ, where R and σ are the radius and the surface mass density of the shell. Then, the gravitational potential of this shell at a point with radius vector r (from the center of the shell) for the approximate solution is given by (3.1) and (3.2). Obviously, these expressions are divergent when r → R. In the case of the exact solution we get (3.3) and (3.4); it is not difficult to verify that these exact solutions are also divergent when r → R. Formulas (3.2) and (3.4) demonstrate that inside the shell the gravitational potential is not a constant. Thus, a test body undergoes an acceleration (see (3.6) and (3.8) below), in contrast to the Newtonian case, i.e. Birkhoff's theorem is violated. The acceleration outside and inside of the shell is given by (3.5) and (3.6), which are divergent when r → R: −dϕ/dr (r ≷ R) → ∓ G_N m α/(2R²(r−R)²) → ∓∞ (the upper and lower signs correspond to r > R and r < R, respectively).

‡ It is worth noting that in the pioneering paper [5] α = l². The brane-bending effect [8] gives α = 2l²/3. In the paper [9], the authors have pointed out that different schemes of regularization result in different values of α. In our paper we follow the calculations in [4], where α = l²/2.
In the case of the exact solution, the corresponding accelerations are given by (3.7) and (3.8); these exact solutions are also divergent for r → R.
Spherical shell of finite thickness
Here, we consider a spherical shell of inner radius R₁ and outer radius R₂, with mass m = (4πρ/3)(R₂³ − R₁³) and a constant volume density ρ.§ For this geometry, the approximate gravitational potential reads (3.9)-(3.10). These expressions are logarithmically divergent in the vicinity of R₁ and R₂. For the exact solution we get (3.11)-(3.12); in contrast to the approximate formulas (3.9) and (3.10), these exact expressions are convergent in the limits r → R₁, R₂. The acceleration of a test body outside and inside of the shell is given by (3.13)-(3.14). These formulas are divergent in the limits r → R₁, R₂. The exact expressions for the acceleration, (3.15), involve integrals that are divergent in the vicinity of R₁ and R₂.

§ It is clear that the limit R₂ → R₁ is incorrect because, for constant ρ, it results in vanishing m and, vice versa, for fixed m the volume density ρ goes to infinity. Therefore, such a naive limit does not provide us with the correct transition to the formulas from the previous subsection.
Sphere
Obviously, all formulas for a sphere of radius R and mass m = 4πρR³/3 with a constant volume density ρ can be easily obtained from the equations (3.9), (3.11), (3.13) and (3.15) with the help of the evident substitutions: R₁ = 0 and R₂ ≡ R.
Gravitational interaction of two spherical shells
Let us consider now two spherical shells: the first with radii R₂ > R₁ and mass m = (4πρ/3)(R₂³ − R₁³), and the second with radii R′₂ > R′₁ and mass m′. Then, the potential energy of the gravitational interaction between these shells for the approximate solution reads (4.1), where r ≥ R₂ + R′₂ is the distance between the centers of the shells. In the case of the exact solution we get (4.2). Additional analysis shows that both of these expressions (4.1) and (4.2) are convergent in the limit r → R₂ + R′₂. With the help of these formulas, we can obtain the absolute value of the gravitational force between the two shells (4.3), where δ_F defines the relative deviation from the Newtonian expression G_N m m′/r². For the approximate and exact solutions we have, respectively, (4.4) and (4.5). These relative corrections δ_F are also convergent in the limit r → R₂ + R′₂. In the limit of large separation between the shells, r ≫ R_{1,2}, R′_{1,2}, we obtain from (4.4) that δ_F = 3α/r² + O(1/r³).
Constraints
The formulas obtained above can be used to place experimental restrictions on the parameters of the model; in our case, this is the curvature scale l. To get it, we can use the inverse square law experiments for two spheres. The potential energy of interaction and the gravitational force between two spheres follow from the previous section with the help of the substitutions: R₁ = R′₁ = 0 and R₂ ≡ R, R′₂ = R′. For example, the relative corrections to the gravitational force in the approximate and exact cases read (5.1) and (5.2), respectively. We remind that α = l²/2 (see the equation (2.9)). For definiteness, we shall use the parameters of the spheres from the Moscow Cavendish-type experiment [10]: R₁ ≈ 0.087 cm for a platinum ball with mass m₁ = 59.25 × 10⁻³ g, R₂ ≈ 0.206 cm for a tungsten ball with mass m₂ = 706 × 10⁻³ g, and the distance between their centers r = 0.3773 cm. It is clear that the use of the approximate solution for the gravitational interaction force makes the calculations much easier. But we should analyze the distinction between the approximate and exact solutions to find out where the application of the approximate solution is appropriate. The difference between the approximate and exact solutions for the relative force corrections is shown in figures 2 and 3. These figures demonstrate that the difference between these solutions increases with the parameter l (for the fixed distance r between the centers of the spheres) (figure 2). On the other hand, this difference increases with decreasing distance between the centers of the spheres (for the fixed curvature scale parameter l) (figure 3). Now, we want to estimate the curvature scale parameter l with the help of our formulas (5.1) and (5.2). To get it, we can use the values of the Newton's gravitational constant measured in [11,12]. They are G_N/(10⁻¹¹ m³ kg⁻¹ s⁻²) = 6.674 215 ± 0.000 092 and 6.674 252 ± 0.000 124, respectively. The relative errors ΔG_N/G_N show the accuracy of the measurements of the gravitational constant in the inverse square law experiments. If the correction δ_F due to the extra dimension is greater than these values, then we can detect the deviation from the Newton's law. Up to now, there is no experimental evidence for such deviations. Therefore, the relation |ΔG_N/G_N| = δ_F gives the upper limit for δ_F. In turn, the equations (5.1) and (5.2) show that δ_F ~ l². Therefore, from these equations we can get the upper limit for l, substituting, for definiteness, the values of the radii of the spheres and the separation between them from the Moscow experiment. Thus, for the Washington and Zürich experiments, in the case of the approximate formula (5.1) we get respectively 9.067 µm and 10.527 µm, and in the case of the exact formula (5.2) we obtain respectively 9.070 µm and 10.531 µm.
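Since the finite-size formulas (5.1) and (5.2) are not reproduced here, the sketch below uses only the large-separation limit δ_F ≈ 3α/r² = 3l²/(2r²) quoted at the end of section 4, together with the ΔG_N/G_N values above, to illustrate how the bound on l arises; treating the spheres as point masses gives numbers somewhat above the 9-10.5 µm obtained with the full formulas, but of the same order of magnitude.

```python
import math

# Illustrative estimate of the bound on l, using only the large-separation (point-mass)
# limit delta_F ~ 3*alpha/r^2 with alpha = l^2/2, i.e. delta_F ~ 3*l^2/(2*r^2).
# The paper's formulas (5.1)-(5.2), which keep the finite sizes of the spheres, give
# somewhat smaller upper limits on l.

r = 0.3773e-2                                   # m, separation of the sphere centers (Moscow experiment)
experiments = {
    "Washington": 0.000092 / 6.674215,          # |Delta G_N / G_N|
    "Zurich":     0.000124 / 6.674252,
}

for name, dG_over_G in experiments.items():
    # delta_F = 3 l^2 / (2 r^2)  <=  |Delta G_N / G_N|   =>   l <= r * sqrt(2 (dG/G) / 3)
    l_max = r * math.sqrt(2.0 * dG_over_G / 3.0)
    print(f"{name}: l <~ {l_max*1e6:.1f} micron (finite-size result of the paper: ~9-10.5 micron)")
```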
Of course, we get rather rough estimates for the upper limit of l. Anyway, we think that it gives a more or less correct value for the order of magnitude of l in the Randall-Sundrum model with one brane: l ≲ 10 µm. Figure 2 shows that for such values of l the difference between the approximate and exact formulas is negligible. It is worth noting that similar constraints were found in the table-top inverse square law experiments [13] and from astrophysical observations [14]-[18].
Conclusion
In our paper we have considered the one-brane Randall-Sundrum model. In the weak-field limit, we obtained the approximate and exact expressions for gravitational potentials and accelerations of test bodies in these potentials for different geometrical configurations. Some of these approximate formulas were already known (see, e.g., [6,7]), but the exact ones were found for the first time. We applied these equations to the calculation of the gravitational interaction between two spherical shells of finite thickness, which can be easily reduced to the case of spheres. Then, we found the approximate and exact expressions for the relative force corrections to the Newton's gravitational force between two massive spheres. It is clear that the use of the approximate solution makes the calculations much easier. But we should analyze the difference between the approximate and exact solutions to find out where the application of the approximate solution is appropriate. We found that the difference between relative force corrections for the approximate and exact cases increases with the parameter l (for the fixed distance r between the centers of the spheres). On the other hand, this difference increases with decreasing distance between the centers of the spheres (for the fixed curvature scale parameter l). Using the results of the tabletop Cavendish-type experiments measuring the Newton's gravitational constant, from the equations for the relative force corrections we got the upper limit for the curvature scale parameter, l ≲ 10 µm, in the Randall-Sundrum model. For these values of l, the difference between the approximate and exact solutions is negligible. | 2011-11-17T10:09:01.000Z | 2011-11-17T00:00:00.000 | {
"year": 2011,
"sha1": "f904760f27141794ea7178cf92652368df77df78",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1111.4046",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "f904760f27141794ea7178cf92652368df77df78",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
119078189 | pes2o/s2orc | v3-fos-license | The moduli spaces of S-fold CFTs
An S-fold has played an important role in constructing supersymmetric field theories with interesting features. It can be viewed as a type of AdS_4 solutions of Type IIB string theory where the fields in overlapping patches are glued by elements of SL(2,Z). This paper examines three dimensional quiver theories that arise from brane configurations with an inclusion of the S-fold. An important feature of such a quiver is that it contains a link, which is the T(U(N)) theory, between two U(N) groups, along with bifundamental and fundamental hypermultiplets. We systematically study the moduli spaces of those quiver theories, including the cases in which the non-zero Chern-Simons levels are turned on. A number of such moduli spaces turns out to have a very rich structure and tells us about the brane dynamics in the presence of an S-fold.
Type IIB string solutions with monodromies in K₆ in the S-duality group SL(2, Z). These solutions were obtained by quotienting the solutions corresponding to the holographic dual of Janus interfaces in 4d N = 4 super-Yang-Mills [22,23]. The former type of solutions is referred to as the S-fold in [17]. The S-fold solutions can be divided into two classes, known as the J-fold and the S-flip.
The J-fold solutions are those associated with a monodromy given by an element J ∈ SL(2, Z) with tr J > 2. The corresponding geometry can be constructed by using AdS₄ × S² × S² × Σ₂, where Σ₂ is a non-compact Riemann surface with the topology of a strip. The ends of the strip are then identified with a J-twisted boundary condition. It was shown in [17] that this type of solution preserves OSp(4|4) symmetry and is thus dual to 3d N = 4 superconformal field theories. The J-fold solutions can, in fact, be obtained as a quotient of a Janus interface solution. As a result, the quiver field theory dual of such a solution contains a component corresponding to such an interface, namely the T(U(N)) theory. From the brane perspective, one can introduce into the brane system a five-dimensional surface implementing the monodromy under the action of J. Among the possible choices of SL(2, Z) elements, we may take the monodromy to be associated with J_k = −ST^k; in this case, the corresponding J-fold gives rise to a Chern-Simons level k for one of the U(N) gauge groups. An example of such a configuration and the corresponding quiver theory is given by (2.18) and (2.19).
The S-flip solutions can be discussed in a similar way to the J-folds. In this case, the SL(2, Z) element implementing the monodromy is taken to be S. Geometrically, we need to perform an exchange of coordinates corresponding to the two S² in AdS₄ × S² × S² × Σ₂, together with a flip at the S-interface such that Σ₂ becomes a Möbius strip topologically. Similarly to the J-fold, the insertion of the S-flip into a brane system gives rise to a T(U(N)) link between two U(N) gauge groups, whose Chern-Simons levels are zero. It was shown in [17] that the S-flip solutions preserve OSp(3|4) and the dual superconformal field theory is expected to have N = 3 supersymmetry.
In this paper, we consider the Hanany-Witten brane systems with an insertion of S-flips or J-folds, as well as the three dimensional quiver theories that arise on the worldvolume of the D3 branes. Let us summarise the main points. For the system with an S-flip, the quiver consists of a T(U(N)) link between two U(N) gauge groups with zero Chern-Simons level. We find that such a theory has two branches of the moduli space, namely the Higgs and the Coulomb branches. The Higgs branch of such theories is given by a hyperKähler quotient described at the beginning of section 3. The Coulomb branch, on the other hand, can be computed in a very similar way to the usual 3d N = 4 gauge theories [24], with the remark that the Coulomb branch dynamics does not receive a contribution from the vector multiplets of the gauge groups that are linked by T(U(N)). In other words, the segment of the D3 branes passing through the S-flip does not move along the Coulomb branch directions. We also check that these results are consistent with mirror symmetry, namely the Higgs (resp. Coulomb) branch of a given theory agrees with the Coulomb (resp. Higgs) branch of the mirror theory, obtained by applying S-duality to the original brane system. Subsequently, we turn on non-zero Chern-Simons levels for the U(N) gauge groups in the quiver. We focus on the abelian theories in section 4. The models analysed in this section are, in fact, a generalisation of those studied in [16,17,25] in the sense that we also include bifundamental and fundamental matter, along with the J-fold, in the quivers. This makes the moduli space highly non-trivial; for example, it may contain many non-trivial branches. We, however, do not have a general prescription to compute the moduli space for non-abelian theories with T(U(N)) links and non-zero CS levels. Nevertheless, in section 5, we show that, for theories that arise from N M2-branes probing Calabi-Yau 4-fold singularities, it is possible to compute the Hilbert series for each configuration of magnetic fluxes.
The paper is organised as follows. In section 2, we give a brief summary of the brane configurations for linear quivers and compact models, as well as a brief review of the S-fold solutions and (p, q) fivebranes. In section 3, quiver theories corresponding to the brane systems with S-flips are examined. The Higgs and the Coulomb branches of the moduli space are studied using the Hilbert series. We also provide a consistency check of our results against mirror symmetry. In section 4, we then consider abelian theories arising from the brane systems with J-folds, along with NS5 and D5 branes. We systematically analyse various branches of the moduli space. In section 5, we examine an example of a non-abelian theory with T(U(N)) links that can be realised on M2-branes on a Calabi-Yau fourfold singularity. In this example, we compute the Hilbert series of the moduli space and analyse the contribution from each configuration of magnetic fluxes. We conclude the paper in section 6 and discuss some open problems for future work. The technical analysis for theories with many J-folds is collected in Appendix A.
S-fold solutions and their SCFT duals
A large class of N = 4 quiver gauge theories in three dimensions can be engineered using brane systems involving D3, D5 and NS5 branes [4]. Each type of brane spans the following directions:

        0  1  2  3  4  5  6  7  8  9
  D3    X  X  X           X
  NS5   X  X  X  X  X  X
  D5    X  X  X              X  X  X

(2.1)

The x^6 direction can be taken to be compact or non-compact.
2.1 Linear quivers: T^σ_ρ(SU(N)) and its variants

If the x^6 direction is non-compact, we obtain a linear quiver of the form (2.2), where a circular node with a label N denotes a U(N) gauge group and a square node with a label M denotes a U(M) flavour symmetry. This class of linear quivers was studied in [6], and each of the theories in this class is denoted by T^σ_ρ(SU(N)) for some N, with σ and ρ partitions of N.
From the brane perspective, if we move the D5-branes to one side and the NS5-branes to the other side, N is the total number of D3-branes in the middle, σ contains the differences between the numbers of D3-branes on the left and on the right of each D5-brane, and ρ contains the differences between the numbers of D3-branes on the left and on the right of each NS5-brane. Let us provide an example for N = 6, σ = (3, 2, 1) and ρ = (2^2, 1^2). To read off the quiver gauge theory, it is convenient to move the D5-branes inside the NS5-brane intervals. Since three-dimensional mirror symmetry [26] exchanges D5-branes and NS5-branes [4], it also exchanges σ and ρ. A quiver description of T^σ_ρ(SU(N)) for general σ and ρ can be found in, for example, [27, sec. 2] or [28, sec. 2.1].
The T(SU(N)) theory. A theory that plays an important role in this paper is the one with σ = ρ = [1^N]. Such a theory is denoted by T(SU(N)) and its quiver description is

(2.5)   U(1) − U(2) − · · · − U(N−1) − [U(N)] ,

with the brane configurations for T(SU(3)) given in (2.6) as an explicit example. In general, T(SU(N)) is invariant under mirror symmetry. The Higgs and the Coulomb branches of this theory are both isomorphic to the closure of the maximal nilpotent orbit of SU(N) [6], which is denoted by N_{SU(N)}. We can conveniently define N_{SU(N)} as the set of N × N complex matrices M such that tr(M^p) = 0 for p = 1, . . . , N; the quaternionic dimension of this space is therefore N(N − 1)/2. For quiver (2.5), the symmetries of the Higgs and Coulomb branches are thus both SU(N); the former is manifest in the Lagrangian (or quiver) description as a flavour symmetry, whereas the latter is not manifest but gets enhanced from the topological symmetry U(1)^{N−1} in the infrared.
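As a quick numerical illustration of this definition (not part of the original analysis), strictly upper-triangular matrices are nilpotent and hence lie in N_{SU(N)}; one can verify the trace conditions directly:

import numpy as np

# Strictly upper-triangular matrices are nilpotent, so they lie in the
# closure of the maximal nilpotent orbit N_{SU(N)}: tr(M^p) = 0 for all p.
N = 4
rng = np.random.default_rng(0)
M = np.triu(rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)), k=1)

for p in range(1, N + 1):
    assert abs(np.trace(np.linalg.matrix_power(M, p))) < 1e-9

# Quaternionic dimension of the orbit closure: N(N - 1)/2
print(N * (N - 1) // 2)  # -> 6 for N = 4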
The T(U(N)) theory. An important variant of the T(SU(N)) theory is the T(U(N)) theory [6, sec. 4.4]. The latter is defined as a product of the T(SU(N)) theory and an "almost trivial" T(U(1)) theory, where the latter can be characterised as follows. The Coulomb and Higgs branches of T(U(1)) are trivial; each of them consists of only one point. Nevertheless, T(U(1)) comes with a U(1) × U(1) background vector multiplet, along with an N = 4 background mixed Chern-Simons term with level 1 between such U(1) vector multiplets. Explicitly, the action for the quiver (2.7), consisting of two U(1) nodes with CS levels k_1 and k_2 connected by a T(U(1)) link, is given in the N = 2 notation by (2.8) (see e.g. [29, (4.4)]), where Σ_i, V_i (with i = 1, 2) are, respectively, the N = 2 linear multiplet and vector multiplet of the i-th gauge node, and Φ_i are the N = 2 chiral multiplets of the N = 4 vector multiplets of the i-th gauge group. In that equation, we highlight the contribution from the mixed Chern-Simons terms due to T(U(1)) in blue. We emphasise that the mixed Chern-Simons terms come with level −1 in our convention for T(U(1)). Thus, one may view the T(U(N)) theory as having a global symmetry U(N) × U(N), such that the two U(1) subgroups of each U(N) act trivially on the theory, and such that an N = 4 background mixed Chern-Simons term with level −N is added for the two corresponding U(1) background vector multiplets. It should be mentioned that there is a close cousin of the T(U(1)) theory, called T̄(U(1)) in [11]. This theory can be defined almost in the same way as above, except that the minus signs in the blue terms of (2.8) are changed to plus signs. In other words, the level of the mixed Chern-Simons terms is +1. One can then define the T̄(U(N)) theory as a product of T(SU(N)) and T̄(U(1)). As a consequence, T̄(U(N)) has a global symmetry U(N) × U(N), such that the two U(1) subgroups of each U(N) act trivially on the theory, and such that an N = 4 background mixed Chern-Simons term with level N is added for the two corresponding U(1) background vector multiplets.
Compact models
Let us now take x^6 to be a circular direction. We refer to this type of configuration as compact models. An example of this is as follows, where the loop around the node denotes a hypermultiplet in the adjoint representation of the U(N) gauge group. The mirror theory can be obtained simply by applying S-duality to the above brane system in the usual way.
The holographic duals of linear quivers and compact models
Both linear quivers and compact models have known holographic duals in string theory. Type IIB supergravity solutions have been found in [27,30]. Historically, these solutions descend from the seminal works [22,23], where AdS_4 × S^2 × S^2 × Σ_2 backgrounds were found, with Σ_2 a non-compact Riemann surface with the topology of the infinite strip R × I with coordinates (y, x), where I is an interval. The dual field theory is supposed to be four-dimensional SYM with a space-dependent coupling constant, since the ten-dimensional metric is asymptotically AdS_5 × S^5 in the limit y → ∞. The metric, the dilaton and the fluxes are completely determined in terms of two harmonic functions A_i on Σ_2. These functions can admit suitable singularities on the boundary of the strip. Those are interpreted as the singularities coming from D5 and NS5 branes, like those presented in example (2.6). We illustrate this in figure (2.11).
Backgrounds dual to 3d N = 4 linear quiver theories can be obtained by picking suitable harmonic functions on Σ_2: specifically, we can make a choice of harmonic functions such that I shrinks to zero as y → ±∞. The resulting topology is AdS_4 × B_6, where B_6 ≈ S^5 × I is the six-dimensional ball; this is illustrated in (2.12). Getting holographic duals of 3d N = 4 compact models is more subtle and involves a quotient procedure. Harmonic functions on Σ_2 can be chosen to have an infinite number of singularities, but in such a way as to be periodic along the infinite direction with period T. The whole solution is invariant under this translation, being completely determined by the A_i. At this stage, we can perform a quotient with respect to this "T-symmetry", ending with a configuration where the points (x, y) and (x, y + T) of the Riemann surface are identified; we end up with a surface with the topology of the annulus; see figure (2.13).
J-folds
A more general quotient procedure can, in fact, be implemented. In particular, one may introduce an SL(2, Z) duality-twisted boundary condition [16,17] upon identifying the two ends of the aforementioned Riemann surface. This can be done as follows. As before, the starting point is a choice of harmonic functions A_i that completely fixes the physical fields of the solution. For instance, let us focus on the axio-dilaton τ = C_0 + i e^{−2φ}, where C_0 is the potential of the one-form flux F_1 and φ is the dilaton. As is well known, Type IIB supergravity admits a non-trivial action of SL(2, Z), generating orbits of equivalent solutions; the axio-dilaton is not invariant under this SL(2, Z) action. We can imagine picking harmonic functions A_i such that

τ(y + T) = M · τ(y) ,   (2.14)

where M represents the action of SL(2, Z) on the axio-dilaton, and we require that similar relations hold for all other fluxes, with an appropriate element of SL(2, Z) acting on them. If such a choice can be made, we can quotient with respect to the joint action of SL(2, Z) and the translation by T along the non-compact direction y. The points (x, y + T) and (x, y) are again identified; the Riemann surface has a cut along (x, T), passing through which the fields undergo an SL(2, Z) transformation. We end up with a Riemann surface with the topology of the annulus and a non-trivial monodromy under SL(2, Z). This is illustrated in (2.15).
It turns out that such a quotient is related to a particular choice of SL(2, Z) element. Let

S = [ 0  −1 ]        T = [ 1  1 ]
    [ 1   0 ] ,          [ 0  1 ] ,

satisfying S^2 = −1 and (ST)^3 = 1, be the generators of SL(2, Z). Then the aforementioned quotient can be performed for every element of SL(2, Z) of the form J_k, labelled by an integer k. This kind of solution was studied in the context of abelian theories in [16] and is referred to as the J-fold in [17]. These solutions are often regarded as non-geometrical, in the sense that we performed a quotient with respect to a symmetry of the theory that does not descend from isometries of the metric. The quotient also admits a realisation at the level of brane configurations: it corresponds to a five-dimensional surface implementing the aforementioned monodromy under the SL(2, Z) action. As we have seen, Σ_2 has the topology of the annulus, thus corresponding to a circular brane configuration with an insertion of J-folds; an example of a brane configuration with a J-fold is given in (2.18). The insertion of the J_k-fold in such a brane system can be viewed as introducing a 3d interface, with a non-trivial SL(2, Z) action J_k, into the 4d N = 4 super-Yang-Mills theory living on the D3-branes on the circle. The theory on such a 3d interface was studied in [6, sec. 8]. This is, in fact, the T(U(N)) theory with a Chern-Simons level k for one of the flavour U(N) symmetries, whereas the other U(N) flavour symmetry has Chern-Simons level zero. One can then couple this 3d theory to the theory on the D3-branes on a circle. The U(N)_k and the U(N)_0 flavour symmetries are then coupled to the U(N)_L and U(N)_R gauge fields on the left and on the right of the interface, respectively. For instance, the three-dimensional quiver theory associated to the brane system (2.18) is the following, where N_k and N_0 denote gauge groups U(N) with Chern-Simons levels k and 0, respectively. We emphasise that there is a mixed CS term with level −N between the two gauge groups. Due to the presence of the T(U(N)) theory as a link, this is not a conventional Lagrangian theory, because only one U(N) symmetry is manifest in the Lagrangian description of the T(U(N)) theory, whereas the other U(N) symmetry emerges in the infrared.
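For concreteness, the defining relations of the generators and the fractional-linear action on the axio-dilaton can be checked in a few lines; the snippet below is purely illustrative and uses the standard matrix conventions:

import numpy as np

# Standard generators of SL(2, Z)
S = np.array([[0, -1], [1, 0]])
T = np.array([[1, 1], [0, 1]])

assert (S @ S == -np.eye(2)).all()  # S^2 = -1
# (ST)^3 = -1 as a matrix; since -1 acts trivially on tau, (ST)^3 = 1
# as a transformation of the axio-dilaton.
assert (np.linalg.matrix_power(S @ T, 3) == -np.eye(2)).all()

def act(M, tau):
    """Fractional-linear action tau -> (a tau + b)/(c tau + d)."""
    (a, b), (c, d) = M
    return (a * tau + b) / (c * tau + d)

tau = 0.3 + 1.2j            # sample axio-dilaton value
print(act(S, tau))          # S: tau -> -1/tau
print(act(T, tau))          # T: tau -> tau + 1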
S-flips
Another type of quotient, similar to the J-fold, is possible. In this case we select the SL(2, Z) element implementing the monodromy to be S. However, in order to have the desired symmetry of the supergravity solution, we have to perform an exchange of the coordinates corresponding to the two S^2 in AdS_4 × S^2 × S^2 × Σ_2, together with a reflection of the x coordinate, which is identified at the S-interface in an antipodal way, as depicted in (2.20). The Riemann surface now has the topology of the Möbius strip. This type of solution is referred to as an S-flip in [17]. Similarly to the J-fold, the S-flip has an avatar at the level of circular brane configurations, as a five-dimensional surface: in passing through it, the configuration undergoes an SL(2, Z) transformation and a rotation of coordinates such that (x^{3,4,5}, x^{7,8,9}) → (x^{7,8,9}, −x^{3,4,5}). When an S-flip is inserted into a brane system, the corresponding quiver diagram can be obtained in the same way as that with the J-fold, except that the Chern-Simons level is set to zero. An example of this type of configuration is depicted in (3.1).
(p, q) fivebranes
Let us now consider (p, q) fivebranes [33,34], where (1, 0) denotes an NS5 brane and (0, 1) denotes a D5 brane. For a given ordered pair (p, q), we can write

(p, q) = J_{k_r} · · · J_{k_2} J_{k_1} (1, 0)

for some k_1, k_2, . . . , k_r. Thus, any (p, q) brane is related to an NS5 brane by an SL(2, Z) transformation. Using this realisation, we can convert a (p, q) brane into an equivalent configuration involving J-folds. From the perspective of the quiver diagram, each J_k gives rise to a T(U(N)) link with a Chern-Simons level k for the U(N) group on the left, whereas each J^{−1}_{−k} gives rise to a T(U(N)) link with a Chern-Simons level k for the U(N) group on the right. In particular, the quiver theory corresponding to the SL(2, Z)-equivalent brane systems (2.23), consisting of N D3-branes with a (p, q) brane and NS5 branes, can be read off in this way.
Models with zero Chern-Simons levels
In this section, we consider theories with zero Chern-Simons (CS) levels and with certain links between gauge nodes in the quiver being T (U (N )). From the brane perspective, such a theory arises from the Hanany-Witten brane configuration [4], namely a system of D3, NS5 and D5 branes that preserves eight supercharges, with an insertion of S-flips [17]. The presence of an S-flip gives rise to the aforementioned T (U (N )) link in the quiver. The moduli space of such quiver theories is studied below. The main result can be summarised as follows.
We find that these theories have two branches of the moduli space, namely the Higgs and the Coulomb branches. Let us first discuss the Higgs branch. We propose that this is given by the hyperKähler quotient of a product of each component in the quiver by the gauge symmetry. By each component, we mean a bifundamental hypermultiplet, a fundamental hypermultiplet and a T(U(N)) link that connects two U(N) groups together. The former two can be treated in the usual way as in a Lagrangian theory, whereas each T(U(N)) link contributes two copies of the closure of the maximal nilpotent orbit of SU(N), denoted by N_{SU(N)}. The reason for the latter is two-fold: (1) the Higgs and the Coulomb branches of T(U(N)) are both isomorphic to N_{SU(N)}, and (2) in order to realise the two U(N) groups connected by T(U(N)), we need two copies of SU(N) subgroups, one arising from the Higgs branch and the other arising from the Coulomb branch of T(U(N)).
The Coulomb branch is similar to the usual 3d N = 4 gauge theories, but with the following important remark. We propose that the scalars in the vector multiplets of any two gauge nodes that are connected by a T (U (N )) link are frozen and do not contribute to the Coulomb branch. The other gauge nodes in the quivers still give rise to vector multiplets that contribute to the Coulomb branch. From the brane perspective, this proposal implies that the D3-brane segment between two NS5-branes that is stretched through the S-flip cannot move along the NS5-brane directions (i.e. the Coulomb branch directions).
We check that the descriptions of the Higgs and the Coulomb branches mentioned above are consistent with S-duality and mirror symmetry. Given a brane system, say of theory A, we can obtain a brane system of the mirror theory, say theory B, using S-duality. We find that the moduli spaces of theories A and B are related by mirror symmetry [1,4] in the following sense: the Higgs branch (resp. Coulomb branch) of theory A computed using the above proposal is in agreement with the Coulomb branch (resp. Higgs branch) of theory B.
Below we provide examples to demonstrate the above discussion.
Example 1: A flavoured affine A_1 quiver
Let us consider the following brane set-up and the following theory.
where, throughout this section, we denote a gauge group U (N ) with zero CS level by a circular node with the label N . The flavour symmetry U (N f ) is denoted by a square node with the label N f .
The mirror theory can be derived by applying S-duality to the brane system.

The Higgs branches. We claim that the Higgs branch of (3.1) is given by the quotient (3.3), where N_{SU(N)} denotes the closure of the maximal nilpotent orbit of SU(N); the quaternionic dimension of this space is N. Similarly, we claim that the Higgs branch of (3.2) is given by the analogous quotient, and the dimension of this space is zero.
The Coulomb branches
Since mirror symmetry identifies the Coulomb branch C_(3.1) of (3.1) with the Higgs branch H_(3.2) of (3.2), it follows that dim_H C_(3.1) = dim_H H_(3.2) = 0, and hence C_(3.1) is trivial. We see that even though the theory (3.1) has gauge group U(N) × U(N), its Coulomb branch is trivial. This is consistent with our proposal: the scalars in the vector multiplets of the U(N) × U(N) gauge group in (3.1) are frozen to a particular value, because the two factors are linked by T(U(N)). From the brane perspective, this means that the D3-branes do not move along the direction of the S-flip, but get stuck at a particular position in the x^{3,4,5} directions. On the other hand, since the Higgs branch of (3.1) is non-trivial, the D3-branes that align along the direction of the S-fold and the NS5-branes can move along the x^{7,8,9} directions. By the same token, we see that even though (3.2) has gauge group U(N) × U(N) × U(N), its Coulomb branch has dimension N, rather than 3N (which is the sum of the ranks of the gauge groups). This is again consistent with our proposal: the scalars of the two U(N) gauge groups connected by T(U(N)) are frozen, but those of the remaining U(N) gauge group can acquire VEVs. The latter gauge group has rank N and contributes N to dim_H C_(3.2). From the brane perspective, the D3-brane segment between two NS5 branes that stretches across the S-flip gets stuck at a particular position along the x^{3,4,5} directions, whereas the segment that does not stretch across the S-flip can move along them.
The Hilbert series
To confirm these statements, we compute the Hilbert series of the Higgs branch of (3.1) using the description (3.3), as in (3.9), where the U(N) Haar measure is given by (3.10) and the Hilbert series of the closure of the maximal nilpotent orbit of SU(N) is given in (3.14) (see [35, (3.4)] and [36]). Here the plethystic exponential (PE) of a multivariate function f(x_1, x_2, . . . , x_n) such that f(0, 0, . . . , 0) = 0 is defined as

PE[f(x_1, x_2, . . . , x_n)] = exp( Σ_{k=1}^{∞} (1/k) f(x_1^k, x_2^k, . . . , x_n^k) ) .

The Higgs branch of (3.1) thus has an SU(2) isometry; this is manifest as a flavour symmetry in the quiver. In fact, this Hilbert series is equal to that of the Coulomb branch of the U(N) gauge theory with 2N flavours (also known as the T_{[N,N]}(SU(2N)) theory in the notation of [6]).
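The plethystic exponential is straightforward to implement symbolically; the following sketch (ours, single-variable for brevity) truncates the defining sum at the working order:

import sympy as sp

t = sp.symbols('t')

def PE(f, var, order):
    """Plethystic exponential PE[f] = exp(sum_{k>=1} f(var^k)/k),
    truncated at the given order; assumes f(0) = 0."""
    g = sum(f.subs(var, var**k) / k for k in range(1, order + 1))
    return sp.series(sp.exp(g), var, 0, order).removeO()

# PE[t] = 1/(1 - t): the generating function of symmetric powers
print(sp.expand(PE(t, t, 8)))          # 1 + t + t^2 + ... + t^7
# PE[2 t^2] = 1/(1 - t^2)^2
print(sp.expand(PE(2 * t**2, t, 8)))   # 1 + 2 t^2 + 3 t^4 + 4 t^6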
Example 2: Another flavoured affine A_1 quiver
Let us now consider the theory (3.16). The mirror theory can be obtained by applying S-duality to the brane system (3.17). We claim that the Higgs branch of (3.16) is given by the following quotient, whose quaternionic dimension is N. Similarly, the Higgs branch of (3.17) is given by the analogous quotient, and the dimension of this space is 0. Since mirror symmetry identifies the Higgs branch of (3.17) with the Coulomb branch of (3.16), this means that the Coulomb branch of theory (3.16) is trivial. This supports our proposal that the scalars in the vector multiplets of the gauge groups connected by T(U(N)) are frozen and do not contribute to the Coulomb branch.
Similarly to the previous example, the Higgs branch Hilbert series of (3.16) can be computed, where x and y are the two U(1) flavour fugacities. The Higgs branch of (3.16) thus has a U(1) isometry. This Hilbert series, in fact, is equal to the Coulomb branch Hilbert series of the U(N) gauge theory with 2N + 1 flavours (also known as the T_{[N+1,N]}(SU(2N+1)) theory in the notation of [6]). This again confirms the statement that the scalars in the vector multiplets of the gauge groups connected by the red line T(U(N)) are frozen and do not contribute to the Coulomb branch dimension. This statement can be clearly seen in quiver (3.17): since the two U(N) gauge groups connected by the red line do not contribute to the Coulomb branch, we can effectively think of them as flavour symmetries, and so the U(N) gauge group on the lower right-hand corner effectively has 2N + 1 flavours transforming under it. In terms of branes, the segment of the D3-branes between two NS5 branes that is cut by the S-flip does not have any motion along the x^{3,4,5} directions, whereas the other D3-brane segment still moves along those directions.
Example 3: Quivers with a T (U (N )) loop
We consider the following brane set-up and the following corresponding theory.
The mirror theory can be obtained by applying S-duality to the above system, giving (3.24). The Higgs branch of (3.23) is given by the following description, the quaternionic dimension of which is given in (3.26); observe that for n = 1, the Higgs branch is trivial for any N. On the other hand, the Higgs branch of (3.24) is given by the analogous description, where we quotient by U(N)^{n+1}/U(1)^N, because at a generic point on the Higgs branch the gauge symmetry U(N)^{n+1} is not completely broken but is broken to U(1)^N (see e.g. [37]). The dimension of this space is actually zero. From mirror symmetry, C_(3.23) is identified with H_(3.24), and so C_(3.23) is trivial as well. This is consistent with our proposal, because (3.23) has a single circular node that is connected by the T(U(N)) link, and so it does not contribute to the Coulomb branch dynamics.
On the other hand, it can be checked using the Hilbert series that the Higgs branch H_(3.23) is in fact isomorphic to the Coulomb branch of the quiver (3.30). This quiver can be derived from (3.24) using our proposal: since the vector multiplets of the two gauge nodes linked by T(U(N)) in (3.24) are frozen, we can take them to be flavour nodes, and quiver (3.30) thus follows. Amusingly, using branes and mirror symmetry (see [38, (2.5)]), we also know that these spaces admit another description, given in (3.31). In the special case of n = 1, the quiver on the right of (3.31) is the star-shaped quiver that is mirror [39] to the S^1 compactification of a class S theory of type A_{N−1} associated with a sphere with two maximal and one minimal puncture. The latter is actually a theory of free hypermultiplets. Thus, the spaces in (3.31) are zero-dimensional; this is in agreement with (3.26).
Abelian theories with non-zero Chern-Simons levels
In this section, we focus on field theories that arise from Hanany-Witten brane configurations with a single D3-brane on S^1 and with an inclusion of J-folds. These can be represented as abelian quiver theories with non-zero Chern-Simons (CS) levels and with T(U(1)) links connecting quiver nodes. The presence of a T(U(1)) link between two quiver nodes gives rise to a mixed CS level between them. In fact, the systems consisting of only a D3-brane on the circle and J-folds (but with no D5 and no NS5 brane) were studied in [16]. Such systems give rise to pure CS theories. In order to make the moduli space more interesting, we may also include NS5 and D5 branes in the system. These introduce bifundamental and fundamental hypermultiplets into the quiver theory. The moduli space of the theories in this section is more sophisticated to analyse than that of section 3. This is because the vacuum equations may admit many sets of non-trivial solutions, in which case the moduli space has many branches. Below we systematically analyse such branches and provide necessary conditions on the CS levels in order to have a non-trivial moduli space. As a warm-up, we first analyse linear quivers without a T(U(1)) link in section 4.1. This also serves as a generalisation of the analysis in [40] and a complement to the analysis of [25], in that we provide direct analyses of the moduli space from the vacuum equations and compute the Hilbert series. Subsequently, in section 4.2, we introduce a J-fold into the brane system. Finally, in section 4.3, we add flavours into the quiver. In the latter, under some conditions, the fundamental hypermultiplets may contribute non-trivially to the moduli space. The analysis for theories with more than one J-fold is more technical and we postpone the discussion to Appendix A.
Warm-up: Theories without a J-fold
Before adding a J-fold to the brane systems, it is instructive to study systematically the moduli space of linear quivers without fundamental matter.
This is made up of n U(1) gauge nodes with Chern-Simons levels k_i, i = 1, . . . , n. The i-th node is connected to the (i − 1)-th one by a hypermultiplet (A_i, Ã_i). In N = 2 language, the quiver appears as (4.1), with the corresponding superpotential. Due to the N = 3 supersymmetry of the theory, we are allowed to collect the F-terms and the D-terms at the same time, so that we only need to solve a single set of equations. Calling μ_i the bilinear built from the i-th bifundamental hypermultiplet, the whole set of F-terms and D-terms reads (4.5). Moreover, the R-charge and the gauge charges of the monopole operators with flux (m_1, . . . , m_n) follow, where m_i is the magnetic flux of the i-th gauge group.
Cutting the quiver
It is convenient to study the solutions to the vacuum equations according to the vanishing of the VEVs of the bifundamental hypermultiplets. In particular, the vacuum equations may admit solutions in which A_{l_i} = Ã_{l_i} = 0 for i = 1, . . . , m. In this case, the quiver diagram in question is naturally divided into sub-quivers, and we shall henceforth say that the quiver is "cut" at the positions l_1, l_2, · · · , l_m. If the vacuum equations do not admit such a solution, we say that the quiver cannot be cut. As we shall see in explicit examples below, the vacuum equations of certain quivers may admit more than one option of cuts, in which case each option gives rise to a branch of the moduli space.
In order to determine whether we need to cut the quiver, we can proceed as follows. Suppose that the quiver cannot be cut, i.e. all A_i and Ã_i are non-zero. This implies that Φ_i = Φ for all i, for some common value Φ. If the system of equations (4.5) does not admit a solution in which μ_j ≠ 0 for all j, then our initial assumption that the quiver cannot be cut is contradicted, and we need to cut the quiver somewhere. However, it should be emphasised that if the aforementioned system of equations has a solution in which μ_j ≠ 0 for all j, what we can infer is that there is a branch of the moduli space corresponding to no cut; there may, however, exist another branch of the moduli space corresponding to a cut in the quiver.
Let us now cut the quiver in question at two positions, namely l and l + m. This divides the original quiver into three sub-quivers, which we will denote as "left", collecting the first l nodes, "central", collecting the nodes l + 1, . . . , l + m, and finally "right", encoding the last n − l − m nodes, as depicted below.
Below we derive necessary conditions for each sub-quiver to contribute non-trivially to the moduli space. Let us consider the left sub-quiver. We fix A_l = Ã_l = 0 and assume that A_i and Ã_i are non-vanishing for all i = 1, 2, . . . , l − 1. Then (4.5) implies that Φ_i = Φ = (ϕ, σ) for all i = 1, 2, . . . , l. The sum of the first l equations in (4.5) provides the constraint (Σ_{i=1}^{l} k_i) ϕ = 0. Since ϕ ≠ 0 (otherwise A_{l−1} Ã_{l−1} would be zero, contradicting our assumption), we see that a necessary condition for the left sub-quiver to contribute non-trivially to the moduli space of vacua is Σ_{i=1}^{l} k_i = 0. A similar argument also applies to the right sub-quiver: we fix A_{l+m} = Ã_{l+m} = 0 and assume that A_i and Ã_i are non-vanishing for all i = l + m + 1, . . . , n; a necessary condition for this sub-quiver to contribute non-trivially to the moduli space is Σ_{i=l+m+1}^{n} k_i = 0. If the central sub-quiver contains a sub-quiver whose CS levels sum to zero, we may cut the former further into smaller sub-quivers. Otherwise, a necessary condition for the central sub-quiver to contribute non-trivially to the moduli space is Σ_{i=l+1}^{l+m} k_i = 0; this again follows from the sum of the (l + 1)-th to the (l + m)-th equations in (4.5), with μ_l = μ_{l+m} = 0. Note that there can be many ways of cutting a given quiver into sub-quivers; a simple scan is sketched below. Consider the gauge theory (4.13) as an example. There are two ways of cutting such a quiver in order to obtain a non-trivial moduli space, denoted I and II in (4.14). In case I, both the left and the right sub-quivers contribute non-trivially to the moduli space, whereas in case II only the central sub-quiver contributes non-trivially. We shall refer to the vacuum spaces corresponding to these two options as branches of the moduli space of (4.13). We shall go over the detailed computation of the moduli space later.
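The zero-sum conditions above can be scanned mechanically. The sketch below is our illustration only: the helper name and the sample levels are ours, chosen to mimic the two-branch structure of (4.13)/(4.14).

from itertools import combinations

def analyse_cuts(levels):
    """For each choice of cut positions of a linear quiver with U(1)_{k_i}
    nodes, list the consecutive sub-quivers and flag those whose CS levels
    sum to zero (the necessary condition for a non-trivial contribution)."""
    n = len(levels)
    report = {}
    for r in range(1, n):
        for cuts in combinations(range(1, n), r):
            bounds = (0,) + cuts + (n,)
            segments = [tuple(levels[a:b]) for a, b in zip(bounds, bounds[1:])]
            report[cuts] = [(seg, sum(seg) == 0) for seg in segments]
    return report

# Illustrative CS levels: cut (2,) gives two contributing sub-quivers
# (case I analogue); cut (1, 3) gives a contributing central one (case II).
for cuts, segs in analyse_cuts([2, -2, 2, -2]).items():
    if any(ok for _, ok in segs):
        print(cuts, segs)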
The Hilbert series
Let us consider quiver (4.8) and assume that the left, central and right sub-quivers cannot be cut further. Using (4.5), we see that σ_1 = σ_2 = . . . = σ_l, σ_{l+1} = σ_{l+2} = . . . = σ_{l+m}, and σ_{l+m+1} = . . . = σ_n. In other words, the magnetic fluxes for the monopole operators of all nodes in each sub-quiver are equal. The R-charge of the monopole operator with the flux (m_1, . . . , m_n) is therefore determined by these fluxes. The Hilbert series can be computed using the same procedure as presented in [40, sec. 4-sec. 6]. The idea is to count the monopole operators dressed by appropriate chiral fields in the theory such that the combination is gauge invariant. The appropriate combinations of chiral fields that are used to dress the monopole operators are counted by the baryonic generating function [46].
Let g_L(t, B), g_C(t, B) and g_R(t, B) be the baryonic generating functions for the left, central and right sub-quivers, respectively. Then the Hilbert series for the moduli space of quiver (4.8) is assembled from these, where z_{L,C,R} are fugacities for the topological symmetries: the first line of the resulting expression is the contribution from the monopole operators, and the second and third lines are the contributions from the appropriate combinations of chiral fields that dress them.
Example 1: Quiver (4.13). The two non-trivial cuts depicted in (4.14) correspond to two non-trivial branches of the moduli space.
Branch I. This corresponds to the top diagram in (4.14), where the VEVs of A_2 and Ã_2 are zero and the VEVs of the other bifundamentals are non-zero. The cut splits the quiver (4.13) into two sub-quivers, each of which can be identified with the half-ABJM theory (4.18), where g_{ABJM/2}(t; B) is the baryonic generating function of the half-ABJM theory (4.19) and the character of the adjoint representation [1, 1] of SU(3) appears in the result. The last line indicates that this branch is isomorphic to the reduced moduli space of one SU(3) instanton on C^2 [41], or equivalently the closure of the minimal nilpotent orbit of SU(3). The eight generators can be written in terms of a traceless 3 × 3 matrix M, where ϕ_L = ϕ_1 = ϕ_2 and ϕ_R = ϕ_3 = ϕ_4. The Hilbert series indicates that the matrix M satisfies the condition [42] M^2 = 0 (which in particular forces rank(M) ≤ 1).

Branch II. This corresponds to the bottom diagram in (4.14), where the VEVs of A_1, Ã_1, A_3 and Ã_3 are zero and the VEVs of the other bifundamentals are non-zero. In this case, only the central sub-quiver contributes to the computation of the Hilbert series. The magnetic fluxes associated with the four nodes of the quiver, from left to right, can be written as (0, m, m, 0), with m ∈ Z, where the zeros follow from the D-term equations. The Hilbert series for this branch of the moduli space then indicates that this branch is isomorphic to C^2/Z_3. The generators of this moduli space are V_{(0,1,1,0)}, V_{(0,−1,−1,0)} and ϕ ≡ ϕ_2 = ϕ_3, satisfying the relation V_{(0,1,1,0)} V_{(0,−1,−1,0)} = ϕ^3 (4.24). Branches I and II of (4.13) are indeed the Higgs and the Coulomb branches of the 3d N = 4 U(1) gauge theory with 3 flavours, as pointed out in [7, sec. 4.2]. The brane system of the former can be obtained by applying the SL(2, Z) action T T to the brane system of the latter.
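As an independent check of Branch I (ours, for illustration), the Hilbert series of the closure of the minimal nilpotent orbit of SU(3) can be assembled from representation theory: the coordinate ring is the sum of the irreps [n, n] at order t^{2n}, with dim[n, n] = (n + 1)^3, and this matches the familiar rational form:

import sympy as sp

t = sp.symbols('t')

# Coordinate ring of the closure of the minimal nilpotent orbit of SU(3):
# sum over n of dim[n, n] * t^(2n), with dim[n, n] = (n + 1)^3.
order = 24
lhs = sum((n + 1)**3 * t**(2 * n) for n in range(order // 2 + 1))
rhs = (1 + 4 * t**2 + t**4) / (1 - t**2)**4   # known rational form

diff = sp.series(sp.expand(lhs) - rhs, t, 0, order)
print(sp.expand(diff.removeO()))  # -> 0: the two expressions agree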
Example 2: No cut in the quiver (4.1). We assume that A_i and Ã_i are non-vanishing for all i = 1, . . . , n, i.e. there is no cut in the quiver. In this case, (4.5) implies that Φ_1 = Φ_2 = · · · = Φ_n ≡ Φ = (ϕ, σ). As a consequence, the magnetic fluxes are constrained to be all equal, m_1 = m_2 = . . . = m. The equations (4.5), instead, simply constrain the bilinears μ_i in terms of ϕ. Summing over the n equations, we obtain the condition (4.26); let us assume (4.27), namely Σ_{i=1}^{n} k_i = 0, in the subsequent discussion. Defining the partial sums K_i = Σ_{j=1}^{i} k_j, if K_i ≥ 0 for all i = 1, . . . , n − 1, we can form the gauge-invariant dressed monopole operators V_± of (4.29); if K_j < 0 for some j, we replace A_j^{K_j} by Ã_j^{−K_j}. In any case, the R-charges of the above dressed monopole operators are (1/2) Σ_{i=1}^{n−1} |K_i|. The chiral ring is generated by the three operators {ϕ, V_+, V_−}, satisfying the relation V_+ V_− = ϕ^K with K = Σ_{i=1}^{n−1} |K_i|. Thus, the variety associated to this branch is C^2/Z_K. We can obtain the same result using the Hilbert series. Let us call {q_1, q_2, . . . , q_n} the fugacities associated to the n gauge nodes and t the fugacity associated to the R-symmetry. The ingredients entering the Hilbert series are:
• The n − 1 bifundamental hypermultiplets, which contribute the corresponding PE factors.
• A contribution from ϕ, which gives PE[t^2].
• The F-terms (4.5), which impose further (n − 1) constraints on the former, after taking into account the condition (4.26), which is the overall sum of (4.5). These contribute PE[−(n − 1)t^2] to the Hilbert series.
The baryonic generating function is thus given by a contour integral over the gauge fugacities. The integrals involved are known, and the baryonic generating function simplifies to (4.39). Recall that the charge of the monopole operator under the U(1)_i gauge symmetry is q_i[V_{(m,...,m)}] = −k_i m. As a consequence, the Hilbert series reads as a sum over the flux m, where the baryonic charge B_n appearing in (4.39) is m Σ_{i=1}^{n} k_i = 0, and hence the Kronecker delta gives 1. Here z is the fugacity for the topological symmetry. We obtain exactly the Hilbert series of C^2/Z_K.
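The resulting geometry can be cross-checked in two independent ways: the rational form of the Hilbert series of C^2/Z_K follows from the generators {ϕ, V_+, V_−} and the relation V_+ V_− = ϕ^K, while the Molien sum over the Z_K group elements gives the same answer. The sketch below (ours) compares the two numerically:

import cmath

def hs_rational(K, tval):
    """Hilbert series of C^2/Z_K from generators phi (t^2), V_± (t^K each)
    and the single relation V_+ V_- = phi^K (degree t^{2K})."""
    return (1 - tval**(2 * K)) / ((1 - tval**2) * (1 - tval**K)**2)

def hs_molien(K, tval):
    """Molien sum over Z_K acting as (z1, z2) -> (w z1, w^{-1} z2)."""
    w = cmath.exp(2j * cmath.pi / K)
    return sum(1 / ((1 - tval * w**j) * (1 - tval * w**(-j)))
               for j in range(K)) / K

K, tval = 5, 0.1
print(hs_rational(K, tval))        # the two values agree
print(hs_molien(K, tval).real)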
Example. Let us consider the following quiver.
This quiver has two non-trivial branches. One corresponds to no cut at all, and the other corresponds to cuts in the first and the third positions. As discussed above, the former branch is isomorphic to C^2/Z_4. The second branch is the same as that discussed around (4.23) and (4.24); it is isomorphic to C^2/Z_3.
Theories with one J-fold
In this section we present the analysis of the moduli space of a class of theories dual to brane configurations with one J-fold and a collection of (1, k) branes. The associated quiver, rewritten in the 3d N = 2 notation, comes with a superpotential in which we emphasise the contribution from the mixed CS term due to the T(U(1)) theory in blue. The resulting vacuum equations are collected in (4.45)-(4.47), and the R-charge of V_{(m_1,...,m_n)} follows from them.
Cutting the quiver
The process of cutting the quiver works similarly to the previous subsection. However, since there are non-trivial contributions from the T(U(1)) theory, some of the conditions must be modified.
Cutting at one point
Let us consider the case in which A_l = Ã_l = 0 and the other bifundamental hypermultiplets are non-zero. In other words, we cut the quiver precisely at the one point where A_l and Ã_l are located. In this case, equations (4.45) imply that the vector multiplet scalars take a common value Φ in the first sub-quiver and a common value Φ̃ in the second, and the system (4.46) simplifies accordingly. The sum of the first l equations and the sum of the remaining n − l ones provide two constraints. Since Φ and Φ̃ are non-zero (otherwise, this would violate the assumption that A_j and Ã_j are non-zero for j ≠ l), we arrive at a necessary condition for the existence of a non-trivial solution of the vacuum equations; since all Chern-Simons levels are integers, it is equivalent to Σ_{i=1}^{l} k_i = Σ_{i=l+1}^{n} k_i = ±1. The system of equations (4.51) is now simply solved by Φ̃ = ±Φ. Let us analyse the two cases separately.

The case Φ̃ = Φ. This moduli space is parametrised by ϕ and the two basic dressed monopole operators. Let us define for convenience the partial sums K_i of the CS levels; if K_j < 0 for some j, we replace A_j^{K_j} by Ã_j^{−K_j}. In any case, the R-charges of the above dressed monopole operators are (1/2) Σ_i |K_i|, where the bare monopole operators have R-charge R[V_{±(1,1,...,1)}] = 0, and we define K = Σ_i |K_i|. Thus, V_± satisfy V_+ V_− = ϕ^K, and this branch of the moduli space is therefore C^2/Z_K. Let g_L(t, B) and g_R(t, B) be the baryonic generating functions for the left sub-quiver (containing nodes 1, . . . , l) and the right sub-quiver (containing nodes l + 1, . . . , n), respectively. The Hilbert series for this case is then given by a flux sum over these generating functions, where z is a fugacity for the topological symmetry. Using the expressions for g_L and g_R given by (4.39), we obtain (4.62). The Hilbert series in the first line of the second equality is indeed that of C^2/Z_K.

The case Φ̃ = −Φ. Let us define for convenience the shifted levels k′ = (k_1 + 1, k_2, . . . , k_{n−1}, k_n + 1). For K_i > 0 for i = 1, . . . , l − 1 and K_j < 0 for j = l + 1, . . . , n − 1, the basic dressed monopole operators can be written down explicitly, where it should be noted that in this case Σ_{i=1}^{l} k_i = Σ_{i=l+1}^{n} k_i = −1. Similarly to before, V_± satisfy the analogous relation with K replaced by K + 2, and this branch of the moduli space is therefore C^2/Z_{K+2}. The Hilbert series for this case takes the same form, with z a fugacity for the topological symmetry. Using the expressions for g_L and g_R given by (4.39), we obtain (4.71). The Hilbert series in the first line of the third equality is indeed that of C^2/Z_{K+2}.
Cutting at two points
Let us consider the case in which A_l = Ã_l = A_m = Ã_m = 0 (with m > l) and the other bifundamental hypermultiplets are non-zero. In other words, we cut the quiver precisely at the two points where A_l, Ã_l and A_m, Ã_m are located. This naturally divides the quiver in question into three sub-quivers, which we shall refer to as left (L), central (C) and right (R). The central sub-quiver is the same as that considered in section 4.1. In this case, equations (4.45) imply (4.72), and the system (4.46) simplifies accordingly. The sums of the equations in the first, the second and the third lines give three constraints. Since Φ_L, Φ_C and Φ_R are non-vanishing (otherwise, this would violate the assumption that A_j and Ã_j are non-zero for j ≠ l, m), a necessary condition for the existence of a non-trivial solution of the vacuum equations is (4.75). Let g_L(t, B), g_C(t, B) and g_R(t, B) be the baryonic generating functions for the left, central and right sub-quivers, respectively. Then the Hilbert series, corresponding to the + or − sign in (4.75), follows, where z_{L,C,R} are fugacities for the topological symmetries.
Cutting at more than two points
The above discussion can easily be generalised to the case of cutting the quiver at more than two points. For the moduli space to be non-trivial, the sums of the CS levels in the two sub-quivers that are connected by the T(U(1)) link must be ±1, and the sum of the CS levels in each of the other sub-quivers must be zero.
No cutting at all
Assume that A_i and Ã_i are non-zero for all i. In this case, a necessary condition for a non-trivial moduli space is (4.77). Since the R-charges of V_{±(1,...,1)} are zero, the R-charges of V_± are (1/2) Σ_{i=1}^{n−1} |K_i|. The moduli space is thus generated by the operators {V_+, V_−, ϕ}, subject to a quantum relation of the form V_+ V_− = ϕ^K; this is the algebraic definition of C^2/Z_K. Example. Let us consider the following quiver. It is not possible to introduce a cut in this quiver. As a result, from (4.77), it is necessary that k_1 + k_2 = 2 for this theory to have a non-trivial moduli space. Let us assume this.
Therefore the moduli space of this theory is C 2 /Z |k 1 −1| .
Adding flavours
Let us now add fundamental flavours to the previous discussion.
Suppose that there are n gauge groups in total. In the N = 2 notation, this quiver can be written with the corresponding superpotential, and the vacuum equations read (4.85) (also with A ↔ Ã), (4.86) (also with Q ↔ Q̃), and (4.87), where we define the flavour bilinears appearing therein. The R-charge of the monopole operators V_m with flux m = (m_1, . . . , m_n) follows. Equation (4.86) admits two non-trivial possibilities. If we set Q_i = Q̃_i = 0, the analysis is similar to the linear quiver without flavours. We will instead focus on Φ_i = 0. The remaining constraints in (4.85) and (4.86) are thus as follows (also with A ↔ Ã, Q ↔ Q̃), and each column of this set of equations admits two solutions. The case {A_{i−1} = 0, Q_{i−1} = 0} obviously induces a cut in the quiver and sets the adjacent fundamental matter to zero; the same holds for {A_i = 0, Q_{i+1} = 0}. Let us focus on Φ_{i−1} = Φ_{i+1} = 0. We then have further vacuum equations, and again the solutions that do not induce a cut are Φ_{i+2} = Φ_{i−2} = 0, and so on. The above procedure divides the initial quiver into "Higgs" and "Coulomb" sub-quivers, defined as follows: in the Coulomb ones, the fundamental matter is set to zero, while in the Higgs ones, all the vector multiplet scalars are set to zero. For instance, we divide the following quiver such that the first l nodes constitute a Coulomb sub-quiver, the (l + 1)-th to the (l + m)-th nodes constitute a Higgs sub-quiver, and the (l + m + 1)-th to the n-th nodes constitute a Coulomb sub-quiver.
where the purple nodes indicate that Φ_i = 0 (with i = l + 1, . . . , l + m), and the red lines indicate that Q_j = Q̃_j = 0 (with j = 1, . . . , l, l + m + 1, . . . , n) and A_l = Ã_l = A_{l+m} = Ã_{l+m} = 0 (we shall discuss this later). For the sake of readability, in the above diagram we indicate only the CS level in each circular node and omit the rank, which is 1 for each U(1) gauge group.
Since in the Higgs sub-quiver Φ_i = 0 for all i = l + 1, . . . , l + m, the magnetic flux is set to zero for all gauge nodes in this sub-quiver. Thus, introducing a cut within the Higgs sub-quiver does not produce anything new. For simplicity, we also assume that there is no further cut in the Coulomb sub-quivers.
Moreover, a Higgs sub-quiver cannot end with a node without flavours. This can be seen as follows. Suppose, on the contrary, that we cut the quiver at the (l + m)-th position, namely set A_{l+m} = Ã_{l+m} = 0, with f_{l+m} = 0. In this case, (4.87) implies (4.95). Since we cut the quiver at the (l + m)-th position, A_{l+m} = Ã_{l+m} = 0. We also have Q_{l+m} = Q̃_{l+m} = 0, since f_{l+m} = 0, and Φ_{l+m} = 0, since we are looking at the Higgs sub-quiver. Thus, the previous condition becomes A_{l+m−1} Ã_{l+m−1} = 0, implying a cut at the (l + m − 1)-th position. This procedure must be continued until we reach a node with f_i ≠ 0.
Let us assume that f_{l+1} and f_{l+m} are non-zero. In transiting from the Coulomb sub-quiver to the Higgs sub-quiver and vice versa, we need to introduce a cut at the transition point; this is because from (4.85) we have, e.g., 0 = A_l(Φ_l − Φ_{l+1}) = A_l Φ_l, which indeed implies A_l = 0. Indeed, we need to set A_l = Ã_l = A_{l+m} = Ã_{l+m} = 0. In the Higgs sub-quiver we have one set of vacuum equations, whereas in the Coulomb sub-quivers we have another. The sums of these two sets of equations tell us that the necessary conditions for the existence of non-trivial moduli spaces of the Coulomb sub-quivers are that their CS levels sum to zero. The R-charge of the monopole operator V_m involves F_L and F_R, defined as the total numbers of flavours in the left and right Coulomb sub-quivers. The Hilbert series for the Higgs sub-quiver can be written as (4.106), where the first PE is related to the fundamental matter and the second one to the bifundamental matter; the overall factor (1 − t^2)^m is due to the m F-term constraints. Observe that the Hilbert series of this sub-quiver does not depend on the CS levels. It is also worth noting that (4.106) takes the same form as the Higgs branch Hilbert series of the 3d N = 4 T^σ_ρ(SU(N)) theory [6] for some σ and ρ [43]; for example, for m = 3 and f_{l+1} = f_{l+2} = f_{l+3} = 1, (4.106) is equal to the Higgs branch Hilbert series of T^{(3,2,1)}_{(2^2,1^2)}(SU(6)). Let us now focus on the Coulomb sub-quivers. The analysis is very similar to that described in the case without flavours, discussed earlier. We emphasise that even if all the fundamental matter is set to zero, it still contributes to the dimension of the monopole operators. For example, if there is no cut in the left and right Coulomb sub-quivers in (4.94), the baryonic generating function of each of these Coulomb sub-quivers is similar to (4.40). The total Hilbert series of (4.94) is therefore given by (4.110), and the same holds with (L ↔ R). The moduli space of quiver (4.94) is therefore a product of the Coulomb sub-quiver moduli spaces with M_Higgs, where M_Higgs denotes the moduli space of the Higgs sub-quiver, which is isomorphic to the Higgs branch moduli space of T^σ_ρ(SU(N)) for appropriate N, σ and ρ.
Adding flavours with one J-fold
Now we want to study the branches of a theory with one J-fold and fundamental matter. If all Φ_i (with i = 1, . . . , n) are set to zero, the presence of the T(U(1)) link does not affect the moduli space, and the analysis is the same as that discussed in the previous subsection. On the other hand, if all Q_i and Q̃_i are set to zero, the analysis is similar to that discussed in section 4.2; one needs to take into account the contribution from the fundamental matter to the R-charge of the monopole operator.
Example. Let us consider a simple example with a U (1) k gauge group, one T (U (1)) link and n flavours.
It is not possible to introduce a cut to this quiver. T (U (1)) is an almost empty theory; it contributes the CS level −2 to the U (1) gauge group, so effectively the CS level is k − 2.
We have the F-term equations (4.115). The vacuum equation involving the real scalar field σ in the vector multiplet is (4.116), and the D-term equation follows. If k = 2, the superpotential and the moduli space are the same as those of the 3d N = 4 U(1) gauge theory with n flavours. The F-term with respect to ϕ implies that Q̃_i Q^i = 0. The Higgs branch is generated by the mesons M^i_j = Q^i Q̃_j; this meson matrix has rank at most 1 and is subject to the matrix relation M^2 = 0, which follows from the F-term. Thus, the Higgs branch is isomorphic to the closure of the minimal nilpotent orbit of SU(n). On the other hand, the Coulomb branch of this theory is C^2/Z_n; it is generated by the monopole operators V_+ and V_−, carrying topological charges ±1 and R-charges n/2, subject to the relation V_+ V_− = ϕ^n. Note that for n = 1 and k = 2, this theory has no Higgs branch and its Coulomb branch is isomorphic to C^2.
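The rank and nilpotency properties of the meson matrix follow from elementary algebra: M = Q ⊗ Q̃ has rank at most one, and the F-term Σ_i Q̃_i Q^i = 0 gives M^2 = (Q̃ · Q) M = 0. A quick numerical check (ours, with randomly chosen VEVs obeying the F-term) is:

import numpy as np

rng = np.random.default_rng(1)
n = 4
Q = rng.standard_normal(n) + 1j * rng.standard_normal(n)
Qt = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Enforce the F-term sum_i Qt_i Q_i = 0 (a bilinear, no conjugation).
Qt = Qt - (Qt @ Q) / (Q @ Q) * Q

M = np.outer(Q, Qt)                   # meson M^i_j = Q^i Qt_j
assert np.linalg.matrix_rank(M) <= 1
assert np.allclose(M @ M, 0)          # M^2 = (Qt . Q) M = 0
print(abs(np.trace(M)))               # ~ 0: tr M = Qt . Q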
Let us now suppose that k ≠ 2. If (ϕ, σ) is non-zero, (4.115) and (4.116) imply that Q_i and Q̃_i are zero, but this is in contradiction with the D-term. Hence (ϕ, σ) = 0 and the Coulomb branch is trivial in this case. However, there is still the Higgs branch generated by M^i_j = Q^i Q̃_j. As before, this meson matrix has rank at most 1 and is subject to the matrix relation M^2 = 0 (since Q̃_i Q^i = 0). The Higgs branch is therefore isomorphic to the closure of the minimal nilpotent orbit of SU(n). Note that for n = 1 and k ≠ 2, this theory has a trivial moduli space.
The case with one cut
For simplicity, let us first focus on the case of precisely one cut. In this case we have two sub-quivers, left and right, connected by the T (U (1)) link. We have three possibilities: • Both the sub-quivers are in the Coulomb sector: this require the usual analysis as in section 4.2.
• Both the sub-quivers are in the Higgs sector: all Φ i are set to zero and the T-link does not affect the moduli space.
• One is a Higgs sub-quiver (say, the left one) and the other is a Coulomb sub-quiver (say, the right one).
The last case is the interesting one.
where the dashed circles mean that their vector multiplet scalars are zero, and the red lines mean that the corresponding hypermultiplets are set to zero. The first set of vacuum equations implies, as a consequence, that we need to introduce a cut in transiting from the Higgs sub-quiver to the Coulomb sub-quiver and vice versa. The other vacuum equations are (4.122) and (4.123), where the contribution from the T(U(1)) link is denoted in blue and the vanishing terms are denoted in grey. The sum of (4.122) gives one constraint. Moreover, a necessary condition for a non-trivial moduli space of the Coulomb sub-quiver can be determined by summing (4.123) and requiring that ϕ ≠ 0. The R-charge of the monopole operator V_m involves F_C, defined as the total number of flavours in the Coulomb sub-quiver. We can construct the gauge-invariant dressed monopole operators as in (4.130), where α = 1, . . . , l. As in the preceding subsection, if f_l = 0 (which means Q_l = Q̃_l = 0), then the Higgs sub-quiver cannot end at the l-th position, because from (4.122) we have A_{l−1} Ã_{l−1} = 0, i.e. we need to introduce a cut at the (l − 1)-th position. However, if f_1 = 0 (which means Q_1 = Q̃_1 = 0), the Higgs sub-quiver can still end at the 1st position, as can be seen from the corresponding vacuum equation. The Hilbert series of quiver (4.118) can be obtained as follows. The baryonic generating function for the Higgs sub-quiver is G_Higgs(t; x^{(1)}, . . . , x^{(l)}; m), where we indicate m in blue to emphasise that it enters due to the presence of the T(U(1)) link. The baryonic generating function for the Coulomb sub-quiver is similar to (4.40), and is given in (4.135). The total Hilbert series of (4.118) follows by combining the two.

Example. Let us consider the following quiver, and assume that k ≥ 0. In this case, we have K = k and F_C = f + f̃. The Hilbert series is then (4.138); hence, the moduli space of this quiver is C^2/Z_{3+k+F_C}, generated by ϕ and the dressed monopole operators, subject to the corresponding quantum relation.

The case with more than one cut. In this case, the original quiver is divided into many sub-quivers. The parts that are not connected to T(U(1)) can be analysed as in section 4.3, and the parts that are connected to T(U(1)) can be analysed as in section 4.4.
More examples
4.5.1 One J_k-fold and one NS5- or D5-brane

Let us consider the following model. Upon applying S-duality to the above system, we obtain the corresponding configuration with a single D3-brane. Both of these models are analysed in detail around (4.82) and (4.113), respectively. The moduli spaces of these models are non-trivial if and only if k = 2, in which case they are isomorphic to C^2.
One (p, q)-brane and one NS5-brane
The techniques that we introduced in section 4 are particularly useful for studying in a systematic way the moduli space of quiver gauge theories associated to (p, q)-brane systems. Let us consider for instance the following brane system. For simplicity, let us take (p, q) to be the following value: (p, q) = J_{k_3} J_{k_2} J_{k_1} (1, 0), so that p and q are fixed in terms of k_1, k_2 and k_3. Performing a duality transformation, J^{−1}_{k_3}, we can study the following SL(2, Z)-equivalent problem. The associated quiver, written in N = 2 language, has vacuum equations in which we emphasise the contributions due to the mixed CS levels in blue. There are two branches, analysed as follows.
Branch I: A Ã ≠ 0 and B B̃ ≠ 0
In this case, the F-terms imply that the scalars take common values within each sub-quiver; moreover, two constraints are still present, fixing ϕ and ϕ̃ in terms of the mesons. The gauge charges and the R-charges of V_m follow. Let us now determine the moduli space and compute the Hilbert series of this theory. The baryonic generating function is given by (4.155), and the Hilbert series of (4.147) is thus (4.156). This is the Molien formula for the Hilbert series of C^4/Γ(p, q) [44], with p = k_1 and q = k_1 k_2 − 1, where Γ(p, q) is a discrete group acting on the four complex coordinates of C^4 by phase rotations whose weights are determined by p and q. This is in agreement with [25,45].
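The Molien formula quoted here is straightforward to evaluate once the weights of the cyclic action are specified; in the sketch below (ours, for illustration) the weights are left as inputs, to be fixed by the explicit action of Γ(p, q) given in [44]:

import cmath

def molien_cyclic(order, weights, tval):
    """Molien series of C^4/Z_order at t = tval, for the cyclic action
    z_a -> w^{weights[a]} z_a with w = exp(2 pi i / order). The weights
    appropriate to Gamma(p, q) are those of [44]; here they are inputs."""
    w = cmath.exp(2j * cmath.pi / order)
    total = 0
    for g in range(order):
        det = 1
        for a in weights:
            det *= (1 - tval * w**(g * a))
        total += 1 / det
    return (total / order).real

# Illustrative evaluation (these weights are for demonstration only):
print(molien_cyclic(5, (1, -1, 2, -2), 0.1))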
Branch II: A Ã = 0 or B B̃ = 0. The second branch appears when we set one of the bifundamental hypermultiplets to zero, say A Ã = 0. In this case, (4.148) implies again that the scalars in the two sub-quivers are identified, and we have additional relations. The gauge charges of V_m are given in (4.162), and the R-charge of V_m is R[V_m] = 0. The gauge-invariant dressed monopole operators are V_±, for k_1 k_2 − 1 > 0; if k_1 k_2 − 1 < 0, we replace B^{k_1 k_2 −1} by B̃^{−(k_1 k_2 −1)} and B̃^{k_1 k_2 −1} by B^{−(k_1 k_2 −1)} in the corresponding expressions. They carry R-charges R[V_±] = |k_1 k_2 − 1|/2. Since (k_1 k_2 − 1)ϕ = B B̃, we see that these dressed monopole operators satisfy the quantum relation (4.165). Hence the moduli space is C^2/Z_{|k_1 k_2 −1|}. Note that (4.162) implies that the magnetic lattice given by m jumps by a multiple of k_1, since m ∈ Z. If we further require that the magnetic lattice does not jump, we can impose the further condition k_1 = ±1. In this case, the brane system contains a (±1, 1) brane and a (−1, −k_2) brane. Applying T^{∓1} to this system, (±1, 1) becomes (±1, 0), and (−1, −k_2) becomes (−1, −k_2 ∓ 1). This gives rise to the ABJM theory with CS levels k_2 − 1 and −k_2 + 1. Indeed, Branch I (which is C^4/Z_{|k_2 −1|}) and Branch II (which is C^2/Z_{|k_2 −1|}) are the geometric branch of the ABJM theory and the moduli space of the half-ABJM theory, respectively.
Multiple (p, q) and NS5-branes
An interesting generalisation of the example presented in the previous subsection is the following brane configuration. As before, let us take for simplicity (p, q) = J_{k_3} J_{k_2} J_{k_1} (1, 0). Performing a transformation, J^{−1}_{k_3}, we can study the following SL(2, Z)-equivalent systems. The quiver associated with the brane system on the right is (4.168), where the numbers of gauge nodes are l_1 + 1 and l_2 + 1 on the upper and the lower sides of the quiver, respectively. In the N = 2 notation, this can be written with vacuum equations in which we highlight in blue the contributions from the mixed CS terms due to T(U(1)) and T̄(U(1)). We focus on the geometric branch, corresponding to the case ϕ_i = ϕ for all i = 1, . . . , l_1 + 1 and ϕ̃_i = ϕ̃ for all i = 1, . . . , l_2 + 1. Imposing these conditions, we are left with constraints on the mesons. The R-charge of V_m is zero, and the gauge charges follow. Now we have all the ingredients to compute the baryonic generating function. The Hilbert series of the geometric branch of (4.168) is then (4.176). Note that for l_1 = l_2 = 1 we recover (4.156), as expected.
The brane configuration on the left in (4.182) is that of the ABJM theory with CS levels (k_1, −k_1). Thus, we expect that the moduli space of the field theories associated with these brane configurations has two branches, namely (1) C^4/Z_{|k_1|}, which is the geometric moduli space of the ABJM theory, and (2) C^2/Z_{|k_1|}, which is the moduli space of the half-ABJM theory, where a pair of bifundamental chiral multiplets of the ABJM theory is set to zero.
Let us derive these moduli spaces for the theory associated with the configuration on the right in (4.182). The quiver diagram, rewritten in the N = 2 notation, has vacuum equations in which we indicate the contributions from the mixed CS terms due to T(U(1)) and T̄(U(1)) in blue. Let us assume that A and Ã are non-zero. Therefore ϕ_1 = ϕ_2 = ϕ (and the corresponding magnetic fluxes are set equal: m_1 = m_2 = m). Thus, we have two branches: (1) Q = Q̃ = 0, and (2) ϕ_3 = 0.
Branch I: Q = Q̃ = 0. The moduli space is parametrised by A Ã, ϕ, ϕ_3 and the monopole operators, with the constraint (4.186) from the vacuum equations. The monopole operator V_m, with flux m = (m, m, m_3), carries gauge and R-charges for which q_3[V_m] = 0, since T(U(1)) and T̄(U(1)) contribute m and −m respectively, and the non-trivial contribution to the R-charge is due to the presence of the flavour. The baryonic generating function carries an overall factor (1 − t^2)^{−1}, due to the fact that only one among ϕ and ϕ_3 gets fixed. The Hilbert series is thus given by (4.189). This turns out to be equal to the Hilbert series (4.190) of C^4/Z_{|k_1|}, in agreement with the geometric branch of the ABJM theory.
Branch II: ϕ_3 = 0. In this case, the vacuum equations imply that Q Q̃ = 0. The moduli space is generated by the dressed monopole operators, which satisfy a quantum relation of the standard C^2/Z_{|k_1|} form. Hence, this branch is isomorphic to C^2/Z_{|k_1|}, which is the moduli space of the half-ABJM theory.
Comments on abelian theories with zero Chern-Simons levels
Let us now revisit abelian theories with zero CS levels, namely those studied in section 3 with N = 1, from the point of view of this section. One can start by taking simple examples, comparing (3.23) to (4.113): we set N = 1 and n = 1 in the former and k = 0 in the latter. Indeed, as we discussed below (4.113), such a theory has a trivial Coulomb branch, because the scalars in the vector multiplets are set to zero by the vacuum equations. This is perfectly consistent with the proposal in section 3, namely that the scalar fields in the vector multiplets of the gauge nodes that are connected by T(U(N)) are frozen. Moreover, from (3.26), we see that when n = 1 the Higgs branch is also trivial; this is also in accordance with the analysis below (4.113), where the meson vanishes. Hence the two approaches, one presented in section 3 and the other presented in this section, yield the same results. The same result can be derived easily for the mirror theory (3.24), with N = 1 and n = 1, and (4.82) with k_1 = k_2 = 0.
This analysis can be generalised to other models discussed in this section. When we set all CS levels to zero, the vacuum equations set the scalars in the vector multiplets corresponding to the gauge groups that are connected by T (U (1)) to zero. Other parts of the quiver may still contribute non-trivially to the moduli space.
Non-abelian theories with non-zero Chern-Simons levels
In this section, we focus on non-abelian quiver theories that contain T (U (N )) and/or T (U (N )) theories as edges of the quiver. In terms of a brane system, these theories involve multiple D3-branes, along with J-folds and possibly with other types of branes. In contrast to the abelian case, we do not have a general prescription of computing the Hilbert series of the geometric branch of non-abelian theories. Nevertheless, for theories that arise from N M2-branes probing Calabi-Yau 4-fold singularities, we expect that the geometric branch is the N -fold symmetric product of such a Calabi-Yau 4-fold. In such cases, we can analyse the Hilbert series for each configuration of magnetic fluxes. Let us demonstrate this in the following example.
One (k, 1) and one (1, k) brane. Let us consider the generalisation of (4.146) to non-abelian gauge groups.
In section 4.5.2, we saw that the geometric branch of the moduli space for the abelian theory (N = 1) is a Calabi-Yau 4-fold (referred to as Branch I in that section); the latter is identified as $\mathbb{C}^4/\Gamma(k_1, k_1k_2 - 1)$. For general N, we expect that the geometric branch of (5.1) is the N-fold symmetric product of $\mathbb{C}^4/\Gamma(k_1, k_1k_2 - 1)$, namely $\mathrm{Sym}^N\!\left(\mathbb{C}^4/\Gamma(k_1, k_1k_2 - 1)\right)$. Let us focus on N = 2 in the following discussion. The Hilbert series of $\mathrm{Sym}^2\!\left(\mathbb{C}^4/\Gamma(k_1, k_1k_2 - 1)\right)$ is given by the second symmetric power of $H^{(4.147)}(t, z)$, where $H^{(4.147)}(t, z)$ is given by (4.156). This computation can be split into five different cases depending on the fluxes and the residual gauge symmetries.
1. The magnetic fluxes for the two nodes on the upper edge are both (m, m), and the magnetic fluxes for the two nodes on the lower edge are both (n, n). In this case, the residual gauge symmetry is $U(2) \times U(2) \times U(2) \times U(2)$. The Hilbert series in this case can be computed as a second-rank symmetric product of the abelian case (which is a product of two half-ABJM theories). The result is

$$\sum_{m,n \in \mathbb{Z}} \Big[ g_{\mathrm{ABJM}/2}(t;\, k_1 m - n)^2\, g_{\mathrm{ABJM}/2}(t;\, k_2 n - m)^2 + g_{\mathrm{ABJM}/2}(t^2;\, k_1 m - n)\, g_{\mathrm{ABJM}/2}(t^2;\, k_2 n - m) \Big]\, z^{2(m+n)},$$

where the terms indicated in blue are due to the mixed CS terms arising from the presence of $T(U(2))$ and $\overline{T}(U(2))$. Let us report the unrefined Hilbert series, for $k_1 = 1$ and $k_2 = 2$, for this case up to order $t^{12}$:

$$H_{N=2,\,k=(1,2)}(t, z = 1) = 1 + 6t^2 + 22t^4 + 62t^6 + 147t^8 + 308t^{10} + 588t^{12} + \dots \qquad (5.5)$$
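For a variety X with unrefined Hilbert series H(t), the symmetric square satisfies $H[\mathrm{Sym}^2 X](t) = \tfrac{1}{2}\big(H(t)^2 + H(t^2)\big)$, which is the structure visible in the flux sum above (the $g(t)^2$ and $g(t^2)$ terms). A minimal numerical sketch of this operation (ours, not the paper's refined flux-by-flux computation; we use $H(t) = 1/(1-t)^4$ for $\mathbb{C}^4$ as a stand-in):

# Symmetric-square Hilbert series as truncated power series:
# H[Sym^2 X](t) = (H(t)^2 + H(t^2)) / 2.
from math import comb

ORDER = 12

def mul(a, b):
    # product of two truncated series (lists of coefficients)
    return [sum(a[i] * b[n - i] for i in range(n + 1)) for n in range(ORDER + 1)]

def substitute_t2(a):
    # a(t) -> a(t^2), truncated at ORDER
    out = [0] * (ORDER + 1)
    for i, c in enumerate(a):
        if 2 * i <= ORDER:
            out[2 * i] = c
    return out

# Stand-in: H(t) = 1/(1-t)^4 = sum_n binom(n+3,3) t^n, the Hilbert series of C^4
H = [comb(n + 3, 3) for n in range(ORDER + 1)]

sym2 = [(x + y) // 2 for x, y in zip(mul(H, H), substitute_t2(H))]
print(sym2)  # coefficients of the Hilbert series of Sym^2(C^4)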
In fact, we can also compute (5.5) using the Molien integration [46]; we have checked that the resulting expression (5.6) agrees with (5.5) up to order $t^{20}$. Here $z_1, \dots, z_4$ are fugacities for the gauge groups $SU(2)_{1,2,3,4}$ that are subgroups of the $U(2)_{1,2,3,4}$ gauge groups corresponding to the top-left, top-right, bottom-left and bottom-right nodes in (5.1), respectively. The fugacities $q_1$ and $q_2$ correspond to the two diagonal $U(1)$ gauge groups that are subgroups of $\mathrm{diag}(U(2)_1 \times U(2)_2)$ and $\mathrm{diag}(U(2)_3 \times U(2)_4)$ of (5.1), respectively. $H[\mathbb{C}^2/\mathbb{Z}_2](t, z)$ denotes the Hilbert series of the space $\mathbb{C}^2/\mathbb{Z}_2$, which is the Higgs and the Coulomb branch of $T(U(2))$ and $\overline{T}(U(2))$. The first and the second terms in the PE denote the contributions from the bifundamental hypermultiplets under $U(2) \times U(2)$. The last line of (5.6) deserves some comments. For a theory with a Lagrangian, these terms would represent the contribution from the F-terms. In this case, however, $T(U(2))$ and $\overline{T}(U(2))$ do not have a manifest Lagrangian description in the quiver. Nevertheless, such terms can still be interpreted as "effective F-terms", where at $t^2$ there are relations that transform in the adjoint representations of $\mathrm{diag}(SU(2)_1 \times SU(2)_2)$ and $\mathrm{diag}(SU(2)_3 \times SU(2)_4)$. There is also a relation at order $t^4$ and a syzygy (a relation among the relations) at order $t^8$.

2. The magnetic fluxes for the two nodes on the upper edge are both $(m_1, m_2)$, with $m_1 > m_2$, and the magnetic fluxes for the two nodes on the lower edge are both $(n, n)$.
In this case, each of the $U(2)$ gauge groups on the upper edge is broken to $U(1)^2$, while each of the $U(2)$ gauge groups on the lower edge remains unbroken. In this case, $T(U(2))$ is expected to become $T(U(1))^2$ (and similarly $\overline{T}(U(2))$ becomes $\overline{T}(U(1))^2$).
Conclusion and open questions
In this paper, we study the moduli space of quiver theories arising from the Hanany-Witten brane system, with an insertion of S-folds. In the case of S-flips, the quiver contains a T(U(N)) link between two U(N) groups, both with zero Chern-Simons levels. We find that such theories have both Higgs and Coulomb branches. The Higgs branch is given by the hyperKähler quotient described in the beginning of section 3, and the Coulomb branch can be computed in a similar way to that of the usual 3d N = 4 gauge theories, with the remark that the vector multiplets of the gauge nodes linked by T(U(N)) are frozen and do not contribute to the Coulomb branch. We check that this proposal is consistent with mirror symmetry. In the case of J-folds, we systematically examine the moduli space of the abelian theories with T(U(1)) links and non-zero Chern-Simons levels. With the inclusion of bifundamental and fundamental hypermultiplets into the quiver, the moduli space can be non-trivial, and in many cases the vacuum equations admit many branches of solutions. Finally, for the case of non-abelian theories with T(U(N)) links and non-zero Chern-Simons levels, we do not have a general prescription for computing the moduli space of such theories. Nevertheless, we demonstrate the computation of the Hilbert series for an example that belongs to a special class of models arising from multiple M2-branes probing Calabi-Yau 4-fold singularities. The results in this paper lead to a number of open questions. First of all, it would be nice to find a general prescription for computing the moduli space of non-abelian theories with T(U(N)) links, non-zero Chern-Simons levels and, possibly, bifundamental and fundamental hypermultiplets. Secondly, one could introduce an orientifold plane into the brane system and study the corresponding quiver theories. For example, if we introduce an $O3^-$ plane on top of the D3-brane segment that passes through the S-fold, an expectation is that we should obtain a quiver that contains a T(SO(2N)) link connecting two SO(2N) gauge groups. Finally, one could ask whether one can replace the T(U(N)) link between two U(N) gauge groups by the $T^{\sigma}_{\sigma}(U(N))$ link, with an appropriate $\sigma$, between two $G_{\sigma}$ gauge groups (where $G_{\sigma}$ is the subgroup of U(N) that is left unbroken by $\sigma$). Since $T^{\sigma}_{\sigma}(U(N))$ is invariant under mirror symmetry, we expect it to be a good candidate to replace T(U(N)) in the quiver diagram. We hope to address these problems in future work.
A Theories with multiple consecutive J-folds
In this section, we generalise our discussion to theories dual to a brane system containing (m + 1) consecutive J-folds.
The quiver, with T(U(1)) links between consecutive gauge nodes, is given in (A.1). The vacuum equations follow; as in the preceding subsection, we analyse the solutions of these equations according to the VEVs of the bi-fundamental fields that are set to zero (i.e., the cuts in the quiver).
No cut in the quiver
The above system of equations can be written in a compact way in terms of a matrix $M_{\mathrm{CS}}$. Let us now compute gauge-invariant dressed monopole operators. The last three sets of equations, once set to zero, constitute m equations in total; they give a unique solution for the tuple $(m_1, \dots, m_m)$ in terms of the flux $m$. We denote such a solution by $m^*(m)$, with components $m^*_i(m)$. It should be emphasised that $m$, $m^*_i$ (with $i = 1, \dots, m$), and the CS level $K_n$ must be integers. Such integrality, together with equations (A.8) and (A.11), puts a constraint on the possible values of $(k_1, \dots, k_m)$, as well as on their relation to $K_n$, in order to obtain a non-trivial moduli space. Note also that $m^*(1) + m^*(-1) = 0$.
For convenience, let us define the integer K. The generators of the moduli space are $\phi$ and $V_{\pm}$, subject to a quantum relation; the moduli space is indeed $\mathbb{C}^2/\mathbb{Z}_K$. We emphasise that the dependence of K on $k_1, \dots, k_m$ is due to $m^*_1(1)$.
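For orientation, in such rank-one cases the quantum relation that presents $\mathbb{C}^2/\mathbb{Z}_K$ is standardly of the following form (a hedged reconstruction; the normalization of the monopole operators varies with conventions):

$$ V_+ \, V_- \;=\; \phi^{K}, \qquad \text{so that} \qquad \mathbb{C}[\phi, V_+, V_-]\big/\big(V_+V_- - \phi^{K}\big) \;\cong\; \mathbb{C}\big[\mathbb{C}^2/\mathbb{Z}_K\big]. $$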
This can easily be generalised to an arbitrary number of J-folds. The generalisation of (A.29) is
$$\operatorname{minor}_{1,1} M_{\mathrm{CS}} = \operatorname{minor}_{m+1,\,m+1} M_{\mathrm{CS}} = \pm 1. \qquad (A.31)$$
These two choices correspond to $\widetilde{m} = \pm m$. The integrality of $m^*(m, m)$ and $m^*(m, -m)$ imposes further constraints on the $k_j$. The analysis of the moduli space is similar to that presented after (4.53). | 2019-01-27T11:28:36.000Z | 2018-10-29T00:00:00.000 | {
"year": 2019,
"sha1": "d664bb9730d29846e2326b07b710f840a61d5f6e",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/JHEP01(2019)046.pdf",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "d664bb9730d29846e2326b07b710f840a61d5f6e",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Physics"
]
} |
221949358 | pes2o/s2orc | v3-fos-license | Standard bases for the universal associative conformal envelopes of Kac--Moody conformal algebras
We study the universal enveloping associative conformal algebra for the central extension of a current Lie conformal algebra at the locality level $N=3$. A standard basis of defining relations for this algebra is explicitly calculated. As a corollary, we find a linear basis of the free commutative conformal algebra relative to the locality $N=3$ on the generators.
Introduction
Conformal algebras, also known as Lie vertex algebras, were introduced in [16] as an algebraic tool to study the singular part of the operator product expansion (OPE) of chiral fields in 2-dimensional conformal field theory, going back to [5]. From the categorical point of view, a conformal algebra is just an algebra in the appropriate (pseudo-tensor) category M*(C[∂]) of modules over the polynomial algebra C[∂] in one variable [2]. The pseudo-tensor structure (see [4]) reflects the main features of multi-linear maps in the category of linear spaces: composition, identity, symmetric structure. These features are enough to define the basic notions like what is an algebra (associative, commutative, Lie, etc.), homomorphism, ideal, representation, module, cohomology. Therefore, the notion of a conformal algebra is a natural expansion of the notion of an "ordinary" algebra over C to the pseudo-tensor category M*(C[∂]). Namely, as an ordinary algebra is a linear space equipped with a bilinear product, a conformal algebra is a C[∂]-module V equipped with a C[∂]-bilinear map (a pseudo-product). A more convenient presentation for the operation * uses the language of a λ-product or of a family of n-products for all integers n ≥ 0 ([16], see also Section 2). Conformal algebras representing the singular part of OPE in vertex algebras are Lie algebras in the category M*(C[∂]), i.e., Lie conformal algebras. For example, if g is a Lie algebra then the free module C[∂] ⊗ g equipped with the pseudo-product $a * b = (1 \otimes 1) \otimes_{C[\partial]} [a, b]$, a, b ∈ g, is a Lie conformal algebra denoted Cur g (the current conformal algebra). If (·|·) is a bilinear symmetric invariant form on g then Cur g has a 1-dimensional central extension K(g) defined by $[a_{\lambda} b] = [a, b] + \lambda\,(a|b)\,e$, where e is a central element and ∂e = 0. For example, in the Kac-Moody vertex algebra V(g) [12] the singular part of the OPE on the generating fields is described by this particular structure K(g), called a Kac-Moody conformal algebra.
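In terms of n-products, this λ-bracket amounts to the following (a standard translation; only the 0-th and 1-st products are non-zero on the generators):

$$ [a_{(0)}\, b] = [a, b], \qquad [a_{(1)}\, b] = (a|b)\, e, \qquad [a_{(n)}\, b] = 0 \ \ (n \ge 2), \qquad [a_{(n)}\, e] = [e_{(n)}\, a] = 0. $$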
As in the case of ordinary algebras, an associative conformal algebra C turns into a Lie one with respect to the commutator $[a * b] = (a * b) - (\tau \otimes_{C[\partial]} 1)(b * a)$, a, b ∈ C, where τ is the switching map on $C[\partial]^{\otimes 2}$. However, not all Lie conformal algebras embed into associative ones in this way [26]. It is an open problem whether every finite (i.e., finitely generated as a C[∂]-module) Lie conformal algebra embeds into an associative conformal algebra with respect to the conformal commutator. Even for the class of quadratic conformal algebras [27] (see also [15]), it remains unknown in general whether every such Lie conformal algebra embeds into an appropriate associative one.
A routine way to solve this kind of problem is to construct a universal envelope. In general, such an algebra is defined by generators and relations. For a Lie conformal algebra, there exists a lattice of universal enveloping associative conformal algebras, each related to an (associative) locality bound on the generators ([26], see also Section 3.4). In order to prove (or disprove) the embedding of a Lie conformal algebra into its universal enveloping associative conformal algebra, one needs to know the normal form of elements in the latter algebra.
A general and powerful method for finding normal forms in an algebra defined by generators and relations is to calculate a standard (or Gröbner-Shirshov) basis of defining relations. The idea goes back to Newman's Diamond Lemma [23], see also [7,6]. In recent years, the Gröbner-Shirshov bases theory has been developed to serve the problem of combinatorial analysis of various algebraic structures, see [9]. For associative conformal algebras it was initially invented in [8], and later developed in [24] and [20]. In this paper, we use the latter approach, exposed in a form convenient for actual computation: we consider defining relations in a conformal algebra as rewriting rules on a module over an appropriate associative algebra (the Gröbner-Shirshov basis of the latter algebra is known).
A series of particular observations made in [21], [22] shows that, for all considered examples of quadratic Lie conformal algebras L, it is enough to consider universal associative conformal envelopes U relative to the locality bound N = 3 to get an injective mapping L → U. This is one of the reasons why we focus on the locality bound N = 3 for the envelopes of current conformal algebras, as they are particular examples of quadratic conformal algebras.
The main purpose of this paper is to find a standard (Gröbner-Shirshov) basis of defining relations for the universal enveloping associative conformal algebra of a Kac-Moody conformal algebra at locality level N = 3. As a corollary, we get an analogue of the Poincaré-Birkhoff-Witt Theorem (PBW-Theorem) stating that the associated graded conformal algebra obtained from the universal envelope of a current Lie conformal algebra with respect to the natural filtration is isomorphic to the free commutative conformal algebra. Note that the classical PBW-Theorem may be interpreted as a conformal one at the locality level N = 1: for a Lie algebra g, its "ordinary" universal envelope U(g) gives rise to the conformal algebra Cur U(g)₀, which is exactly the universal enveloping conformal algebra of Cur g with N = 1 (here U(g)₀ is the augmentation ideal of U(g)).
There are several reasons for studying the universal envelopes of Cur g at higher locality than N = 1.
First, the N = 1 envelope Cur U(g)₀ does not reflect the homological properties of Cur g. For example, if g is a simple finite-dimensional Lie algebra then the second cohomology group H²(Cur g, k) is one-dimensional [3]. The corresponding central extension is the Kac-Moody conformal algebra K(g) representing the singular part of the Kac-Moody vertex algebra [12]. On the other hand, it is easy to find that the second Hochschild cohomology group of Cur U(g)₀ with coefficients in the trivial 1-dimensional module is zero: there are no nontrivial central extensions. Our results show that the universal enveloping associative conformal algebras for Cur g at locality levels N = 2, 3 do have a nontrivial central extension, which is exactly the universal envelope of K(g).
The second reason is related to Poisson algebras. Assume P is an ordinary commutative algebra with a Poisson bracket {·, ·}. Then Cur P may be considered as a Lie conformal algebra, since P is a Lie algebra relative to the Poisson bracket. There is a conformal representation ρ of Cur P on itself. The study of this representation provides a way to get new results in (quadratic) conformal algebras as well as in Poisson algebras [21,22]. The conformal linear operators ρ(a) ∈ Cend(Cur P), ρ(a)_λ : f ↦ (a_(λ) f), are local to each other, and the locality bound is N = 3. Indeed, according to the definition of a conformal representation [11,16], one computes the corresponding expression for a, b, f ∈ P. If abf ≠ 0 then the right-hand side is of degree 2 in λ, which means N(ρ(a), ρ(b)) = 3 in Cend(Cur P). Therefore, the corresponding associative envelope belongs to the class of envelopes with locality N = 3.
The third reason to study the case N = 3 comes from the following relation between commutative conformal algebras and Novikov algebras. Suppose C is a commutative conformal algebra and M is a subset of C such that N_C(a, b) ≤ 3 for all a, b ∈ M. Then M generates an ordinary (nonassociative) subalgebra N(M) in the space C considered relative to the single product x • y = x_(1) y. Indeed, all elements of N(M) are local to each other with locality bound 3. Moreover, the following relations hold for all x, y, z ∈ N(M):
$$(x \bullet y) \bullet z - x \bullet (y \bullet z) = (y \bullet x) \bullet z - y \bullet (x \bullet z), \qquad (x \bullet y) \bullet z = (x \bullet z) \bullet y.$$
These identities are known to define the variety of Novikov algebras, which initially appeared in [1], [13]. In order to perform a systematic study of this relation, one needs to know the structure of the universal object in the category of commutative conformal algebras with locality bound N = 3 on the generators.
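The two identities above can be sanity-checked on the classical example of a commutative algebra with a derivation, x • y = x·(dy/dt) in k[t]. A minimal sympy sketch (ours; the function name and the test polynomials are arbitrary):

# Check the Novikov identities (left symmetry of the associator and
# right commutativity) for the product x • y = x * dy/dt on k[t].
import sympy as sp

t = sp.symbols('t')

def circ(x, y):
    return sp.expand(x * sp.diff(y, t))

x, y, z = t**2 + 1, t**3, t**2 - 5  # arbitrary test polynomials

# left symmetry: (x•y)•z - x•(y•z) = (y•x)•z - y•(x•z)
lhs = circ(circ(x, y), z) - circ(x, circ(y, z))
rhs = circ(circ(y, x), z) - circ(y, circ(x, z))
assert sp.simplify(lhs - rhs) == 0

# right commutativity: (x•y)•z = (x•z)•y
assert sp.simplify(circ(circ(x, y), z) - circ(circ(x, z), y)) == 0
print("Novikov identities hold for these samples")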
For all these reasons, we study the universal enveloping associative conformal algebras for Kac-Moody conformal algebras K(g) relative to the locality level N = 3. The corollaries of the main result of the paper (Theorem 2) allow us to obtain the structure of the universal envelopes for current Lie conformal algebras at N = 3 and also to describe the free commutative conformal algebra at the same locality level. In practice, we find a standard (Gröbner-Shirshov) basis of defining relations for these conformal algebras and derive an analogue of the PBW-Theorem.
Preliminaries in conformal algebras
The definition of a conformal algebra as an algebra in an appropriate pseudotensor category [2] corresponds to the convenient algebraic approach using λ-brackets [16] if it is presented in terms of operads associated with linear algebraic groups [18].
Let G be a linear algebraic group over a field k of characteristic zero, and let $H_G = k[G]$ be the Hopf algebra of regular functions on G. For every $H_G$-module V there is a non-symmetric operad (let us denote it $V_G$) defined as follows. Given n ∈ {1, 2, …}, set $V_G(n)$ to be the space of all regular and 3/2-linear maps from $G^{n-1}$ to the space of k-polylinear maps on V. The condition of regularity means that such a map f may be presented by a polynomial function with coefficients in the space $\mathrm{Hom}(V^{\otimes n}, V)$ of k-polylinear maps on V, and the 3/2-linearity (sesqui-linearity) is a compatibility condition between f and the $H_G$-module structure on V. In particular, $V_G(1)$ is the space of all $H_G$-linear transformations of V and thus contains the identity map id.
The composition rule in $V_G$ is defined by the following partial composition. If $f \in V_G(n)$, $g \in V_G(m)$, $i \in \{1, \dots, n\}$, then $f \circ_i g$ is given by formula (1), for $\lambda_i, \mu_j \in G$, where $\circ_i$ in the right-hand side stands for the ordinary partial composition of polylinear maps. In particular, for i = n the partial composition is equal to $f(\lambda_1, \dots, \lambda_{n-1}) \circ_n g(\mu_1, \dots, \mu_{m-1})$. It is easy to see that the resulting maps are indeed regular and 3/2-linear.
One may easily check that the partial composition in $V_G$ defined above meets the sequential, parallel, and unit axioms [10, Definition 3.2.2.3], and thus this is indeed a non-symmetric operad with a well-defined composition rule. Suppose the group G is abelian. Then $V_G(n)$ has a natural action of the symmetric group $S_n$, defined in the following way. If $f \in V_G(n)$ and (1i), i = 2, …, n, is a transposition in $S_n$ then, for i < n, $f^{(1i)}$ is obtained by the corresponding permutation of arguments (here the action of (1i) in the right-hand side is just the permutation of arguments in a polylinear map). For i = n, the definition is slightly more complicated: if f is presented by a polynomial function then $f^{(1n)}(\lambda_1, \dots, \lambda_{n-1})$ is given by an appropriate regular substitution of the arguments. The composition rule $\gamma^r_{m_1, \dots, m_r}$ is equivariant (see, e.g., [10, Definition 5.2.1.1]) since the structure obtained is equivalent to the structure of an $H_G$-module operad defined over a cocommutative Hopf algebra [2]. The classes of conformal [16] and Z-conformal [14] algebras naturally appear in the next step, if we choose G to be a connected linear algebraic group of dimension 1 (the affine line and $GL_1$, respectively). For a non-connected group G with the identity component denoted $G_0$, the structure of a conformal algebra over G is naturally interpreted as a $G/G_0$-graded conformal algebra over $G_0$ [19]. If $G = \mathbb{A}^1 = (k, +)$, then $H_G$ is the polynomial Hopf algebra k[∂]. For example, if O = As is the operad governing the variety of associative algebras (generated by $\mu = x_1x_2 \in \mathrm{As}(2)$ modulo the relation $\mu \circ_1 \mu = \mu \circ_2 \mu$) then an associative conformal algebra structure on a k[∂]-module V is given by an image of $\mu$, a λ-product $(u_{(\lambda)} v)$ for u, v ∈ V, which is associative in the sense that the two partial compositions of $\mu$ coincide. By (1), the latter means an identity of λ-products (to compute the right-hand side, put $\mu_1 = \lambda$ and $\lambda_1 = \mu$ in (1)). According to the same scheme, a Lie conformal algebra structure on a k[∂]-module V is a morphism from the operad Lie governing the variety of Lie algebras to $V_{\mathbb{A}^1}$. To define such a morphism, it is enough to fix a 3/2-linear map $\mu \in V_{\mathbb{A}^1}(2)$ satisfying two relations that represent anti-commutativity and the Jacobi identity, respectively. In the sequel, we will use the notation $(\cdot_{(\lambda)} \cdot)$ for the operation on an associative conformal algebra and $[\cdot_{(\lambda)} \cdot]$ for Lie conformal algebras.
Since there is a morphism of operads $(-): \mathrm{Lie} \to \mathrm{As}$ sending $\mu$ to $f - f^{(12)}$, every associative conformal algebra turns into a Lie conformal algebra relative to the conformal commutator. For an associative conformal algebra V defined via a morphism of operads $\mathrm{As} \to V_{\mathbb{A}^1}$, let $V^{(-)}$ stand for the Lie conformal algebra obtained as the composition $\mathrm{Lie} \to \mathrm{As} \to V_{\mathbb{A}^1}$. The property of a commutator to be a derivation on an associative algebra may also be expressed as a relation in As(3). Translated to conformal algebras, it turns into identity (3) on an associative conformal algebra V. As in the case of ordinary algebras, $V \mapsto V^{(-)}$ is a functor from the category of associative conformal algebras to the category of Lie conformal algebras. In contrast to the case of ordinary algebras, this functor does not have a left adjoint when considered on the entire category of associative conformal algebras. However, if we restrict the class of associative conformal algebras by means of locality on the generators ([26], see Section 3.4 for details) then there is an analogue of the universal enveloping associative algebra for Lie conformal algebras.
In terms of "ordinary" algebraic operations, a conformal algebra is a linear space V equipped with a linear operator ∂ (the generator of the k[∂]-module structure) and a series of bilinear operations $(\cdot_{(n)} \cdot)$, n ∈ ℤ₊, extracted from the λ-product. These operations are called n-products. They have to satisfy the following properties: (C1) for all u, v ∈ V there exists N ∈ ℤ₊ such that $u_{(n)} v = 0$ for all n ≥ N; (C2) $(\partial u)_{(n)} v = -n\, u_{(n-1)} v$; (C3) $u_{(n)} (\partial v) = \partial(u_{(n)} v) + n\, u_{(n-1)} v$. The property (C1) is known as the locality axiom; (C2) and (C3) represent the 3/2-linearity. For every conformal algebra V, the locality function $N_V(u, v)$ is the minimal N in (C1); associativity may then be expressed in terms of n-products, for all u, v, w ∈ V and n, m ∈ ℤ₊. In a similar way, one may rewrite the identities defining the class of Lie conformal algebras. Given a set X and a function N : X × X → ℤ₊, there exists a unique (up to isomorphism) associative conformal algebra, denoted Conf(X, N), which is universal among all associative conformal algebras V generated by X such that $N_V(x, y) \le N(x, y)$ for all x, y ∈ X [25]. The details of the construction of Conf(X, N) are stated in Section 3.3.
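For reference, the standard dictionary between the λ-product and the n-products (a well-known formula rather than anything specific to this paper) is

$$ (u_{(\lambda)}\, v) \;=\; \sum_{n \ge 0} \frac{\lambda^n}{n!}\, (u_{(n)}\, v), $$

which is a polynomial in λ precisely by the locality axiom (C1).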
Gröbner-Shirshov bases for associative conformal algebras
3.1. Rewriting system and standard bases for associative algebras. In this section, we briefly describe the well-known technique of standard bases (Gröbner-Shirshov bases) in associative algebras in order to fix the notations. The usual exposition of this technique requires a proper ordering of the monomials. However, the core statements laying in the foundation of the approach do not need a monomial ordering.
Let B be a set and let B* stand for the set of all words in B (including the empty word). The free associative algebra (with a unit) over the field k generated by B is denoted k⟨B⟩. Suppose Σ is a family of pairs (u, f) called rewriting rules, where u ∈ B*, f ∈ k⟨B⟩. We will write such a pair as (u → f), since the family Σ determines an oriented graph G(B, Σ) as follows. The vertices of G(B, Σ) are the elements of k⟨B⟩; two vertices g and h are connected with an edge (g → h) if and only if there is a rewriting rule u → f in Σ and a summand of the form αw in g with w containing the subword u. In other words, h is obtained from g by replacing an occurrence of the subword u with the polynomial f.
The graph G(B, Σ) splits into connected components (in the non-oriented sense) which exactly correspond to the elements of the quotient k⟨B | Σ⟩ = k⟨B⟩/(Σ), where (Σ) stands for the ideal in k⟨B⟩ generated by all u − f for (u → f) ∈ Σ. In some cases, there is a way to check algorithmically whether two vertices g, h ∈ k⟨B⟩ belong to the same connected component of G(B, Σ), i.e., whether the images of g and h are equal in k⟨B | Σ⟩.
An oriented graph is called a rewriting system if there are no infinite oriented paths (in particular, no oriented cycles). In a rewriting system, for every vertex g there is a nonempty set T(g) of terminal vertices t attached to g, i.e., such that there is a path g → ⋯ → t but there are no edges originating at t. A rewriting system is confluent if for every vertex g the set T(g) contains a single vertex. The most natural way to guarantee that G(B, Σ) is a rewriting system is to make the set B* well-ordered relative to an order ≤ such that u ≤ v implies wu ≤ wv and uw ≤ vw for all u, v, w ∈ B* (i.e., ≤ is a monomial order), and u > f for all (u → f) ∈ Σ (i.e., u is greater than every monomial in f).
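A concrete instance of a monomial order is the deg-lex order used later in the paper: compare lengths first, then lexicographically. A minimal sketch (ours), with letters encoded by their positions in the well-ordering of B:

# deg-lex comparison of words over an ordered alphabet
def deg_lex_less(u, v):
    """Return True iff word u < v in the deg-lex order."""
    if len(u) != len(v):
        return len(u) < len(v)
    return u < v  # Python compares tuples lexicographically

assert deg_lex_less((3,), (1, 1))          # a shorter word is always smaller
assert deg_lex_less((1, 2, 3), (1, 3, 2))  # same length: lexicographic
assert not deg_lex_less((2, 1), (2, 1))    # the order is strict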
To check the confluence of a rewriting system G(B, Σ) one may apply the Diamond Lemma, which originates from [23]. The latter states that a rewriting system is confluent if and only if for every "fork" (a pair of edges w → g₁, w → g₂) there exist a vertex h and two oriented paths g₁ → ⋯ → h, g₂ → ⋯ → h. If the rewriting system is G(B, Σ) then it is enough to check the diamond condition for two kinds of forks: those in which the principal words u₁ and u₂ of two rules overlap inside w (an intersection), and those in which one principal word contains the other (an inclusion). In both cases, if there exist oriented paths g₁ → ⋯ → h and g₂ → ⋯ → h for an appropriate polynomial h then we say that the composition of u₁ → f₁ and u₂ → f₂ relative to the word w is confluent modulo Σ. Theorem 1 ([6,7]). Suppose a set of rewriting rules Σ in the free associative algebra k⟨B⟩ defines a rewriting system G(B, Σ). If every composition of rewriting rules from Σ is confluent modulo Σ then G(B, Σ) is a confluent rewriting system, i.e., Σ is a Gröbner-Shirshov basis (GSB).
Let Σ respect a monomial order ≤ on B * . Then G(B, Σ) is a rewriting system and the confluence of a composition may be replaced with a more convenient condition.
Corollary 1 ([7]). If every composition of rewriting rules from Σ can be presented in the form (4), i.e., as a linear combination of elements a(u − f)b with (u → f) ∈ Σ and aub smaller than the initial word w, then Σ is a GSB. In the actual computation, we will often apply the following trick to show the confluence of a fork w → g₁, w → g₂: find some paths g₁ → ⋯ → h₁ and g₂ → ⋯ → h₂ and then present h₁ − h₂ in the form (4).
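The rewriting procedure itself is mechanical: repeatedly replace an occurrence of a principal word u by the polynomial f. A minimal sketch (ours) of reduction to a terminal form in k⟨B⟩, illustrated on a toy rule unrelated to the paper:

# A polynomial in k<B> is a dict {word(tuple of letters): coefficient};
# each rule (u, f) replaces a subword u by the polynomial f.  With a
# terminating rule set this computes a terminal (normal) form; confluence
# is exactly what the Diamond Lemma is needed for.
def find(word, u):
    for i in range(len(word) - len(u) + 1):
        if word[i:i + len(u)] == u:
            return i
    return -1

def normal_form(poly, rules):
    poly = dict(poly)
    changed = True
    while changed:
        changed = False
        for word, c in list(poly.items()):
            for u, f in rules:
                i = find(word, u)
                if i >= 0:
                    poly.pop(word)
                    for w2, c2 in f.items():
                        new = word[:i] + w2 + word[i + len(u):]
                        poly[new] = poly.get(new, 0) + c * c2
                        if poly[new] == 0:
                            poly.pop(new)
                    changed = True
                    break
            if changed:
                break
    return poly

# Toy rule ba -> ab + 1 (a Weyl-algebra-style relation):
rules = [(('b', 'a'), {('a', 'b'): 1, (): 1})]
print(normal_form({('b', 'b', 'a'): 1}, rules))  # {('a','b','b'): 1, ('b',): 2}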
3.2. Rewriting system for bimodules over associative algebras. Let A be an associative algebra (with a unit) and let M be a bimodule over A. Suppose A is generated by a subset B ⊂ A as an algebra and M is generated by a subset Y as an A-module. Then A is isomorphic to a quotient of the free associative algebra k⟨B⟩ modulo the ideal generated by a set of defining relations R ⊂ k⟨B⟩, i.e., A ≃ k⟨B | R⟩. Similarly, M is a quotient of the free A-bimodule A ⊗ kY ⊗ A generated by Y modulo a family of defining relations S. One may identify an element of S with a noncommutative polynomial in the variables B ∪ Y which is linear in Y.
The split null extension A ⊕ M is an associative algebra isomorphic to the quotient of the free algebra generated by B ∪ Y modulo the ideal generated by the union of R, S, and the words yb₁…bₙz, y, z ∈ Y, bᵢ ∈ B, n ≥ 0.
These relations reflect the properties of multiplication in A ⊕ M: M² = 0.
Remark 1. To consider left modules, it is enough to add the relations yb, y ∈ Y, b ∈ B, to reflect MA = 0.
Suppose we may choose a monomial u in each defining relation u − f of A ⊕ M (up to a scalar multiple) in such a way that the family Σ of all rewriting rules u → f defines a rewriting system G(B ∪ Y, Σ). Note that the defining relations of A ⊕ M are homogeneous relative to Y. All monomials of degree ≥ 2 in Y belong to the same connected component as zero, so it is enough to consider only the relations of degree 0 and 1 in Y; these are exactly the defining relations of A and of M, respectively. Therefore, the confluence test needs to be applied only to the forks starting at a word w which either belongs to B* or contains exactly one letter from Y. Hence, the compositions emerging in this rewriting system are exactly those described in [17].
3.3. Free associative conformal algebras. Recall the construction of a free associative conformal algebra Conf(X, N) generated by a set X relative to a given locality function N : X × X → Z + . From now on, denote by H the polynomial algebra k[∂].
By definition, Conf(X, N) is an associative conformal algebra generated by X which is universal in the class of all associative conformal algebras C generated by X such that the mutual locality of elements from X in C is bounded by N. Namely, for every associative conformal algebra C and for every map α : X → C such that N_C(α(x), α(y)) ≤ N(x, y) for all x, y ∈ X, there exists a unique homomorphism of conformal algebras ϕ : Conf(X, N) → C such that ϕ(x) = α(x) for all x ∈ X.
Proposition 1 ([25]). The free associative conformal algebra Conf(X, N) is a free H-module with a basis consisting of the monomials
a₁ (n₁) (a₂ (n₂) (a₃ (n₃) ⋯ (n_{k−1}) (a_k (n_k) a_{k+1}) … )),
where k ∈ ℤ₊, aᵢ ∈ X, and 0 ≤ nᵢ < N(aᵢ, a_{i+1}).

Remark 2. In a similar way, one may define the free associative commutative conformal algebra Com Conf(X, N) generated by a set X relative to a locality function N [26]. However, there was no explicit description of a linear basis of Com Conf(X, N) for N > 1. We will obtain such a description for N = 2, 3 as a byproduct in Section 4.
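For small parameters, the basis monomials of Proposition 1 are easy to enumerate mechanically. A minimal sketch (ours), representing a monomial by the list of pairs (aᵢ, nᵢ) followed by the last letter:

# Enumerate the H-module basis monomials a1 (n1) (a2 (n2) ( ... ak+1 ... ))
# of Conf(X, N) with 0 <= ni < N(ai, a_{i+1}), up to max_letters letters.
from itertools import product

def basis_words(X, N, max_letters):
    for k in range(1, max_letters + 1):          # k letters in total
        for letters in product(X, repeat=k):
            choices = [range(N(letters[i], letters[i + 1]))
                       for i in range(k - 1)]
            for ns in product(*choices):
                yield list(zip(letters[:-1], ns)) + [letters[-1]]

X = ['a', 'b']
N = lambda x, y: 3  # constant locality function N = 3
print(sum(1 for _ in basis_words(X, N, 3)))  # 2 + 4*3 + 8*9 = 86 monomials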
The conformal algebra Conf(X, N) may be presented in a more convenient form as a (left) module over an appropriate associative algebra [20]. Given a set X, let A(X) denote the associative algebra generated by the set B = {∂} ∪ {L^a_n, R^a_n | a ∈ X, n ∈ ℤ₊} relative to the defining relations (5), which include L^a_n∂ − ∂L^a_n − nL^a_{n−1} and R^a_n∂ − ∂R^a_n − nR^a_{n−1}, where a, b ∈ X, n, m ∈ ℤ₊. The free associative conformal algebra Conf(X, N) is a left module over A(X) if we define the action as follows: L^a_n u = a_(n) u, R^a_n u = {u_(n) a} for a ∈ X, n ∈ ℤ₊, u ∈ Conf(X, N). Therefore, Conf(X, N) considered as a left A(X)-module is a homomorphic image of the free left A(X)-module M(X) generated by the set X. It is not hard to find explicitly the kernel of that homomorphism M(X) → Conf(X, N).
Fix a function N : X × X → ℤ₊ and consider the quotient M(X, N) of M(X) relative to the A(X)-submodule generated by the elements (8), where a, b ∈ X. Obviously, there is a homomorphism M(X, N) → Conf(X, N) of A(X)-modules extending x ↦ x. This homomorphism is actually an isomorphism since (5) and (8) imply the relations (9) and (10) in M(X, N), where a, b ∈ X, n ≥ N(a, b), m ∈ ℤ₊, u ∈ M(X). Consider the relations (5)-(10) as rewriting rules in such a way that the first monomial is always the principal one. The terminal words in M(X) of the rewriting system obtained are
∂^s L^{a₁}_{n₁} L^{a₂}_{n₂} … L^{a_k}_{n_k} a_{k+1}, k ∈ ℤ₊, aᵢ ∈ X, 0 ≤ nᵢ < N(aᵢ, a_{i+1}), s ∈ ℤ₊.
The images of these words in Conf(X, N) are linearly independent by Proposition 1; hence these terminal words form a linear basis of M(X, N) ≃ Conf(X, N). It follows from the definition of the action of A(X) on Conf(X, N) that every conformal ideal of Conf(X, N) is an A(X)-submodule and vice versa. Hence we may replace the study of conformal ideals with the study of "ordinary" submodules.
Example 1 ([8]). Let us determine the structure of an associative conformal algebra C generated by the set X = {a} relative to N = N(a, a) = 2 with one defining relation a_(1) a − ∂(a_(0) a).
The algebra A(X) is generated by L_n = L^a_n, R_n = R^a_n, and ∂ satisfying (5), considered as rewriting rules. Similarly, define the free conformal algebra Conf(X, N) as a module over A(X) generated by a single element a relative to the rewriting rules (8); the compositions (10) of these relations include several further rules. The defining relation a_(1) a − ∂(a_(0) a) is naturally written as the rewriting rule (11): L₁a → ∂L₀a. Consider the composition of R₂L₁ → L₁R₂ and (11) relative to w = R₂L₁a. Computing the two reductions of w and comparing them, we should add a new rewriting rule. The latter has a composition with (11) relative to w = L₀L₁a; hence, we should add ∂L₀L₀a → 0.
Next, consider the composition of R₁L₁ → L₁R₁ and (11) relative to w = R₁L₁a. In a similar way, we obtain that −L₁L₁a and L₀L₀a are connected by a (non-oriented) path, so we add the corresponding rule (the choice of the principal part is arbitrary here, since we have not fixed an order on the words).
Let us calculate the composition of R₀L₁a → L₁R₀a and (11) relative to w = R₀L₁a in more detail; the resulting rule should be added as well. There also exist compositions between (10) and (11). For example, the composition of L₂L₁a → 0 and (11) is trivial. However, the composition of (11) and L₃L₁a → 0 is not trivial; hence, we should add L₁L₁a → 0 (15), to be considered together with (13).
3.4. Universal associative conformal envelopes of Lie conformal algebras.
Suppose L is a Lie conformal algebra generated by a set X. Thus L is a quotient of an appropriate free Lie conformal algebra by the ideal generated by a set Σ of defining relations stated in terms of Lie conformal operations [x (n) y]. The structure of free Lie conformal algebras was described in [25].
For a given function N : X × X → ℤ₊, the universal enveloping associative conformal algebra U(L; X, N) of L relative to the locality level N on X is defined as the quotient of Conf(X, N) relative to the same defining relations Σ, rewritten by the rules
[x_(n) y] ↦ x_(n) y − Σ_{s ≥ 0} (−1)^{n+s} (1/s!) ∂^s (y_(n+s) x),
where the upper limit of the summation is determined by the Dong Lemma.
The main purpose of this paper is to study universal enveloping associative conformal algebras for Kac-Moody conformal algebras. The latter are central extensions of current Lie conformal algebras. For this particular class of problems, the Gröbner-Shirshov bases method described above may be slightly modified. The main advantage of the modification is that the relations (10) become not necessary.
Suppose L is a Lie conformal algebra with an H-torsion part L₀ such that the torsion-free quotient L₁ = L/L₀ is a free H-module (for example, every finite Lie conformal algebra has this property). Assume X = X₁ ∪ X₀, where X₁ is an H-basis of L₁ and X₀ is a k-basis of L₀. Then the structure of L is completely determined by relations of the form
f_e(∂)e = 0, e ∈ X₀, and [x_(n) y] = Σ_{z ∈ X₁} f^{n,z}_{x,y}(∂)z + Σ_{e ∈ X₀} g^{n,e}_{x,y}(∂)e, x, y ∈ X₁,
for appropriate f_e, f^{n,z}_{x,y}, g^{n,e}_{x,y} ∈ k[∂]. These relations describe the structure of L₀ as a torsion H-module, the multiplication table in the Lie conformal algebra L₁, and the structure of the extension 0 → L₀ → L → L₁ → 0. Then, for a given function N : X₁ × X₁ → ℤ₊, the conformal algebra U(L; X, N) may be considered as an ordinary left module over the associative algebra A(X; L) generated by {∂, L^x_n, R^x_n | n ∈ ℤ₊, x ∈ X₁} relative to the defining relations (5) (for a ∈ X₁) along with the relations (17), where L^{∂z}_n is naturally understood as −nL^z_{n−1}. The relations (17) reflect the property (3) of associative conformal algebras. So U(L; X, N) is a left module over A(X; L) generated by the entire set X relative to the relations (8) (for a, b ∈ X₁) together with f_e(∂)e, e ∈ X₀, and L^a_n e, R^a_n e, a ∈ X₁, e ∈ X₀, n ∈ ℤ₊. Since the defining relations of A(X; L) already form a Gröbner-Shirshov basis, in order to determine the structure of U(L; X, N) one needs to find a confluent system of rewriting rules in this A(X; L)-module. In the next section, we solve this problem for a Kac-Moody conformal algebra: K(g), with the bracket [a_λ b] = [a, b] + λ (a|b) e for every a, b ∈ g, is a Lie conformal algebra with 1-dimensional torsion ke and torsion-free image isomorphic to Cur g.
Let us fix a linear basis X₁ of g. Then X = X₁ ∪ {e} is a generating set of K(g). The purpose of this section is to calculate a Gröbner-Shirshov basis for U = U(K(g); X, N) with N = 3 and to prove the Poincaré-Birkhoff-Witt Theorem for this universal enveloping associative conformal algebra.
According to the scheme described in the previous section, U is a module over the associative algebra A = A(X; K(g)) generated by the set B = {∂, L^a_n, R^a_n | a ∈ X₁, n ∈ ℤ₊} modulo the relations L^a_n∂ − ∂L^a_n − nL^a_{n−1}, R^a_n∂ − ∂R^a_n − nR^a_{n−1}, together with the remaining relations of type (17). The set of generators of U as an A-module is X = X₁ ∪ {e}, and the defining relations of this module are L^a_n b, R^a_n b for n ≥ N = 3, and L^a_n e, R^a_n e for n ≥ 0, for all a, b ∈ X₁. In order to translate these defining relations into rewriting rules we need to choose a principal monomial in each relation. The choice of principal parts affects the resulting system of rewriting rules obtained in the process of adding compositions, similar to Example 1.
We will always choose the principal term of a rewriting rule as the leading monomial relative to an appropriate order ≤ on the monomials in the free k⟨B⟩-module generated by X. Namely, suppose the set X₁ is linearly well-ordered and e < X₁. Induce an order on B by assuming L^a_n < R^b_m, and L^a_n < L^b_n iff a < b, for a, b ∈ X₁ (this ordering turns out to be the most convenient for our purpose). Extend the order to the set of monomials in B* by the deg-lex principle, i.e., first compare the lengths and then lexicographically.
For two monomials ux and vy in k⟨B⟩X, u, v ∈ B*, x, y ∈ X, set ux < vy iff (u, x) is lexicographically less than (v, y).
Proof. First, we will show how to derive the rules (31)-(37) as compositions of the initial relations. Next, we will check the triviality of compositions obtained in further iterations.
Since the calculations are routine, we will state them in detail only for several particular cases; the other cases are essentially the same and may be processed in a similar way.
For the purpose of clarity, we will use brief notation to indicate the rule applied for rewriting (e.g., (RL) stands for R^a_m L^b_n → L^b_n R^a_m, n, m ≥ 0; (∂L₂) stands for ∂L^a_2 b → L^a_1 b + L^b_1 a − (a|b)e; etc.).
The rule (31) for s = 2 appears from the intersection of (∂L₁) and (L₁∂); then, by induction, the intersection with (∂L₁) produces (31) for s > 2. The rule (32) derives fairly simply from (∂L₂) in (30) and (L_n∂) for n = 2. The next example of an intersection, of (L₁∂) and (RL), produces the rest of the required rules. On the one hand, we have L^b_1∂R^a_n c + nL^b_1R^a_{n−1}c; on the other hand, the same word reduces to a similar expression with b and c interchanged. In order to apply (L₁∂) we have to assume b > c. However, the composition obtained by subtracting the right-hand sides of the two expressions above is
L^b_1∂R^a_n c + nL^b_1R^a_{n−1}c − L^c_1∂R^a_n b − nL^c_1R^a_{n−1}b + 3L^c_0R^a_n b − 3L^b_0R^a_n c + 2R^a_n[b, c], (38)
and it is (skew-)symmetric relative to the permutation of b and c. Hence, we may assume that the relation (38) holds in U for every a, b, c ∈ X₁.
For n ≥ 4 the composition is trivial due to the locality. For n = 3, apply the rules (R₃) and (R₂) to simplify it; for a fixed order on a, b, c ∈ X₁, use (L₂) if necessary to obtain (33) or (34). Consider (38) for n = 2. For convenience of the exposition, let us split the polynomial into two summands and process the summands separately. Therefore, (40) modulo (L₂) implies a relation (41) for all a, b, c ∈ X₁. Now we can switch a and b in (41) and subtract the relation obtained from (41); in this way, we actually apply the rule (LL) without fixing an order on a, b ∈ X₁. As a result, we obtain a new relation; let us apply (LL) and (L₂) to write it in a more convenient form. Given a fixed order on a, b, c ∈ X₁, use (LL) and (L₂) if necessary to obtain (35) or (36). For n = 1, proceed with (38) in a similar way. Once the expressions are subtracted, the terms L^c_1L^b_1∂a cancel among other similar terms, and the result may be rewritten via (L₂∂) and (∂L₂). The resulting expression is symmetric in a, b, c ∈ X₁, so if we fix the order c < b < a, the principal part of the relation obtained is L^a_0L^b_1c, and the rule (37) follows. For n = 0, consider the corresponding specialization of (38). Without loss of generality, assume b > c; then the two summands in the relation may be rewritten so that the composition (38) for n = 0 is trivial due to the skew-symmetry and the Jacobi identity on [·, ·].
In order to finish the proof one needs to check that the family of rewriting rules obtained is complete, i.e., forms a Gröbner-Shirshov basis. Let us consider several intersections as examples; the other possible intersections may be processed in a similar way.
As a first example, consider the intersection of (33) and (∂L₁). Subtract the relations obtained to get a composition (43). If b < c then apply (36) to rewrite the composition (43); the result reduces to zero by (L₂). If b = c then apply (35) and (L₂) to reduce (43) to zero. As a more complicated example, consider the intersection of (RL) with (35), computing the two reductions of the common word; here a ≤ c < b, as in (35). The composition obtained should be considered for different values of n. If n ≥ 3 then all terms reduce to zero by the locality. For n = 2, the resulting expression is trivial modulo (39) and thus reduces to zero by means of (33) or (34), depending on the order on b, c, d.
For n = 1, the terms containing e annihilate under the action of L_n; continue reducing (46) with the rules (LL) and (41), and the expression reduces to zero.
For n = 0, first continue reducing (44) and (45). Now subtract the relations obtained and apply the Jacobi identity to rewrite the result. The first group of terms is exactly (41) under the action of L^d_0; the other three groups coincide with (42). Therefore, the composition reduces to zero by means of (35) and (36).
Observe that the principal parts of the rewriting rules from the Gröbner-Shirshov basis found in Theorem 2 depend neither on the multiplication table of the original Lie algebra g nor on the choice of the form (·|·). In particular, if g is an abelian Lie algebra and (x|y) = 0 for all x, y ∈ g then K(g) is an abelian Lie conformal algebra and U coincides with the 1-dimensional split null extension 0 → ke → U → Com Conf(X₁, N = 3) → 0 of the free commutative conformal algebra Com Conf(X₁, N = 3) generated by a linear basis X₁ of g relative to the locality function N(x, y) = 3, x, y ∈ X₁.
Hence, a linear basis of Com Conf(X₁, N = 3) consists of all those conformal monomials described in Proposition 1 that are terminal relative to the rewriting rules stated in Theorem 2.
The conformal algebra U has a natural filtration by degree in X. Denote by gr U the corresponding associated graded conformal algebra.
Note that every rule in Theorem 2 has the following property: all terms of highest degree in X depend neither on [·, ·] nor on (·|·). Therefore, if we choose two basic monomials of U described by Corollary 3 and rewrite their conformal product as a linear combination of basic monomials, then the terms of highest degree in the expression obtained are the same as those we get for the product of the same monomials in Com Conf(X₁, N = 3).
As a result, we obtain the following analogue of the Poincaré-Birkhoff-Witt Theorem.
Let us derive another corollary of Theorem 2. Since the case of locality N = 3 has already been considered, we may add a new series of relations to U(K(g); X, N = 3) to get a Gröbner-Shirshov basis of the universal enveloping associative conformal algebra U(K(g); X, N = 2). Namely, it is enough to add the corresponding locality relations to the Gröbner-Shirshov basis calculated in Theorem 2 and compute all intersections. Since there are no essentially new manipulations, let us just list the resulting relations.
The list of rewriting rules obtained in Theorem 3 has the same properties as in Theorem 2: the principal parts, as well as all summands of maximal degree in each rule, depend neither on the multiplication table of g nor on (·|·). Hence, the analogue of Corollary 4 holds for N = 2. In order to complete the description of U(K(g); X, N = 2) we need to state explicitly the set of reduced monomials in the free commutative conformal algebra of locality N = 2. | 2020-09-28T01:01:07.167Z | 2020-09-25T00:00:00.000 | {
"year": 2020,
"sha1": "4f74dc833c9719857efcfd4bcce5355a78458388",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2009.12062",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "d9ddf82b181a9d74cdc3fd7c58984c427068724b",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
248264104 | pes2o/s2orc | v3-fos-license | A loss-of-function IFNAR1 allele in Polynesia underlies severe viral diseases in homozygotes
Bastard et al. report a loss-of-function IFNAR1 variant that is surprisingly common (allele frequency >1%) in individuals of western Polynesian ancestry, while it is absent or extremely rare elsewhere. Homozygotes for this allele are prone to severe viral diseases.
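For a rough sense of scale: under the simplifying assumption of Hardy-Weinberg equilibrium (a back-of-the-envelope illustration of ours, not a computation from the paper), an allele frequency q implies an expected homozygote frequency of q²:

# Expected homozygote frequency under Hardy-Weinberg equilibrium
for q in (0.01, 0.02, 0.05):
    print(f"allele frequency {q:.2f} -> ~{q**2:.4%} homozygotes "
          f"(~{q**2 * 100_000:.0f} per 100,000)")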
Introduction
Inborn errors of genes controlling the cellular response to type I IFNs underlie life-threatening viral diseases. IFNAR1 and IFNAR2 encode the two chains of the type I IFN receptor (Duncan et al., 2015;Hernandez et al., 2019); JAK1 and TYK2 encode the constitutively associated tyrosine kinases (Eletto et al., 2016;Minegishi et al., 2006); and STAT1, STAT2, and IRF9 encode the three components of ISGF-3, a multimeric transcription factor that binds to IFN-stimulated response elements (ISREs) to mediate the induction of IFN-stimulated genes (ISGs; Dupuis et al., 2003;Hambleton et al., 2013;Hernandez et al., 2018). Autosomal recessive (AR) complete STAT1, STAT2, and IRF9 deficiencies impair cellular responses not only to type I IFNs, but also those to type II IFNs (for STAT1) and/or type III IFNs (for STAT1, STAT2, and IRF9). AR complete TYK2 deficiency impairs cellular responses to IL-12 and IL-23 more profoundly than the response to type I IFNs (Kreins et al., 2015), whereas no complete defect of JAK1 has been reported (Eletto et al., 2016). Patients with any of these four recessive inborn errors of immunity underlying a complete deficiency of cellular responses to type I IFNs are prone to various viral diseases, with AR STAT1 deficiency underlying the most diverse and severe set of viral diseases, including those caused by live attenuated virus (LAV) vaccines (Gothe et al., 2021;Poyhonen et al., 2019).
The crucial role of type I IFNs in susceptibility to LAV, herpes simplex virus 1 (HSV-1), and SARS-CoV-2 was revealed by the discovery of patients with AR IFNAR1 or IFNAR2 deficiency (Casanova and Abel, 2021). AR complete IFNAR1 deficiency was discovered in a 9-yr-old Iranian boy who suffered severe measles, mumps, and rubella (MMR) vaccine disease at the age of 1 yr (Hernandez et al., 2019). Nine other patients with suspected (n = 1) or confirmed (n = 8) IFNAR1 deficiency have been reported: a 14-yr-old child from Brazil with yellow fever virus (YFV) live vaccine-associated disease (Hernandez et al., 2019); a 2-yr-old child from Palestine with HSV-1 encephalitis (HSE); two cousins, aged 1 and 17 yr, with fatal complications following MMR vaccination and deafness following mumps, respectively (Bastard et al., 2021b); a 15-mo-old boy with severe disease following MMR vaccination (Gothe et al., 2022); a 13-yr-old boy from Saudi Arabia with critical COVID-19 pneumonia (Khanmohammadi et al., 2022); two adults, aged 26 and 38 yr, from Saudi Arabia and Turkey, with critical COVID-19 pneumonia ; and one 3-yr-old child from Iran with critical COVID-19 pneumonia and features of multisystem inflammatory syndrome (Abolhassani et al., 2022).
One patient with IFNAR2 deficiency and MMR disease has been described (Duncan et al. 2015), with two other patients more recently reported to have suffered from YFV vaccine or MMR disease (Bastard et al., 2021c;Passarelli et al., 2020). The clinical penetrance of IFNAR1 and IFNAR2 deficiencies for severe and life-threatening disease appears to be high for adverse reactions to LAV, including the YFV and MMR vaccines, and critical COVID-19 pneumonia (Bastard et al., 2021c;Duncan et al., 2015;Gothe et al., 2022;Hernandez et al., 2019;Khanmohammadi et al., 2022;Passarelli et al., 2020;Zhang et al., 2018). By contrast, penetrance seems to be lower for severe disease caused by at least some WT viruses, such as HSE (Bastard et al., 2021b). This is consistent with the surprisingly narrow range of severe and life-threatening viral illnesses in these patients, individually and collectively (Meyts and Casanova, 2021). However, it is not possible to draw any firm conclusion about the penetrance of IFNAR1 deficiency for most viral diseases at the moment, owing to the small number of patients diagnosed and the associated ascertainment bias.
The essential and nonredundant roles of human type I IFNs in antiviral defense were further delineated by the discovery of autoantibodies neutralizing type I IFNs, especially the 13 IFN-α and/or IFN-ω, and less frequently IFN-β, in ≥15% of adults with life-threatening COVID-19 pneumonia (Bastard et al., 2021a; Bastard et al., 2020). These autoantibodies were also found to underlie adverse reactions to the YFV vaccine in three adults (Bastard et al., 2021c). Interestingly, the patients with critical COVID-19 pneumonia or YFV vaccine disease, including many elderly patients, were previously healthy and had not suffered from unusually severe viral diseases. These findings are reminiscent of those for patients with inherited IFNAR1 or IFNAR2 deficiency. Together with our previous observation of previously healthy 26- and 38-yr-old IFNAR1-deficient patients with critical COVID-19, these findings raise the possibility that inborn errors of type I IFN responses might be clinically silent for many years, or even decades in some patients, until exposure to specific viral pathogens. We studied seven children from five unrelated kindreds of western Polynesian ancestry who developed severe or fatal complications following exposure to LAV and/or WT viruses.
Life-threatening viral diseases in seven patients of western Polynesian ancestry
We studied seven children from five unrelated kindreds with severe or life-threatening clinical disease temporally associated with exposure to LAV and/or WT viruses (Fig. 1 A; Table 1, Tables S1, S2, S3, and S4). In brief, patient 1 (P1), from kindred A, was a previously healthy 12-mo-old girl of western Polynesian ancestry with non-consanguineous parents who presented with a pronounced injection-site reaction 5 d after her first MMR vaccination. She had previously tolerated live attenuated bacillus Calmette-Guerin (BCG) vaccine and oral polio vaccine. Her condition deteriorated, with fever, thrombocytopenia, anemia, splenomegaly, marked hyperferritinemia, dyslipidemia, coagulopathy, and multiorgan failure, but with normal lymphocyte subsets. She was diagnosed with and treated for hemophagocytic lymphohistiocytosis (HLH) on day 14, but she died 18 d after MMR vaccination, at which time positive PCR results were obtained for vaccine-strain MMR viruses and human herpesvirus 6 (HHV6) on whole blood and a nasopharyngeal swab. Postmortem examination confirmed extensive hemophagocytosis and abnormal histiocytic infiltrates and revealed widespread giant multinucleate Warthin-Finkeldey cells (WFCs) associated with measles infection (Fig. 1 B; Nozawa et al., 1994). The patient's older brother (P2) had also died at 12 mo of age, 21 d after his first MMR vaccination, with a similar clinical presentation. [Figure 1 legend, excerpt: (B) WFC detection in lymphoid tissues (Laksono et al., 2016): in the tonsils and adenoids during prodromal measles (Nozawa et al., 1994) and in the regional LNs after immunization and after fatal measles infection (Becroft and Osborne, 1980); scale bar, 400 µm. (C) Sanger sequencing results for IFNAR1 in leukocyte gDNA from the patients, their parents, and healthy controls. (D) Schematic diagram of the WT and mutant (MT) IFNAR1 proteins; SD1-4, extracellular subdomains 1-4; SP, signal peptide; TM, transmembrane region; the mutation reported here is indicated in red, and the previously reported mutations in violet.]
Patient 3 (P3), from kindred B, was a 15-mo-old girl born to non-consanguineous parents who were also of western Polynesian ancestry, predominantly of Tongan and Niuean descent; one great-great-grandfather was New Zealand Māori. P3 had presumed enteroviral meningoencephalitis at the age of 2 mo and a mild developmental delay, but otherwise normal neurological function. She presented with encephalopathy 11 d after initial exposure to the LAVs in the MMR/V vaccine. She developed HLH-like symptoms of fever, hyperferritinemia, high levels of soluble CD25 (sCD25), organomegaly, and lymphadenopathy, and her bone marrow displayed signs of hemophagocytosis. She also developed bilateral arthritis. Steroid treatment was initiated, and the patient then developed PCR-positive cutaneous varicella reactivation that responded to acyclovir. Her peripheral blood subsequently tested positive, by PCR, for the vaccine-strain measles virus. Retrospective PCR tests on joint fluid were positive for mumps virus, and those on cerebrospinal fluid (CSF) were positive for vaccine-strain measles and mumps viruses 8 wk after MMR/V vaccination. Encephalopathy persisted, and P3 deteriorated further, dying 72 d after initial presentation.
Patient 4 (P4), from kindred C, was a 13-mo-old boy born at 24 wk of gestation to non-consanguineous parents of Niuean ancestry. Premorbid conditions included stable chronic lung disease of prematurity and mild developmental delay without seizure disorder. 14 d after receiving the first dose of MMR vaccine, P4 developed status epilepticus, encephalopathy, and acute respiratory distress syndrome. This presentation was accompanied by fever, thrombocytopenia, anemia, hepatomegaly, hepatitis, hyperferritinemia, and high levels of sCD25. PCR on CSF was positive for vaccine-strain mumps virus, and PCR on peripheral blood was positive for vaccine-strain measles and mumps viruses. Patient 6 (P6), from kindred D, is a 4-yr-old boy born to Niuean and Tongan parents. He suffered recurrent viral pneumonia in the first year of life but was otherwise reported to be developing normally. At the age of 14 mo, he received the MMR vaccine, and, 5 d later, he developed fever, lymphadenopathy, a generalized rash, and symptoms of viral encephalitis associated with CSF pleocytosis. Brain magnetic resonance imaging (MRI) findings were normal and, 4 wk after MMR vaccination, PCR on a nasopharyngeal aspirate (NPA) was positive for measles virus and negative for all other viruses tested. Following recovery, there were concerns about new-onset gross motor, hearing, and language delay. At the age of 21 mo, P6 developed viral pneumonia associated with positive PCR results for rhinovirus, parainfluenza, RSV, and bocavirus on NPA. He required venovenous extracorporeal membrane oxygenation (ECMO). He has since been diagnosed with profound bilateral sensorineural hearing loss requiring cochlear implants, global developmental delay (315.8, Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition [DSM-5]), and autistic spectrum disorder (299.00, DSM-5), and he currently requires substantial support.
Patient 7 (P7), from kindred E, is a 13-yr-old girl born at term, in New Zealand, to non-consanguineous parents of Samoan ancestry, following an uneventful pregnancy. At 10 mo of age she presented with hemophilus influenza type B (Hib) bacteremia and meningitis. This illness was complicated by seizures and bilateral infected subdural hygromas requiring multiple washouts. This illness was probably due, in part, to the patient not having been vaccinated before this infection. At the age of 12 mo, 2 wk after her discharge from hospital following her Hib illness, she again presented with rapidly progressive respiratory and multiple-organ failure requiring 9 d of ECMO support. RSV was identified on PCR and immunofluorescence analyses. P7 moved to Australia at the age of 3 yr. She received the MMR vaccine at 4 yr of age, and the MMR/V vaccine 6 mo later, without complications. She remained relatively well until she presented at the age of 7 yr with acute respiratory distress syndrome and multiorgan failure requiring inotropic support. She was intubated for a total of 33 d and required escalation to high-frequency oscillation ventilation and nitric oxide treatment. Imaging showed extensive air-space opacification on a background of extensive bronchiectasis. No causal organism was identified.
LAVs were associated with severe complications in six of the seven children, three of whom died. The remaining patient tolerated LAV exposure. Features of hyperinflammation were present in five patients, possibly due to excessive production of cytokines other than type I IFNs (Gothe et al., 2022;Passarelli et al., 2020).
A homozygous predicted loss-of-function (pLOF) variant of IFNAR1 in the seven patients

We performed whole-exome sequencing (WES) on P1. Principal component analysis suggested that the patient was of possible Polynesian ancestry (Fig. S1 A), and WES data confirmed that the parents were non-consanguineous. We analyzed the WES results, focusing on known monogenic defects compromising cellular responses to type I IFNs (IFNAR1, IFNAR2, JAK1, TYK2, STAT1, STAT2, and IRF9; Bastard et al., 2021b; Duncan et al., 2015; Duncan et al., 2021; Dupuis et al., 2003; Eletto et al., 2016; Hambleton et al., 2013; Hernandez et al., 2019; Hernandez et al., 2018; Minegishi et al., 2006). We searched for compound heterozygous or homozygous single-nucleotide variants or large deletions (copy number variants; Fig. S1 B). We also analyzed all genes underlying known inborn errors of immunity (Notarangelo et al., 2020; Tangye et al., 2020) and searched for homozygous or compound heterozygous variants of any of the protein-coding genes considered that were pLOF. We found that P1 carried a homozygous variant of IFNAR1 (GRCh37 NC_000021.8:g.34725076G>T), leading to the creation of a premature stop codon, p.Glu386*. P1 also carried a heterozygous missense variant of XIAP, GRCh37 NC_000023.10:g.123019849G>C, p.(Gly113Arg), which was predicted to be benign and was confirmed experimentally to be biochemically neutral (Fig. S2, A-H). Sanger sequencing of IFNAR1 in the index patient and her parents was consistent with an AR mode of inheritance (Fig. 1 C). We also sequenced IFNAR1 in the other six affected patients (P2-P7) by Sanger sequencing and next-generation sequencing methods, as part of clinical care. We found that all six patients were homozygous for the same IFNAR1 variant as P1. This pLOF variant is predicted to encode a truncated protein lacking the transmembrane and intracellular domains (Fig. 1 D). These findings suggest that the seven patients had AR IFNAR1 deficiency, which was causal for the severe or fatal adverse events temporally associated with exposure to LAV and/or WT viruses.
The p.Glu386* IFNAR1 protein is not expressed at the cell surface and is loss-of-function
We then studied the expression of WT or p.Glu386* IFNAR1 following the transient transfection of HEK293T cells with plasmids containing the corresponding cDNAs. As a control, we used the previously reported p.Val225Alafs*228 IFNAR1 variant (referred to here as p.V225fs; Hernandez et al., 2019). Quantitative RT-PCR (RT-qPCR) revealed that IFNAR1 mRNA levels were similar for the WT and p.Glu386* forms (Fig. 2 A). Western blotting of extracts of these cells with an antibody specific for the N-terminal region of IFNAR1 revealed a truncated protein for p.Glu386* IFNAR1, migrating at a lower molecular weight than the WT IFNAR1 (Fig. 2 B). We then analyzed cell surface levels of WT and p.Glu386* IFNAR1 by FACS and confocal microscopy. In these overexpression systems, p.Glu386* IFNAR1 was not detected at the plasma membrane (Fig. 2, C and D). Finally, we used the WT or p.Glu386* IFNAR1 cDNA to transfect IFNAR1-deficient HEK293T cells generated by CRISPR/Cas9-mediated gene editing. Following transfection with a reporter gene containing five ISREs and stimulation with IFN-α2, luciferase activity levels in the cells expressing WT IFNAR1 were 10 times higher than those in nontransfected cells or cells transfected with an empty vector. By contrast, HEK293T cells transduced with the p.Glu386* variant displayed a total absence of luciferase activity in response to IFN-α2, consistent with the response of cells transduced with the p.V225fs IFNAR1 variant (Fig. 2 E). Thus, the p.Glu386* IFNAR1 variant encodes a truncated protein that is LOF and not expressed on the cell surface. These in vitro studies of the p.Glu386* allele, in isolation and by overexpression, also suggest that all seven patients had AR IFNAR1 deficiency.
Figure 2. The IFNAR1 p.Glu386* variant results in a truncated protein that is not expressed at the cell surface and is loss-of-function. (A) IFNAR1 mRNA levels, determined by RT-qPCR, in HEK293T cells transiently transfected with WT or MT IFNAR1 cDNA constructs; β-glucuronidase (GUS) was used as an expression control. EV, empty vector; NT, nontransfected; p.V225fs, variant from a previously reported IFNAR1−/− patient. Error bars indicate SD. (B) Western blot of IFNAR1 in HEK293T cells transiently transfected with WT and mutant IFNAR1 cDNA constructs. An antibody recognizing the IFNAR1 protein was used. GAPDH was used as a loading control. One blot representative of two independent experiments is shown. (C) Immunofluorescence staining as assessed by confocal microscopy in HeLa cells transiently transfected with IFNAR1 cDNA constructs. An antibody against the N-terminus of IFNAR1 was used (green), and membranes were stained with wheat-germ agglutinin (WGA; purple). The nuclei were stained with DAPI (blue). Scale bar represents 10 µm. The images shown are representative of two independent experiments. (D) Graphical representation of extracellular FACS staining and the mean fluorescence intensity (MFI) for IFNAR1 in HEK293T cells transiently transfected with IFNAR1 cDNA constructs, performed with an antibody recognizing the N-terminus of the protein. Cells were not permeabilized. Results representative of three independent experiments are shown. Error bars indicate the SD. (E) Luciferase activity after IFN-α2 stimulation in IFNAR1−/− HEK293T cells generated with CRISPR/Cas-9 technology and transiently transfected with WT or MT IFNAR1 cDNA constructs. The bars represent the means and SEM of the results obtained in three independent experiments. Ctrl, control; RLU, relative light units. Source data are available for this figure: SourceData F2.
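The ISRE-luciferase readout described above reduces to a simple fold-induction calculation. The sketch below is illustrative only: the RLU values are hypothetical (the study averaged three independent experiments); it merely shows how the reported ~10-fold induction for the WT construct, and its absence for p.Glu386*, would be computed.

```python
# Minimal sketch of the fold-induction calculation implied by the ISRE-luciferase
# readout; the numbers below are illustrative, not the paper's raw data.

def fold_induction(rlu_stimulated: float, rlu_unstimulated: float) -> float:
    """Fold change in reporter activity after IFN-alpha2 stimulation."""
    return rlu_stimulated / rlu_unstimulated

# Hypothetical raw luciferase counts (relative light units) per construct.
rlu = {
    "EV":        {"unstim": 1_000, "IFN-a2": 1_100},
    "WT":        {"unstim": 1_000, "IFN-a2": 10_500},
    "p.Glu386*": {"unstim": 1_000, "IFN-a2": 1_050},
}

for construct, values in rlu.items():
    print(construct, round(fold_induction(values["IFN-a2"], values["unstim"]), 1))
# Only the WT construct shows ~10-fold induction, mirroring the rescue result.
```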
The cells from one patient do not express IFNAR1 and do not respond to type I IFNs
We then generated and tested SV40-transformed dermal fibroblasts for P3. We first showed, by RT-PCR, that they produced only small amounts of IFNAR1 mRNA (Fig. 3 A). No IFNAR1 protein was detected, by FACS, at the cell surface (Fig. 3 B), whereas IFNAR2 levels were normal (Fig. S3 A). The binding of type I IFNs to the receptor complex leads to the activation of the constitutively associated TYK2 and JAK1 tyrosine kinases, and then to the phosphorylation of STAT1 and STAT2, which associate with IRF9 to form the trimeric ISGF3 complex. The activated ISGF3 complex then migrates to the nucleus, where it promotes the expression of ISGs. We therefore assessed the cellular responses of the patient's cells to type I IFN (100 ng/ml of IFN-α2). STAT1 phosphorylation in response to IFN-α2 stimulation was completely abolished in the patient's fibroblasts, as reported for a previously described IFNAR1-deficient patient (Fig. 3 C; Hernandez et al., 2019). We then confirmed the defective response to type I IFNs by measuring the induction of transcription for the MX1 ISG. Stimulation with type I IFNs (IFN-α2, IFN-ω, and IFN-β) induced MX1 and CXCL9 mRNA in control SV40-fibroblasts, but this response was abolished in SV40-fibroblasts from P3 (Fig. 3 D and Fig. S3 B). Importantly, STAT1 phosphorylation and MX1 induction in response to IFN-γ were intact in the SV40-fibroblasts of P3 (Fig. 3, C and D). Finally, the lack of IFN-α/ω/β-mediated ISG induction in SV40-fibroblasts from P3 was rescued by the overexpression of WT, but not p.Glu386* or p.V225fs IFNAR1 (Fig. 3 E). Overall, the patient's cells had an abolished response to type I IFNs but responded normally to type II IFN. As all seven patients carried the same IFNAR1 variant in the homozygous state, these findings are consistent with the conclusion that all these patients had AR complete IFNAR1 deficiency.
The p.Glu386* IFNAR1 variant is found at an allele frequency of 1.25% in Samoa
We then investigated the population genetics landscape of the variant. The IFNAR1 g.34725076G>T variant is absent from gnomAD v2, but is present in one heterozygous individual in v3.1 of this public database (https://gnomad.broadinstitute.org/variant/21-33352770-G-T?dataset=gnomad_r3). No pLOF variants of IFNAR1 in the homozygous state are present in this database, which includes only seven missense variants in homozygosity (Fig. S4). The five kindreds we report were unrelated to each other, and the parents were all nonconsanguineous and of western Polynesian origin. Western Polynesia is a geographic area thought to be the homeland of the ancestral Polynesian society (Addison and Matisoo-Smith, 2010; Vinton Kirch, 2017). It includes Tonga, the Independent State of Samoa, American Samoa, and Niue (Fig. 4). We assessed the prevalence of the IFNAR1:p.Glu386* LOF variant in various populations across the Pacific. We analyzed a cohort of 1,285 adult Samoans from the Soifua Manuia ("good health" in Samoan) study who had undergone whole-genome sequencing as part of the Trans-Omics in Precision Medicine (TOPMed) Whole-Genome Sequencing Program (Harris et al., 2020; Taliun et al., 2021). In this cohort, IFNAR1:c.1156G>T p.(Glu386*) (rs201609461) had a frequency of 0.0125 (95% CI: 0.0085, 0.0175). Note that the rs201609461 designation applies to the two observed alternative alleles: G>T and G>A. Only the frequency of G>T is of relevance here. We observed 1,253 G-allele homozygotes, 32 heterozygotes, and no T-allele homozygotes. We found no evidence for a deviation of the distribution of genotypes from Hardy-Weinberg equilibrium (exact test, P = 1). Under Hardy-Weinberg assumptions, the carrier frequency, as estimated from the sequenced alleles, is 0.0246, or 1:41 individuals, and the estimated frequency of the homozygous genotype is 0.000155, or 1:6,450 individuals. A bias in the allele frequencies calculated is possible, due to the presence of related individuals. We therefore also examined a subset of 827 unrelated individuals among the Soifua Manuia study participants. In this subset, G>T had a frequency of 0.0109 (95% CI: 0.0065, 0.0171), and there were 809 G-allele homozygotes, 18 heterozygotes, and no T-allele homozygotes.
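The carrier and homozygote estimates above follow directly from the reported genotype counts under Hardy-Weinberg assumptions. A minimal sketch reproducing that arithmetic is shown below; it substitutes a chi-square approximation for the exact Hardy-Weinberg test the authors used.

```python
from scipy.stats import chi2

# Genotype counts reported for the 1,285 sequenced Soifua Manuia participants.
n_GG, n_GT, n_TT = 1253, 32, 0
n_ind = n_GG + n_GT + n_TT
n_alleles = 2 * n_ind

# Minor (T) allele frequency.
p = (2 * n_TT + n_GT) / n_alleles            # ~0.0125

# Hardy-Weinberg expected genotype counts and a chi-square statistic
# (the paper used an exact test; chi-square shown here for simplicity).
exp_GG = (1 - p) ** 2 * n_ind
exp_GT = 2 * p * (1 - p) * n_ind
exp_TT = p ** 2 * n_ind
chi_sq = sum((o - e) ** 2 / e for o, e in
             [(n_GG, exp_GG), (n_GT, exp_GT), (n_TT, exp_TT)])
p_hwe = chi2.sf(chi_sq, df=1)                # large P, consistent with HWE

# Under HWE: carrier (heterozygote) and homozygote frequencies.
carrier_freq = 2 * p * (1 - p)               # ~0.0246, i.e., ~1:41
hom_freq = p ** 2                            # ~0.000155, i.e., ~1:6,450
print(f"MAF={p:.4f} carriers=1:{1/carrier_freq:.0f} homozygotes=1:{1/hom_freq:.0f}")
```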
The distribution of the p.Glu386* IFNAR1 variant on other Pacific islands
We also analyzed an independent cohort of 266 individuals from the geographic transect used to study the peopling history of the Pacific (Bergstrom et al., 2020; Choin et al., 2021), and 1,354 newly sampled individuals from eastern Polynesia (n = 764 for the Society Islands; n = 211 for the Austral Islands; n = 199 for the Marquesas Islands; n = 165 for the Tuamotu Islands; and n = 15 for the Gambier Islands). We also tested independent population samples from a broader region, including western Polynesia (n = 22 from Tonga; n = 70 from the Cook Islands) and Fiji (n = 24; Ioannidis et al., 2021). The variant was not observed in most of the Pacific populations studied. This may reflect limitations relating to recruitment criteria, sample size, and the particular archipelagos sampled. However, the variant was present in the Austral Islands (n = 211, AF = 0.00474), the Society Islands (n = 764, AF = 0.000654), the Marquesas Islands (n = 199, AF = 0.00251), the Cook Islands (n = 70, AF = 0.0143), and Fiji (n = 24, AF = 0.0208; Fig. 4 and Table 2). By contrast, the frequency of G>T in gnomAD 2.1.1 (Karczewski et al., 2020) and TOPMed BRAVO freeze 8 (Taliun et al., 2021) combined is four of a total of 485,756 alleles (AF = 0.00000823). The lack of p.Glu386* detection in several of the Pacific island populations sampled here does not exclude its presence on these islands because, as indicated by the minor allele frequency (MAF) confidence intervals in Table 2, many of our samples were too small to exclude definitively the possibility of the variant being present. Nevertheless, our findings suggest that the MAF of the allele in these populations is <1%, contrasting with the situation in western Polynesia. Our results therefore suggest that the frequency epicenter of the variant may be western Polynesia (Fig. 4 and Table 2), with this variant being exceedingly rare or absent in the other regions of the world tested. From the data presented here, it is also impossible to draw any firm conclusions about the presence, absence, or frequency of the variant in the unsampled populations, including other populations from eastern Polynesia, New Zealand, Micronesia, and, more generally, other regions of the Pacific.
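The point that small samples cannot exclude the variant's presence can be made precise with an exact binomial confidence bound: with zero observations among 2n sampled alleles, the 95% upper bound on the MAF remains substantial for small n. A sketch, assuming the Clopper-Pearson interval (the CIs in the paper's Table 2 may have been computed differently):

```python
from scipy.stats import beta

def maf_ci(alt_count: int, n_alleles: int, conf: float = 0.95):
    """Exact (Clopper-Pearson) confidence interval for an allele frequency."""
    a = 1 - conf
    lo = 0.0 if alt_count == 0 else beta.ppf(a / 2, alt_count, n_alleles - alt_count + 1)
    hi = beta.ppf(1 - a / 2, alt_count + 1, n_alleles - alt_count)
    return lo, hi

# Cohorts in which p.Glu386* was NOT observed: the upper bound shows how
# little a small sample can exclude. Sample sizes are individuals; alleles = 2n.
for island, n_ind in [("Tonga", 22), ("Tuamotu Islands", 165)]:
    lo, hi = maf_ci(0, 2 * n_ind)
    print(f"{island}: 0 of {2 * n_ind} alleles -> 95% upper bound on MAF = {hi:.3f}")
# Tonga (44 alleles) cannot exclude a MAF as high as ~8%.
```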
Discussion
We describe here seven affected individuals from five unrelated kindreds of western Polynesian ancestry who experienced severe LAV and viral infections and were found to be homozygous for the same LOF variant of IFNAR1. There are now 16 reported patients with AR IFNAR1 deficiency from 12 unrelated kindreds (Abolhassani et al., 2022; Bastard et al., 2021b; Gothe et al., 2022; Hernandez et al., 2019; Khanmohammadi et al., 2022), and almost half (7 of 16) have western Polynesian ancestry. There are also three known patients, from three kindreds, in three different countries with AR IFNAR2 deficiency (Bastard et al., 2021c; Duncan et al., 2015; Passarelli et al., 2020). The range of viral illnesses in these patients with type I IFN receptor deficiency is narrower than predicted from previous studies of type I IFNs, which were widely thought to be essential for host defense against many, if not all, viruses (Meyts and Casanova, 2021). The penetrance of severe LAV and WT viral disease appears to be incomplete in patients with IFNAR1 or IFNAR2 deficiency. Severe adverse events following MMR or YFV vaccination have been reported in most cases, but some IFNAR1-deficient individuals may tolerate these vaccines and not be identified due to potential collider or ascertainment bias (Griffith et al., 2020). The severe WT viral diseases occurring in these patients probably include conditions with high penetrance, such as critical COVID-19 pneumonia after SARS-CoV-2 infection, and some conditions with a lower penetrance, such as HSE after HSV-1 infection. The mechanisms underlying incomplete penetrance are unknown but may depend on the viral strain, inoculum, and modifier genes governing other antiviral mechanisms, such as genes involved in the type III IFN pathway. IFNAR1 and IFNAR2 deficiencies should be considered in patients with viral illnesses, such as LAV disease, critical COVID-19 pneumonia, and HSE, but also in patients with other, unexplained severe viral illnesses (Casanova and Abel, 2021). LAV disease may unmask IFNAR1 deficiency, but as clinicians, we nevertheless support the continuation of active MMR immunization for individuals of Polynesian ancestry. In 2019, a measles epidemic in Samoa resulted in 83 deaths, mostly in young unvaccinated children. This outbreak occurred due to low vaccination coverage and ended only after a mass MMR vaccination campaign (Champredon et al., 2020; Craig et al., 2020). No severe adverse events were observed during or after this immunization campaign. This highlights the much higher risk of death in unvaccinated populations due to vaccine-preventable disease than of IFNAR1 deficiency leading to severe complications of MMR vaccination. Individuals with IFNAR1 deficiency have a predicted high susceptibility to the WT measles virus, so a small fraction of severe WT measles cases may actually be due to IFNAR1 deficiency. The impact of IFNAR1 deficiency in this recent epidemic, or in the 1893 Samoan measles epidemic, which resulted in ≥1,000 deaths in a population of 34,500 (Davies, 1894), is unknown.
(Figure 4 legend note: to aid visualization of the variant distribution, a factor-10 correction has been applied to allele frequencies, so that a full pie chart corresponds to MAF = 10% and a quarter of a pie chart to MAF = 2.5%, as denoted in the schematic legend.)
The IFNAR1 p.Glu386* LOF variant is exceedingly rare or undetected in studied populations outside the Pacific, but is present at a higher-than-expected frequency in individuals of Polynesian ancestry, including the inhabitants of Samoa and the Cook, Society, Marquesas, and Austral islands. The carrier observed in Fiji is also of Polynesian ancestry (Ioannidis et al., 2021), so despite the complex composition of the islands, which include populations of mostly Papuan-related ancestry, our findings so far indicate a distribution restricted to Polynesian roots. It is not possible to draw broad conclusions about its frequency in the easternmost parts of eastern Polynesia, New Zealand, Micronesia, and other regions from Near and Remote Oceania based on the data presented here. No pLOF variant of IFNAR1 or IFNAR2 has a frequency higher than 10⁻⁴ in gnomAD, implying that the global frequency of individuals with AR deficiencies of the products of either of these genes is very low, although such genomic databases are known to lack diversity and to be strongly biased toward European ancestries (Sirugo et al., 2019). Like the isolated high prevalence of Huntington disease in Zulia, Venezuela; maple syrup urine disease among the Mennonites of Pennsylvania, USA; or complete achromatopsia on Pingelap, Micronesia, our study suggests that the frequency of IFNAR1 or IFNAR2 deficiency may be much higher in small, geographically circumscribed populations that have experienced strong genetic drift due to serial founder effects (Petchey, 2001). We previously reported evidence of an extreme bottleneck in Samoans that lasted for 2,000 yr, ending about 1,000 yr ago, followed by a rapid population expansion (Harris et al., 2020). This suggests that the higher-than-expected allele frequency of IFNAR1 p.Glu386* among individuals of western or eastern Polynesian ancestries may result from a founder effect followed by an extended bottleneck within which substantial genetic drift may have occurred, coupled with an absence of negative selection against the allele due to the rare or low level of exposure to viruses controlled by type I IFN-dependent immunity in this region before the arrival of Europeans. The frequency of this variant may also have been affected by the population crash accompanying contact with Europeans and the subsequent exponential growth of the population (Harris et al., 2020).
Urgent research and policy priorities will need to be addressed to translate our findings into improvements in health equity for people of Polynesian ancestry, who may be more frequently affected by IFNAR1 deficiency than other populations. Rapid, cost-effective, and accessible diagnostic methods will be required, because IFNAR1 deficiency is difficult to recognize clinically due to the existence of both nonspecific and unknown presentations. In parallel, we will need to harness recent advances in antiviral approaches to provide passive (neutralizing monoclonal antibodies) and active (mRNA vaccination) immunization and treatment (novel antiviral agents). Access to screening, testing, treatment, and follow-up for IFNAR1 deficiency is likely to be inequitable across affected nations and communities, and we call upon well-resourced agencies and organizations to support the establishment of medical and public health infrastructures to enable affected Polynesian communities to benefit equitably from such measures.
Diagnosis in a larger number of kindreds and population studies will facilitate more detailed evaluations of the frequency and clinical phenotypes of IFNAR1-deficient individuals, including the penetrance of each viral illness. These data could then be used to guide decisions about the appropriateness of early screening, and as a basis for prompt diagnosis and potentially life-saving interventions, as has already been clearly demonstrated for neonatal SCID screening (Amatuni et al., 2019; Biggs et al., 2017; Kwan et al., 2014). The early diagnosis of IFNAR1 and other type I IFN deficiencies not only would be of direct benefit to the affected individuals, but could also potentially increase confidence in vaccination, thereby resulting in better protection of the community. Approaches to establishing the medical and public health infrastructure necessary to achieve such outcomes must include representatives from affected communities and take into account the inequitable distribution of resources and access. Beyond Polynesians, this study highlights the need to expand population and clinical genetics studies to understudied populations around the globe (Sirugo et al., 2019), particularly isolated populations, in which the frequencies of IFNAR1 and IFNAR2 pLOF variants should be explored. Beyond IFNAR1, this study highlights the importance of combining clinical and evolutionary genetic studies to delineate the molecular and cellular basis of human immunity.
Study and ethics approval
Informed consent was obtained in each country of follow-up, in accordance with local regulations and the requirements for institutional review board (IRB) approval of Rockefeller University and Institut National de la Santé et de la Recherche Médicale (INSERM). Experiments were conducted in the United States and France, in accordance with local regulations and with the approval of the IRB of Rockefeller University and INSERM, respectively. The Soifua Manuia study underwent initial ethical review by the IRB at Brown University and undergoes annual continuing ethical review by the IRB at Yale University. Data analysis activities at the University of Pittsburgh were reviewed by their IRB and were determined to be exempt (IRB #PRO16040077) based on the receipt of only de-identified data. All aspects of the original data collection protocols were reviewed and approved by the Health Research Committee of the Samoan Ministry of Health. All participants provided written informed consent for their participation via forms written in Gagana Sāmoa.
For the collection of French Polynesian samples (eastern Polynesia), all participants provided written informed consent. This work is part of a much broader collaboration (Mata'ea project) between the Institut Louis Malardé (Papeete) and Institut Pasteur (Paris) and received ethics approval from the French Comité de protection des personnes (CPP-OUEST III no. 19.08.60/SICNRIPH 19.07.02.38421) and the Comité d'Éthique de la Polynésie française (MATAEA ID-CRB2019-AO1793-54/Avis no. 80 CEPF_03/09/2019). Population references for western Polynesia were obtained from samples reported in Ioannidis et al. (2020) with the approval of the Oxford University Tropical Research Ethics Committee (reference no. 537-14) for population genetics and medical studies. Informed consent was obtained from all participants in close coordination with local communities and collaborators, including Nuualofa Tuuau (Samoa), Tamarua Teariki (Cook Islands), and many other liaison officers, whose involvement was essential to ensure a respectful approach to participants and community leaders.
Cells
Peripheral blood mononuclear cells were isolated by Ficoll-Paque density gradient (GE Life Science) centrifugation. Primary fibroblasts and SV40-immortalized dermal fibroblasts were maintained in DMEM (Thermo Fisher Scientific) supplemented with 10% FBS (Thermo Fisher Scientific). PHA-blasts were cultured in RPMI (Thermo Fisher Scientific) supplemented with 10% FBS (Thermo Fisher Scientific).
Plasmids
The IFNAR1 cDNA was inserted into the pGEMT cloning vector (Promega). Site-directed mutagenesis was performed to obtain the indicated mutant constructs. All IFNAR1 constructs were then subcloned into pCAGGS for overexpression studies. All constructs were resequenced to ensure that no adventitious mutations were generated during the cloning process.
Western blotting
Fibroblast cells were left untreated or were treated with IFN-α2 (Miltenyi Biotec), IFN-ω (PeproTech), IFN-β (Miltenyi Biotec), or IFN-γ (Imukin; Boehringer Ingelheim) for the times indicated, before lysis. Cells were lysed in NP-40 lysis buffer (280 mM NaCl, 50 mM Tris, pH 8, 0.2 mM EDTA, 2 mM EGTA, 10% glycerol, and 0.5% NP-40) supplemented with 1 mM dithiothreitol, PhosSTOP (Roche), and cOmplete Protease Inhibitor Cocktail (Roche). The protein lysate was subjected to SDS-PAGE, and the resulting bands were transferred to a nitrocellulose membrane, which was probed with unconjugated primary antibodies and secondary antibodies adapted for LI-COR. An anti-GAPDH antibody (Santa Cruz Biotechnology) was used as a loading control. For endogenous IFNAR1, we used an antibody recognizing the IFNAR1 protein at a dilution of 1:1,000 (64G12 custom antibody), whereas, for the protein overexpressed after transfection, we used a polyclonal anti-IFNAR1 antibody recognizing the C-terminus of IFNAR1 (ab45172; Abcam). Antibodies against p-STAT1 (562070; BD Biosciences) and GAPDH (sc-47724; Santa Cruz Biotechnology) were purchased from commercial suppliers. The membrane was incubated overnight at 4°C with the primary antibodies. SuperSignal West Pico Chemiluminescent substrate (Thermo Fisher Scientific) was used to visualize HRP activity, and the resulting signal was detected with an Amersham Imager 600 (GE Life Sciences). The complete unedited blots are shown in the supplemental material.
Flow cytometry
For measurement of the cell-surface expression of IFNAR1, control or patient EBV-B cells or SV40-fibroblasts were plated in 96-well plates at a density of 5 × 10⁵ cells per well and surface-stained with purified mouse anti-IFNAR1 AA3 mAb (a gift from L. Runkel, Biogen, Inc., Cambridge, MA). Cells stained with AA3 were washed once with PBS and incubated with a biotinylated rat anti-mouse secondary antibody (Thermo Fisher Scientific) for 30 min, before being washed once with PBS and incubated for 30 min with PE-conjugated streptavidin (Thermo Fisher Scientific). The cells were then washed twice with PBS and analyzed by flow cytometry. Data were acquired on a Gallios flow cytometer (Beckman Coulter), and the results were analyzed with FlowJo software (TreeStar).
RT-qPCR
RNA was isolated from peripheral blood mononuclear cells, fibroblasts, or HEK293T cells with and without plasmid transfection, with a kit, according to the manufacturer's protocol. We extracted mRNA from the cells with the Cells-to-CT kit (AM1729; Thermo Fisher Scientific), according to the manufacturer's instructions. RT-qPCR was performed with Applied Biosystems TaqMan assays for MX1 and the β-glucuronidase (GUS) housekeeping gene for normalization. Results are expressed according to the ΔΔCt method, as recommended by the kit manufacturer.
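For readers unfamiliar with the ΔΔCt method, the normalization reduces to two subtractions and an exponentiation. A minimal sketch with hypothetical Ct values (not measured data), using GUS as the reference gene as above:

```python
# Minimal sketch of the 2^-ΔΔCt calculation used to express the RT-qPCR
# results; the Ct values below are illustrative, not measured data.

def fold_change(ct_target_s, ct_ref_s, ct_target_c, ct_ref_c):
    """Relative expression of a target gene (e.g., MX1) normalized to GUS,
    in a stimulated (s) sample versus an unstimulated control (c)."""
    delta_ct_sample = ct_target_s - ct_ref_s
    delta_ct_control = ct_target_c - ct_ref_c
    delta_delta_ct = delta_ct_sample - delta_ct_control
    return 2 ** (-delta_delta_ct)

# Example: MX1 induction after IFN-α2 in control fibroblasts (hypothetical Cts).
print(fold_change(ct_target_s=22.0, ct_ref_s=20.0,   # stimulated
                  ct_target_c=28.0, ct_ref_c=20.0))  # unstimulated -> 64.0
```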
Genotyping
The Soifua Manuia study follows a cohort of adult Samoans who were recruited for a genome-wide association study of adiposity-related phenotypes in Samoa in 2010 (Hawley et al., 2014). Details of DNA extraction for this study were provided by Minster et al. (2016). Of the n = 3,475 participants recruited for the study, DNA was available for n = 3,119. Of these, n = 1,285 participants were selected for whole-genome sequencing via the Trans-Omics in Precision Medicine (TOPMed) Whole-Genome Sequencing Program (Taliun et al., 2021). The allele and genotype counts and frequencies presented here are those from the TOPMed Freeze 9 callset (NHLBI Trans-Omics for Precision Medicine, 2021). There may be a bias in the allele frequencies calculated for research participant samples in which some participants are related. In addition to reporting the counts and frequencies of the entire sequenced sample, we also ascertained a maximum unrelated subset of participants with PRIMUS (Staples et al., 2013). Individuals inferred to be first- or second-degree relatives were labeled as "related" for the ascertainment of this "unrelated" subset. Freeze 8 of the BRAVO variant browser does not include the participants from Soifua Manuia in the allele counts. However, in all TOPMed freezes, the genotypes of Soifua Manuia participants were called together with the genotypes of all other participants. Thus, the observation of 32 alleles in the Soifua Manuia participants and the observation of an absence of these alleles in the remaining TOPMed participants were made simultaneously. For Europeans and East Asians, information was obtained directly from the gnomAD browser. For Polynesians from Tonga, the Cook Islands, and the islands of Fiji, Sanger analysis was performed. For Near and western Remote Oceanians, we processed the fastq files for the Oceanian individuals from Choin et al. (2021) together with the available whole-genome sequences for New Guineans and Bougainville islanders from Bergstrom et al. (2020), as described previously.
Kindred A, P1
This presentation initiated the investigation of IFNAR1 immunodeficiency in the Pacific. A previously healthy, 12-mo-old girl living in western Polynesia with non-consanguineous Polynesian parents received her first MMR vaccine on day 1. She presented with vaccine site inflammation on day 5 and started oral antibiotics. Further fever and localized swelling led to admission for intravenous antibiotics for a presumed abscess. Her vaccine site improved, and there was a transient blanching rash, followed by clinical deterioration with petechial rash, bloody diarrhea, thrombocytopenia, and coagulopathy, requiring fresh frozen plasma and platelet transfusions. All blood cultures were sterile, and dengue serology was negative. On medical history, she had normal growth and had previously tolerated the live attenuated vaccines BCG and three doses of oral polio vaccine, in addition to scheduled inactivated vaccines. Prior illnesses included conjunctivitis at 3 mo of age and a furuncle treated with antibiotics at 10 mo of age. On family history, an older brother had died at 12 mo of age from overwhelming sepsis 21 d after his first MMR, as fully described in the kindred A, patient 2 (P2) case report.
On day 14, due to continued deterioration, she was transferred to New Zealand and admitted to pediatric intensive care with an oxygen requirement, tachycardia, irritability, hepatosplenomegaly confirmed on abdominal ultrasound, no significant lymphadenopathy, and widespread petechiae and purpura without active bleeding. Her rash was not morbilliform, and the MMR vaccination site was firm and indurated, without a fluid-filled abscess. Full blood count showed normocytic anemia (hemoglobin 81 g/liter), an elevated white cell count with left shift, and marked thrombocytopenia with platelets <10 × 10⁹/liter. The peripheral smear showed RBC fragments and thrombocytopenia. Initial coagulation profile, fibrinogen, and ADAMTS-13 were normal. Serum creatinine was normal, and liver transaminases were elevated. Peripheral blood flow cytometry, lymphocyte subsets, and immunoglobulins were also normal. Multiple blood and catheter urine cultures were sterile, and arboviruses were not detected. No lumbar puncture (LP) was performed due to patient instability; however, CT of the head was normal, and postmortem CSF culture showed no evidence of meningitis. Ferritin was elevated at 4,613 μg/liter, rising to a peak of 38,074 μg/liter. Triglycerides were elevated, fibrinogen was decreased, and complement C3 and C4 were low. A soluble CD25 assay was unavailable.
A diagnosis of HLH was made, and dexamethasone was started on day 14. Broad-spectrum antibiotics and blood products were continued. On day 15, she was intubated and developed progressive circulatory failure requiring inotropes, renal failure requiring continuous filtration, and hepatic failure. She was treated with etoposide with no response, and then alemtuzumab. She developed abnormal neurological signs and died despite maximal supportive care on day 18.
She was PCR positive for MMR viruses in whole blood and nasopharyngeal swab, with low cycle threshold values indicating high viral loads. Measles and mumps were both confirmed as vaccine strains (measles Leningrad-16). On day 14 after MMR, measles IgM was positive, and IgG was negative. HHV6 DNA was PCR positive in whole blood, but HHV6 serology was unavailable. PCR was negative for adenovirus and CMV in plasma and negative for EBV, HSV, and varicella in whole blood. Nasopharyngeal swab and postmortem lung swabs were negative for respiratory viruses.
Postmortem histology showed histiocytic infiltrates of multiple organs, consistent with HLH. Multinucleated giant cells (Warthin-Finkeldey cells), pathognomonic of measles infection, were found in multiple organs, including the lungs, LNs, spleen, liver, thymus, bowel, and adrenals (Fig. 1 B). Postmortem examination showed no evidence of malignancy. A commercial pan-hematology and pan-immunology next-generation sequencing gene panel identified heterozygous missense variants in XIAP, c.337G>C, p.(Gly113Arg), and CXCR4, c.786C>A, p.(Asp262Glu). IFNAR1 was not included in the reported genes.
Kindred A, P2
P2 was the only sibling of P1. P2 was a previously healthy 12-mo-old male living in Polynesia who died in his domicile country with a diagnosis of overwhelming sepsis 2 yr before P1. He received his first MMR vaccine on day 1 and presented on day 2 with a febrile seizure and respiratory illness and was admitted on broad-spectrum antibiotics. He developed fever and a distended abdomen with hepatosplenomegaly, transaminitis, and petechiae. There was anemia, thrombocytopenia, and leukocytosis, and he developed RBC fragments on peripheral smear. Ferritin, triglycerides, and soluble CD25 were unavailable. LP was not performed. Blood cultures were negative.
An exploratory laparotomy on day 16 showed peritoneal free fluid, dilated bowel, multiple enlarged abdominal LNs, and an inflamed appendix, and he underwent appendectomy. He continued to deteriorate, with high fevers and multiorgan failure, including acute kidney injury, respiratory compromise, shock, and coagulopathy. Intensive management included invasive ventilation, inotropes, hydrocortisone, blood products, and broad-spectrum antibiotics. He died 21 d after MMR administration.
On medical history, he had normal growth and tolerated live attenuated vaccines: BCG and three doses of oral polio vaccine. There were no prior significant illnesses.
Histology of the appendix was limited by sample degradation but showed giant cells, with no definite evidence of measles or HLH. Genomic DNA was isolated from a deparaffinized appendix tissue block. Using forward (5′-ATTCCCTGATTTCTTGAGG-3′) and reverse (5′-AGTCAGTGGTTTCAAATTAGG-3′) PCR primers, we performed Sanger sequencing and confirmed that the patient was homozygous for IFNAR1 c.1156G>T. Measles was PCR positive on the appendix.
Kindred B, P3
P3, from kindred B, was a 15-mo-old girl born to nonconsanguineous parents of predominantly Tongan and Niuean ancestries. Her great-great-grandfather on the maternal side was of Māori ancestry. Her history was significant for the development of acute seizures in association with suspected enteroviral meningoencephalitis at 8 wk of age (based on clinical features and the detection of enterovirus in the stool; CSF was not obtained). She recovered from this illness with good control of seizures on levetiracetam and was making steady neurodevelopmental progress before her latest hospital presentation.
P3 received the MMR/V vaccination at 15 mo of age, as per the New Zealand Immunization Schedule. On day 11 after vaccination, she presented to her local hospital with fever and rash and was found to be encephalopathic. Empirical broad-spectrum antibiotics were commenced. An extensive search for infectious organisms was initially negative. This included viral PCR for CMV, EBV, and adenovirus on serially collected plasma samples, and blood cultures and PCR panels for respiratory viral and atypical pathogens (fungi, Nocardia, Legionella, Mycoplasma, Chlamydia, Pneumocystis jirovecii, pertussis, CMV, adenovirus, human parainfluenza virus [HPIV] 1/2/3, influenza A virus [IAV], influenza B virus [IBV], SARS-CoV-2, RSV, and human metapneumovirus [HMPV]). Attempts at obtaining cerebrospinal fluid (CSF) via lumbar puncture were unsuccessful.
By day 17, she had developed features of hyperinflammation or HLH-like illness with persistent fever, cytopenia, hyperferritinemia (peak level 5,224 μg/liter), and elevated sCD25 (peak level 12,940 pg/ml) and had evidence of hemophagocytosis on bone marrow aspirate. She developed aseptic arthritis involving her knee and ankle joints bilaterally. There was no radiologic evidence of central nervous system (CNS) involvement of HLH. However, CNS neuroimaging did demonstrate an incidental finding of a long segment of cervical and thoracic cystic lesions and a chronic communicating hydrocephalus, possibly explaining difficulties with obtaining CSF samples via lumbar puncture.
For the hyperinflammation, P3 received intravenous methylprednisolone for 3 d, followed by a weaning course of steroids for a total duration of 2 wk, during which time she defervesced, her cytopenias resolved, and sCD25 decreased to 6,758 pg/ml. Her clinical course was complicated by the development of cutaneous varicella reactivation (skin lesion fluid PCR positive for VZV) on day 6 of steroids. This infection responded appropriately to 10 d of acyclovir. She remained persistently encephalopathic and eventually received an external ventricular drain (EVD), from which CSF samples could be obtained on day 58. CSF samples were culture negative and PCR negative for HSV1/2, VZV, enterovirus, parechovirus, Neisseria meningitidis, cryptococcal antigen, CMV, EBV, and Toxoplasma. A CSF sample was PCR positive for HHV6, but this finding was of uncertain clinical significance.
Around this time, P3 was found to be homozygous for the IFNAR1 p.Glu386* allele on a commercial comprehensive diagnostic primary immune deficiency and cytopenias gene panel. We hypothesized that the presence of an IFNAR1 immune deficiency would explain the patient's susceptibility to LAV, predisposing her to the persistence of a viral encephalitis, and retrospectively requested MMR viral PCRs on plasma and CSF samples collected on days 54 and 58, respectively. She was found to be PCR positive for vaccine-strain measles virus in plasma and vaccine-strain measles and mumps viruses in the CSF. Knee joint fluid obtained when she developed arthritis was found to be PCR positive for mumps virus on retrospective testing. She was transferred to a tertiary pediatric hospital pediatric intensive care unit for escalation of care. Further neurological workup revealed stable CNS neuroimaging findings and electroencephalogram (EEG) evidence of encephalopathy with no seizure activity. On day 62, she developed unexplained acute respiratory deterioration necessitating intubation and commencement of mechanical ventilation. She remained persistently encephalopathic and unresponsive, and on day 72, she died after medically initiated withdrawal of ventilatory support. A postmortem examination was declined by the family.
P3's IFNAR1 p.Glu386* variant was confirmed by Sanger sequencing. Parental segregation studies demonstrated that the variant was biparentally inherited. Functional studies using donated skin fibroblasts obtained from P3 supported the variant's loss-of-function nature in the homozygous state. Her full siblings, aged 5 and 2 yr, are heterozygous carriers of IFNAR1 p.Glu386*. All known carriers in kindred B are healthy and well and tolerated routine childhood immunizations, including MMR vaccinations.
Based on the clinical, laboratory, and population genetics information reported in this manuscript, the IFNAR1 c.1156G>T, p.Glu386* variant has been classified as likely pathogenic (ACMG class 4: PVS1_Strong, PS3_Supp, PM3).
Kindred C, P4 and P5
P4, from kindred C, was a 13-mo-old boy born at 24 wk gestation to non-consanguineous parents of Niuean ancestry. His premorbid conditions included prematurity-associated chronic lung disease, which was stable on low-flow home oxygen (0.25 liter/min), and CNS changes including a left-sided porencephalic cyst and periventricular white matter changes without seizures. He was followed by neurodevelopmental specialists and was making satisfactory neurodevelopmental progress. In the postnatal period, he spontaneously recovered from a CMV infection. He tolerated routine childhood immunizations, including rotavirus vaccines, in his first year of life.
On day 10 after receiving his first dose of MMR immunization, he developed a vaccine site reaction and was commenced on a course of antibiotics for presumed cellulitis. He was coincidentally found to have a lower respiratory tract infection, with respiratory tract secretions PCR positive for HPIV-1. On day 14, he was admitted to the pediatric intensive care unit for management of acute neurological and respiratory deterioration, including status epilepticus. He was encephalopathic, with CSF pleocytosis and EEG changes consistent with encephalopathy. Neuroimaging revealed no acute changes from previous studies. He developed features of hyperinflammation, including persistent fevers, bicytopenia (nadir hemoglobin level 82 g/liter, nadir platelet count 32 × 10⁶/liter), hepatomegaly, biochemical hepatitis (peak ALT 202 U/liter), hyperferritinemia (1,861 μg/liter), and elevated sCD25 (15,536 pg/ml). An extensive search for infectious organisms in the peripheral blood (blood cultures; PCR for EBV, adenovirus, HHV6, CMV), respiratory tract samples (PCR for adenovirus, CoV, MERS-CoV, SARS-CoV-2, HMPV, rhinovirus [RV]/enterovirus [EV], IAV, IBV, HPIV1-4, RSV, Bordetella, Chlamydophila, Mycoplasma), and CSF (stain and culture; PCR for HHV6, Escherichia coli, Hib, Listeria, Neisseria, Streptococcus, CMV, EV, HSV1, HSV2, human parechovirus, VZV, Cryptococcus) was negative. Given the temporal association between exposure to LAVs and the onset of symptoms, we hypothesized that his aggressive clinical course was mediated by the presence of LAVs. A CSF sample collected on day 19 returned PCR positive for vaccine-strain mumps virus, and a peripheral blood sample collected on day 22 was positive for vaccine-strain measles virus (persistently detectable in the peripheral circulation to at least day 45) and mumps virus (persistently detectable in the peripheral circulation to at least day 56). We commenced intravenous immunoglobulin (1 g/kg), vitamin A, and intravenous ribavirin. Following this, there was gradual improvement of his fever, cytopenias, and hepatitis. However, there was minimal change in his respiratory and neurological status. In week 6, P4 developed biochemical hepatitis attributed to CMV reactivation. There was no acute respiratory or neurological deterioration. He was treated with (val)ganciclovir for 3 wk, during which time the hepatitis resolved. P4 had a protracted recovery from the acute encephalitis and the acute-on-chronic respiratory distress syndrome. From week 7 onward, he was more alert and responsive and had a normal EEG on day 56. Noninvasive respiratory support with Hi-Flo was weaned successfully from week 10 onward toward baseline requirements. In week 16, he was stable enough to have short periods of leave from the hospital.
The following week, P4 developed acute bronchiolitis. A respiratory tract sample was PCR positive for RSV (and negative for other pathogens, including PIV1/2/3, HMPV, adenovirus, RV, IAV, IBV, SARS-CoV-2, Legionella species, Mycoplasma pneumoniae, Chlamydophila pneumoniae, P. jirovecii, and Bordetella pertussis). Peripheral blood was PCR negative for MMR viruses. Despite aggressive intensive care support and oscillatory ventilation support, P4 died of acute respiratory distress syndrome 4 d after readmission. A postmortem examination was declined by the family.
Patient 5 (P5) is P4's full biological brother and is alive at 7 yr of age. P5 has tetralogy of Fallot, which was surgically corrected at the age of 6 mo. In early infancy, he had recurrent respiratory tract infections and was briefly hospitalized four times for bronchiolitis. At 15 mo of age, he was hospitalized for viral meningoencephalitis and hyperinflammation, diagnosed at the time as "atypical Kawasaki disease," 17 d after he received his first dose of MMR immunization. He had fevers, coryza, cough, irritability, maculopapular rash, nonpurulent conjunctivitis, and mild peripheral edema. He had preserved peripheral blood cell lines but had a biochemical hepatitis (peak ALT 134 U/liter, peak AST 127 U/liter). A CSF sample showed 54 white blood cells (90% lymphocyte predominance) and seven RBCs. CSF cultures and a standard viral PCR panel (HSV, VZV, enterovirus, parechovirus, EBV, Streptococcus pneumoniae, Neisseria meningitidis) were negative. He was hospitalized for 6 d and received intravenous antimicrobial therapy (cefotaxime and acyclovir), an immunomodulatory dose of intravenous immunoglobulin (1 g/kg), and aspirin, which was continued for 6 wk. At 2 yr of age, he was hospitalized for 10 d with meningitis accompanied by PCR detection of enterovirus in the CSF. CSF was culture negative and PCR negative for HSV, VZV, parechovirus, EBV, S. pneumoniae, and N. meningitidis. He was later confirmed to have a unilateral high-frequency sensorineural hearing loss. He subsequently tolerated his second dose of MMR, given at 4 yr of age, and has not had any further significant infections. He has not received varicella vaccination. He attends a mainstream school and receives speech-language therapy and resource teacher support for a mild learning difficulty.
We evaluated for and confirmed homozygosity for the IFNAR1 p.Glu386* allele in P4 and P5 by Sanger sequencing. Parental segregation studies demonstrated that the IFNAR1 p.Glu386* variant was biparentally inherited in both P4 and P5. A commercial comprehensive primary immune deficiency diagnostic gene panel excluded the presence of other pathogenic variants in P4.
Kindred D, P6
P6, from kindred D, is a 4-yr-old boy born in Australia to Niuean and Tongan parents, who has four older siblings, all of whom are well, with two having a history of language disorders. He was born at term with no neonatal concerns and a normal newborn hearing screen (otoacoustic emissions). He suffered recurrent viral pneumonitis over the first year of life but was otherwise felt to be developing normally. At 14 mo, some 5 d after receiving the MMR vaccine, he developed fever, lymphadenopathy, and a generalized blanching rash and progressed to have arching movements and lethargy with increased irritability but no localizing signs. At 2.5 wk into this illness, he was found to have a CSF pleocytosis (polymorphs, 23 × 10⁶/liter; mononuclear cells, 43 × 10⁶/liter) consistent with viral encephalitis. At 4 wk after MMR, he was positive for measles virus on nasopharyngeal aspirate (NPA), while brain MRI performed during the acute phase was said to be normal. He did not receive aminoglycoside antibiotics. The encephalitis self-remitted after 4 wk, but following the hospital admission, his family felt that his behavior had changed, with concerns about developmental delay, including walking later than his siblings, at 18 mo, and no clear words by 21 mo of age, when he was due to see Speech Pathology. However, at 21 mo, he developed viral pneumonitis requiring intensive care and venovenous ECMO, with RSV type A, parainfluenza type 3, rhinovirus, and bocavirus on NPA. Following this admission, his family felt that he returned to his pre-intensive care level of function, and shortly thereafter he was diagnosed with profound bilateral sensorineural hearing loss, with no response in either ear on auditory brainstem-evoked responses. An MRI scan demonstrated a dysplastic right vestibule and an absent right lateral semicircular canal. He was also diagnosed with global developmental delay and autistic spectrum disorder, a finding that has been reinforced by a more recent neurodevelopmental assessment performed at the age of 4 yr and 8 mo, following the placement of cochlear implants. He demonstrated skills below the first centile in all modalities assessed, with a functional level <18 mo based on clinical review of items completed on a standardized developmental assessment tool. The patient's engagement was limited, and this will be reviewed via a normed functional parent questionnaire, which is pending, but the parental history supports these findings. Behavioral review demonstrated a significant deficit in social communication skills and restricted and repetitive behaviors consistent with DSM-5 criteria for a diagnosis of autism spectrum disorder. He currently needs substantial support with his social communication (level 2) and very substantial support for his behaviors (level 3). This will be reviewed following intervention, which to date has been limited. More recent brain MRI has demonstrated multiple foci of "blooming," in keeping with chronic microbleeds, predominantly involving the bilateral cerebral subcortical white matter, the splenium of the corpus callosum, and, to a lesser extent, the cerebellar hemispheres. It was felt to be impossible to determine whether these were related to sequelae of the encephalitis or the pneumonitis requiring ECMO. He has not had further significant infections in the last 21 mo, and his basic adaptive immune screen is normal, including immunoglobulin levels. He is above the 95th centile for height and weight.
An SNP array at the age of 4 yr showed that, despite no history of consanguinity, 8% of his DNA represented long continuous segments of homozygosity consistent with identity by descent, including across the chr21q21.3-q22.2 region, which includes the IFNAR1 gene locus. He was then diagnosed with homozygous IFNAR1 deficiency by targeted Sanger sequencing. He is awaiting WES to exclude an additional/alternative etiology for his developmental disorders and hearing loss.
Griffiths III - Griffiths Scales of Child Development, 3rd edition, scores
Griffiths Scale scores are provided in Table S5.
Kindred E, P7
P7, from kindred E, is a 13-yr-old girl born at term in New Zealand to non-consanguineous parents of Samoan ancestry, following an uneventful pregnancy; she has six older siblings, who are well. She had hospital presentations for a urinary tract infection and two episodes of bronchiolitis in the first 6 mo of life. She presented at 10 mo of age with Hib bacteremia and meningitis (she had not been vaccinated prior to this). This was complicated by seizures and bilateral infected subdural hygromas requiring multiple washouts. At 12 mo of age, 2 wk after discharge from the Hib illness-associated admission, she re-presented with rapidly progressive respiratory and multiorgan failure requiring 9 d of ECMO support. RSV was identified on PCR and immunofluorescence. Bronchoalveolar lavage also isolated Candida (one colony), and CMV PCR was strongly positive (7.0 log copies/ml; negative in blood). She was initially treated with high-dose trimethoprim-sulfamethoxazole, caspofungin, palivizumab, and intravenous immunoglobulin. She relocated to Australia at the age of 3 yr and was lost to medical follow-up there. She received vaccination with MMR + MMRV at 4 yr of age without complication. She remained relatively well until she presented at the age of 7 yr with acute respiratory distress syndrome and multiorgan failure requiring inotropic support. She was intubated for a total of 33 d, requiring escalation to high-frequency oscillation ventilation and nitric oxide. Imaging showed extensive air-space opacification on a background of extensive bronchiectasis. No causative organism was identified.
At 8 yr of age, she was admitted with an exacerbation of bronchiectasis with adenovirus and human metapneumovirus identified on NPA. She has had a subsequent admission for pulmonary optimization and remains on prophylactic azithromycin but has not had further acute infective episodes.
In addition to the severe life-threatening infection history, she has a deletion of ∼1.7 Mb on the long arm of one chromosome 17 at cytogenetic band q12 and one long continuous stretch of homozygosity (7.2 Mb) on the X chromosome. She is known to have a right renal dysplasia with a pelvi-ureteric junction obstruction. Common features of people with deletions of 17q12 include renal abnormalities and neurocognitive and neurobehavioral problems (Loirat et al., 2010; Moreno-De-Luca et al., 2010; Nagamani et al., 2010). She has global developmental delay; it is unclear whether this is due to her chromosomal deletion or to her history of severe infections and seizures in early life. She is allergic to crustaceans and has some eczema. Previous testing on a 112-gene WES panel reported a heterozygous variant of uncertain significance in CD3D, c.418C>A (p.Gln140Lys). The finding of homozygosity for IFNAR1 p.Glu386* was made on testing with a 421-gene WES panel. No other variants were reported.
Online supplemental material
Fig. S1 shows the genetic study of the patients. Fig. S2 shows the study of XIAP in the patients from kindred A. Fig. S3 shows the study of mutant IFNAR1. Fig. S4 shows the population genetics of IFNAR1 based on public databases. Table S1 shows the immunological evaluation of the patients. Table S2 shows the infectious diseases evaluation of the patients after exposure to LAVs. Table S3 shows the summary of treatments given during admission for LAV infection. Table S4 shows the genetic evaluation of the patients. Table S5 shows Griffiths Scale of Child Development, 3rd edition, scores.
Acknowledgments
We thank the patients and their families for placing their trust in us. We thank M. Dilcher and members of the National Measles and Rubella Reference Laboratory, Canterbury Health Laboratories, New Zealand, for genotyping measles and mumps viruses. We thank the New Zealand Ministry of Health, the Samoa Ministry of Health, the Auckland District Health Board Clinical Ethics Advisory Committee, and bioethicist Monique Ryan for advice. We thank Nicolas Prud'homme and Jérémie Torterat, from the Institut de la Statistique de la Polynésie française (Papeete, Tahiti), for performing the random selection of participants from the five archipelagos of French Polynesia; the nurses and technical staff of the Institut Louis Malardé (Papeete, Tahiti) for their contribution to participant recruitment and managing the biological samples used in this study; and the mayors, municipal staff, and inhabitants of the islands of French Polynesia for their support. We warmly thank the members of both branches of the Laboratory of Human Genetics of Infectious Diseases for discussions and Y. Nemirovskaya, M. Woollet, D. Liu, S. Boucherit, C. Rivalain, M. Chrabieh, and L. Lorenzo for administrative assistance. The authors acknowledge and sincerely thank the research participants and the Samoan government-particularly the Ministry of Health; the Ministry of Women, Community, and Social Development; the Office of the Prime Minister; and the Samoa Bureau of Statistics-for their continued support of this work. We also deeply thank local community liaison officers for their assistance at the time of sample collection, particularly Dr. Nuualofa Tuuau (Samoa) and Mr. Tamarua Teariki (Cook Islands).
The laboratory of V.-M. Cao-Lormeau is supported by MATAEA grant no. 03557/MED/REC_29/05/2019 (Délégation à la recherche de la Polynésie française). The Laboratory of Human Evolutionary Genetics is supported by Institut Pasteur, Collège de France, the Centre national de la recherche scientifique, Fondation Allianz-Institut de France, the French Government's Investissement d'Avenir program, and Laboratoires […].
Supplemental material
Figure S2 (partial legend). Sanger sequencing results for XIAP for P1 and P2 of kindred A, the parents, and healthy control leukocyte gDNA. (C) Lyonization in P1's and the mother's gDNA. (D) Normal expression of XIAP by flow cytometry in CD3+ cells from a control, the mother of P1, and a XIAP-deficient patient. (E) Normal production of TNF-α in response to NOD2 stimulation by muramyl dipeptide (MDP) in monocytes from a control and the mother of P1, whereas it is defective in a XIAP-deficient patient. Bar graphs show the percentage of monocytes expressing TNF-α from intracellular staining data. (F) Normal activation-induced cell death in response to anti-CD3, assessed by Annexin V staining, in T cells from a control and the mother of P1, whereas it is increased in a XIAP-deficient patient.
"year": 2022,
"sha1": "75d261699c6f85fe65a55aeba15da9ed2ee4dd7c",
"oa_license": "CCBY",
"oa_url": "https://rupress.org/jem/article-pdf/219/6/e20220028/1432151/jem_20220028.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "2dc65f77fc4857eb8d7e49d7d736b8150563e966",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
6380380 | pes2o/s2orc | v3-fos-license | Assessment of levels of otoacoustic emission response in neonates with perinatal asphyxia
Objective: To evaluate the effects of perinatal asphyxia on the level of the response to transient otoacoustic emissions in infants. Methods: Otoacoustic emission testing was performed in 154 neonates: 54 infants who suffered asphyxia at birth, identified by Apgar score and medical diagnosis, were compared with 100 infants without risk. Scores less than 4 in the first minute and/or less than 6 in the fifth minute were considered "low Apgar". Statistical analysis of the data was performed using the Kruskal-Wallis, Wilcoxon, and Mann-Whitney nonparametric tests. Results: Lower levels of response were observed in transient otoacoustic emissions in the group that suffered perinatal asphyxia, with significant values for the frequencies of 2,000, 3,000, and 4,000 Hz in the right ear, and 2,000 and 4,000 Hz in the left ear. Conclusions: The analysis of the intrinsic characteristics of the otoacoustic emissions evidenced low performance of outer hair cells in neonates who had perinatal asphyxia, which may affect the development of listening skills in this population.
Introduction
The Apgar score allows for the evaluation of newborn clinical status and identification of those in need of assistance, assessing the risks of perinatal asphyxia. 1 It consists of five criteria: heart rate, breathing effort, muscle tone, reflex irritability, and skin color. Each item is given values ranging from 0 to 2; the higher the score, the better the conditions at birth. 2,3 This evaluation is performed by the neonatologist in the first, fifth, and tenth minute of life. However, perinatal asphyxia develops when there is significant tissue hypoperfusion and decreased oxygen supply, resulting from several etiologies in the peripartum period, which can cause neurological lesions and damage cochlear hair cells. 2 Apgar scores lower than 4 in the first minute and/or less than 6 in the fifth minute are considered risk factors for hearing loss and deserve attention. 4 The first hearing assessment should be performed in the hospital nursery, using the otoacoustic emissions test for assessing the integrity of outer hair cells. 5,6 In most studies on hearing screening with otoacoustic emissions, the criterion used to characterize a normal exam is based on the presence of response. 7 However, for Aidan et al, 8 one of the criteria to assess the actual status of the inner ear is the analysis of the intrinsic characteristics of this examination, such as the response magnitude of these emissions.
According to Basseto et al, 9 full-term newborns have higher response amplitudes when compared to preterm newborns. Females and right ears often show larger amplitudes. 9,10 The use of ototoxic drugs can lower the amplitude of the otoacoustic emission response. 11 From the perspective that analysis of the signal/noise ratio can provide additional information on the functioning of outer hair cells, this study aimed to evaluate the levels of response to otoacoustic emissions evoked by transient stimuli in infants who had perinatal asphyxia.

Methods

The study consisted of a non-concurrent cohort with a fixed population. Inclusion criteria were: a) having been born in the HC; b) presence of a response in the otoacoustic emissions test; c) informed consent signed by the parents/guardians of the neonate. The exclusion criteria were: a) middle ear disorders diagnosed by the otorhinolaryngologist; b) presence of genetic syndromes; c) history of congenital infections; and d) use of ototoxic drugs.

For exposed individuals, an Apgar score of less than 4 in the first minute and/or less than 6 in the fifth minute was used, as these scores indicate a risk for hearing loss; this score was deemed "low Apgar". The medical diagnosis of perinatal asphyxia for this group was also taken into account. The diagnosis of perinatal asphyxia was made by the physician according to the clinical manifestations of the newborn, and was classified as neurological, cardiovascular, respiratory, metabolic, renal, gastrointestinal, or hematological. For the non-exposed group, only neonates with Apgar scores >7 in the first minute were selected for comparison.
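For illustration, the scoring and risk rule just described can be expressed as a short routine. This is a minimal sketch of our own; the function names and input format are not from the study:

```python
# Minimal sketch of Apgar scoring and the "low Apgar" risk rule described
# above. Function names and the input format are illustrative assumptions.

APGAR_CRITERIA = ("heart_rate", "breathing_effort", "muscle_tone",
                  "reflex_irritability", "skin_color")

def apgar_total(scores: dict) -> int:
    """Sum the five criteria; each item is scored 0, 1, or 2."""
    if set(scores) != set(APGAR_CRITERIA):
        raise ValueError("expected exactly the five Apgar criteria")
    if any(v not in (0, 1, 2) for v in scores.values()):
        raise ValueError("each criterion is scored 0, 1, or 2")
    return sum(scores.values())

def is_low_apgar(first_minute: int, fifth_minute: int) -> bool:
    """Exposure rule used in this study: <4 at 1 min and/or <6 at 5 min."""
    return first_minute < 4 or fifth_minute < 6

# A neonate scoring 3 in the first minute is classified as "low Apgar"
# regardless of the fifth-minute score.
print(is_low_apgar(3, 7))  # True
```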
There was no criterion for sample pairing, but potential confounders (gestational age, birth weight, and gender) were tested (Tables 2, 3, and 4, respectively). As there was no evidence of significant associations with the outcome, the association between asphyxia and responses to otoacoustic emissions was analyzed without the need for correction.
Regarding the sample size calculation, since this was a non-concurrent cohort with a fixed population, no sample size estimation was required in the planning phase; instead, the test power was estimated, ranging from 60% to 85% depending on the frequency/ear. The power estimates assumed a simple random sample, normality of outcomes, a type I error of 0.05, and equal standard deviations between individuals with and without low Apgar.
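To illustrate how such a post hoc power estimate can be obtained, the sketch below uses a two-sample t-test power calculation under the assumptions stated above (normal outcomes, type I error of 0.05, equal standard deviations) with the study's group sizes (54 exposed, 100 non-exposed). The standardized effect sizes are hypothetical, chosen only to show how power in the reported 60-85% range can arise:

```python
# Hedged sketch of a post hoc power estimate for two independent groups
# (54 exposed vs. 100 non-exposed), assuming normality, alpha = 0.05 and
# equal standard deviations. The effect sizes are hypothetical.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for d in (0.40, 0.45, 0.50):  # hypothetical standardized mean differences
    power = analysis.power(effect_size=d, nobs1=54, alpha=0.05, ratio=100 / 54)
    print(f"d = {d:.2f}: power = {power:.2f}")
# Depending on the assumed effect size, power falls roughly within the
# 60-85% range reported across frequencies/ears.
```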
The assessment of transient otoacoustic emissions was performed by a speech therapist specialized in audiology. If the infant awoke during the examination, the mother or guardian was advised to lull the infant back to sleep. The equipment used in all assessments was the OtoRead (Interacoustics do Brasil, RJ, Brazil), which records responses through a probe, coupled with a microphone, introduced into the ear canal.
The PASS/FAIL parameter described in the equipment protocol was used as the analysis criterion: click stimuli were presented at an intensity of 83 dB peSPL, and six frequency bands (from 1,500 Hz to 4,000 Hz) were evaluated. A result was considered PASS when emissions were present with a signal/noise ratio of at least 6 dB in at least three consecutive frequency bands, including the 4,000 Hz band. The examination lasted at most 64 seconds.
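The PASS rule just described can be encoded as a short function. This is our own illustrative encoding, not code from the equipment protocol, and the band centres are assumed:

```python
# Illustrative encoding of the PASS/FAIL rule described above; not taken
# from the equipment protocol. `snr_db` maps each band (Hz) to its measured
# signal/noise ratio in dB. The band centres below are an assumption.

BANDS = (1500, 2000, 2500, 3000, 3500, 4000)

def passes_screening(snr_db: dict, threshold: float = 6.0) -> bool:
    present = [snr_db.get(band, float("-inf")) >= threshold for band in BANDS]
    idx_4k = BANDS.index(4000)
    # Any run of >= 3 consecutive present bands containing 4,000 Hz must
    # contain a window of exactly 3 such bands, so checking those windows
    # is sufficient.
    for start in range(max(0, idx_4k - 2), min(idx_4k, len(BANDS) - 3) + 1):
        if all(present[start:start + 3]):
            return True
    return False

print(passes_screening({3000: 7.1, 3500: 6.4, 4000: 8.0}))  # True
print(passes_screening({1500: 9.0, 2000: 6.5, 2500: 6.2}))  # False: no 4,000 Hz response
```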
The signal/noise ratios considered for the analysis were those at the frequencies of 2,000, 3,000, and 4,000 Hz in both ears. 12 The values obtained at each frequency were compared between groups. Statistical analysis of the data set was performed using the nonparametric Kruskal-Wallis test, Spearman's correlation, and the Mann-Whitney test. Nonparametric tests were used because the probabilistic distribution of the outcome could not be identified. The descriptive level is reported for all tests, and a significance level of 0.05 (5%) was used to reject the null hypothesis.
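A minimal sketch of the between-group comparison at a single frequency follows; the SNR values are invented placeholders, not study data:

```python
# Between-group comparison of signal/noise ratios at one frequency using the
# Mann-Whitney test; the values below are invented placeholders.
from scipy.stats import mannwhitneyu

snr_asphyxia = [4.8, 6.1, 5.2, 7.0, 3.9]   # exposed group (placeholder data)
snr_control = [8.2, 7.5, 9.1, 6.8, 10.3]   # non-exposed group (placeholder data)

stat, p_value = mannwhitneyu(snr_asphyxia, snr_control, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.3f}")
if p_value < 0.05:  # significance level used in the study
    print("Reject the null hypothesis of equal response levels.")
```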
Results
The study included 154 infants; sample characterization regarding gender, mean gestational age, and birth weight is shown in Table 1.
Before analyzing the effect of perinatal asphyxia on the response level of otoacoustic emissions, the effects of gestational age and birth weight as potential confounders were investigated; no statistical significance was observed (Tables 2 and 3).
However, when investigating gender, higher response amplitudes were observed for females at 2,000 Hz, 3,000 Hz, and 4,000 Hz in the right ear, and at 3,000 Hz in the left ear (Table 4).
When comparing infants who suffered perinatal asphyxia with healthy infants, lower response amplitudes were observed in individuals exposed to the risk indicator for hearing loss at all frequencies, except at 3,000 Hz in the left ear (Table 5). In this case, the analysis was adjusted for gender, as it was a potential confounding factor.
Discussion
One of the current methods for the early diagnosis of hearing loss is hearing screening by otoacoustic emissions, which aims to identify infants with possible hearing impairment. Its analysis is based on the categorization of responses as present or absent, 13 and only infants with absent responses are referred for diagnostic hearing evaluation by other methods, except in cases of suspected auditory neuropathy.
In the study of the amplitude of otoacoustic emissions in relation to gender, significantly higher mean amplitudes were observed in newborn females, with a predominance of the right ear, as reported by other studies. 8,14 According to Cassidy & Ditty, 14 the higher amplitudes observed in females in transient otoacoustic emission testing can be attributed to increased sensitivity of the outer hair cells in females. The analysis of the signal/noise ratio was also studied by other authors, such as Jiang et al, 15 who observed significantly lower amplitudes at frequencies of 1 kHz and 10 kHz in distortion product otoacoustic emission testing in neonates with low Apgar scores, suggesting cochlear impairment even in the presence of a response. The magnitude of the response has also been shown to be influenced by other risk factors, such as hyperbilirubinemia, prematurity, and exposure to ototoxic drugs. [16][17][18] These findings demonstrate that the criteria for normal cochlear function, especially the pass/fail criterion of emissions, need further investigation: studies in adults have shown loss of cochlear function in individuals exposed to noise and ototoxic medication 19-21 long before the decrease in psychoacoustic thresholds, a factor that cannot be investigated at the neonatal stage.
The literature shows that perinatal asphyxia is a major cause of failure in the neonatal hearing screening examination. 22,23 However, even with present responses, this study observed lower amplitudes of otoacoustic emissions than those found in newborns with no risk indicators for hearing loss at birth. This indicates the possibility of damage to cochlear cells caused by tissue hypoxia, information that is not taken into account in the criteria for normal otoacoustic emissions. Therefore, these infants should undergo clinical follow-up, as the proper development of auditory skills depends on the integrity of the peripheral auditory system, and parents should be informed accordingly. It is believed that other tests, such as brainstem auditory evoked potentials (BAEPs), already used in the clinical routine for neonates with low Apgar scores, and electrocochleography could assist in interpreting these findings.
It can be concluded that the analysis of the intrinsic characteristics of the transient evoked otoacoustic emission test showed lower amplitude values, suggesting lower performance of outer hair cells in neonates who suffered perinatal asphyxia, at the frequency bands of 2,000, 3,000, and 4,000 Hz for the right ear, and 2,000 and 4,000 Hz for the left ear. Newborns who suffered asphyxia require clinical monitoring and electrophysiological and electroacoustic assessment to identify possible damage to the cochlea and auditory nerve cells, as well as possible effects on the development of auditory processing. | 2017-06-18T15:15:12.587Z | 2014-09-01T00:00:00.000 | {
"year": 2014,
"sha1": "c6e178dde6e77f77bffca92d7b0cde8dec8c459b",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1590/0103-0582201432307",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a08873b2ac871577abe7f54435621970a88dc100",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
70674012 | pes2o/s2orc | v3-fos-license | Spondylodiscitis caused by a fish bone that migrated into the pharynx: a case report
Ingestion of foreign bodies is a common problem seen in emergency rooms and frequently involves chicken and fish bones. There are a few reported cases of foreign bodies migrating through the retropharynx and causing infection in the area but, despite the proximity, none causing spondylodiscitis. This may be attributable to the integrity of the longus colli fascia covering and protecting the cervical spine. We describe the first case of spondylodiscitis due to a foreign body (a saw-toothed fish bone) that penetrated the longus colli fascia and became embedded in the C3 vertebral body.
INTRODUCTION
Ingestion of foreign bodies is a common problem seen in emergency rooms and frequently involves chicken and fish bones. 1,8 When a foreign body becomes impacted, endoscopic or direct oral removal is usually possible. However, migration of the foreign body through the pharynx is a rare complication. There are few reports in the literature of the migration of foreign bodies into unusual locations and of the resulting complications (Table 1). In this paper, we review the literature and describe the first case of a foreign body causing cervical spondylodiscitis.
CASE REPORT
A 57-year-old man sought medical assistance after dinner, complaining of odynophagia. The first oral evaluation showed a retropharyngeal puncture injury without any sign of a foreign body (which lay beneath the mucosa). Plain radiographs of the neck showed a radio-opaque foreign body adjacent to the C2-C3 space. A computed tomography (CT) scan (Figure 1a-b) was then performed and showed a 3-cm pointed bone embedded in the vertebral body of C3.
After a couple of days, a direct laryngoscopy was performed, and the injury in the retropharyngeal mucosa could be seen. The foreign body was removed by extending an incision through the mucosal puncture and exploring the wound. Two direct laryngoscopies, 10 days apart, were needed for the removal of the saw-toothed fish bone (Figure 1c). The surgical procedure resulted in clinical improvement and resolution of the odynophagia. However, after 45 days, the patient presented with fever, severe cervical pain radiating to his upper limbs, and a stiff neck. His laboratory evaluation showed a high erythrocyte sedimentation rate (ESR), high levels of C-reactive protein (CRP), and neutrophilic leukocytosis. Magnetic resonance imaging (MRI) of the neck showed enhancement of the paravertebral tissue and the C2-C3 disc space, and an epidural abscess (Figure 1d).
A high cervical retropharyngeal approach was performed; adhesions in the soft tissues and a small amount of purulent material were found in the paravertebral space, in continuity with the longus colli fascia and the intervertebral space. We then performed a standard C2-C3 discectomy without bone grafting. A sample taken for bacterial culture was negative. A course of ciprofloxacin and clindamycin was prescribed for 28 days. After three months of follow-up, the patient is asymptomatic, has shown remarkable improvement on both clinical and laboratory tests, and the x-ray obtained showed C2-C3 fusion.
DISCUSSION
The shape of the foreign body is the most important factor in the pathology of migration. 1,8 The literature describes saw-toothed fish bones as being capable of penetrating deeper into the retropharyngeal space. The saw-toothed shape allows easy penetration but difficult backward removal, causing more injury to the mucosa. In our case, removal was only possible after the second oral surgical procedure.
The most common complication of foreign body migration is an abscess of the soft tissues. 1,2,5,8,12 In this case, approximately 45 days after the fish bone was removed, the patient began to exhibit signs and symptoms of cervical spondylodiscitis. Our patient met the criteria for a diagnosis of spondylodiscitis 9-13 based on a combination of clinical (cervical pain and fever), laboratory (increased ESR and CRP values and leukocytosis), and MRI findings (contrast enhancement of the paravertebral soft tissues and disc space).
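For readers who prefer an explicit statement of the rule, the diagnostic combination applied here can be sketched as a conjunction of the three groups of findings. This is only an illustrative encoding of the criteria listed above, not a validated diagnostic tool:

```python
# Illustrative encoding of the spondylodiscitis criteria applied above
# (clinical + laboratory + MRI); not a validated diagnostic tool.

def meets_spondylodiscitis_criteria(cervical_pain: bool, fever: bool,
                                    esr_elevated: bool, crp_elevated: bool,
                                    leukocytosis: bool,
                                    mri_enhancement: bool) -> bool:
    clinical = cervical_pain and fever
    laboratory = esr_elevated and crp_elevated and leukocytosis
    return clinical and laboratory and mri_enhancement

# The patient in this report met all three groups of findings:
print(meets_spondylodiscitis_criteria(True, True, True, True, True, True))  # True
```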
There have been a few reported cases of paravertebral and retropharyngeal abscesses, 2,5,7,10,12 but spondylodiscitis has never been reported in proximity to this area. We propose that the longus colli fascia covering the cervical spine serves as an important barrier against infection in the adjacent area. In our particular case, the foreign body was embedded in the C3 vertebral body and perforated this fascia, allowing the infection to reach the disc space.
One important difference in the treatment of this patient was the selection of antibiotics. Based on the upper gastrointestinal flora, antibiotics against gram-negative and anaerobic gram-negative bacteria were selected instead of those against gram-positive bacteria, especially Staphylococcus species. [9][10][11][12][13] The remarkable clinical and laboratory improvement confirmed the efficacy of the antibiotic choice.
The migration of ingested foreign bodies through the pharynx is rare and unpredictable. An imaging examination is mandatory when the physician suspects foreign body migration. A CT examination is one of the most important tools in the first evaluation of these patients because the majority of migratory foreign bodies are radio-opaque. MRI is important for follow-up evaluation and for detecting complications. When a foreign body becomes impacted in the cervical spine and perforates the longus colli fascia, it can cause spondylodiscitis.
All authors declare no potential conflict of interest concerning this article. | 2019-03-07T14:04:45.937Z | 2014-03-01T00:00:00.000 | {
"year": 2014,
"sha1": "12cc74bfcd665e93065e360a52e84a84228a8ad6",
"oa_license": "CCBY",
"oa_url": "http://www.scielo.br/pdf/coluna/v13n1/1808-1851-coluna-13-01-00067.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "1fc4745617280b121586572551f164765dc32521",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
239770429 | pes2o/s2orc | v3-fos-license | Converting to Intubation During Non-intubated Thoracic Surgery: Incidence, Indication, Technique, and Prevention
Traditionally, intubated general anesthesia with one-lung ventilation is standard in thoracoscopic surgery. However, in recent decades, non-intubated thoracoscopic surgery (NITS) has become an alternative method to minimize the adverse effects of intubated general anesthesia. Non-intubated procedures result in fewer adverse events than tracheal intubation and general anesthesia, such as intubation-related airway injury, ventilation-induced lung injury, prolonged hospital stay, and postoperative nausea and vomiting. Despite these benefits, surgeons must consider the possibility of converting to intubation during NITS as the conversion rate is between 2 and 11%, varying between regions and learning time. The conversion rate is also affected by race, body size, the learning curve, and the surgical team's preferred methods. There are surgical (e.g., significant respiratory movements, uncontrolled bleeding, hindered surgical fields, large tumor sizes, adhesions) and anesthetic (e.g., hypoxemia, hypercapnia, airway spasms) reasons for converting to intubation. When a conversion is deemed necessary by the surgical team, the members should be well-prepared and act rapidly. Anesthesiologists should also feel comfortable intubating patients in the lateral decubitus position with or without bronchoscopic guidance. Patient selection is the key factor for avoiding conversion into an intubated surgery. Patients with an American Society of Anesthesiologists grade 2 or less, a body mass index <25, and less surgical complexity may be good candidates for NITS. Careful monitoring, adequate anesthesia depth, an experienced surgical team, and sufficient preparation can also prevent conversion. Conversion from a non-intubated into intubated thoracic surgery is unwanted but not inevitable. Therefore, NITS can be successful when performed on select patients by a well-prepared and experienced surgical team and is worthy of recommendation owing to its non-invasiveness.
INTRODUCTION
Over the past few decades, there have been considerable advances in minimally invasive thoracoscopic surgery, including surgical techniques and anesthesia methods. Traditionally, deep general anesthesia and multi-portal video-assisted thoracic surgery (VATS) were used to minimize respiratory movement and reduce technical difficulty. As surgical skills improved, surgeons could tolerate diaphragmatic or mediastinal movement to some extent. As such, deep anesthesia is no longer necessary for VATS or uniportal VATS, resulting in non-intubated thoracoscopic surgery (NITS) with spontaneous one-lung ventilation.
NITS is advantageous because it avoids perioperative morbidity derived from a mechanical ventilator and the unfavorable effects of deep anesthesia. For example, nonintubated anesthesia prevents potential damage caused by lung overdistension, the shear stress of repetitive opening and closing of alveoli, and the release of several inflammatory mediators (1)(2)(3)(4)(5). Subclinical lung injury caused by positive pressure ventilation also cannot be ignored, and minor respiratory impairment is associated with postoperative complications (6). NITS also prevents potentially fatal laryngeal or tracheal injuries from intubation (7,8). The non-intubated method with adequate sedation can also reduce the use of analgesics, which can cause postoperative complications, including dizziness, vomiting, nausea, and respiratory depression (9). A residual neuromuscular block is another complication associated with muscle relaxants, eliciting postoperative acute airway events (e.g., hypoxemia, airway obstruction), unpleasant symptoms of muscle weakness, and delayed extubation (10,11). The easy collapse of the dependent lung is another issue with mechanical ventilation and muscle paralysis, potentially increasing the risk of hypoxemia during one-lung ventilation. Furthermore, these deleterious effects may be more significant for susceptible patients (12).
The effectiveness of NITS is comparable with conventional intubated VATS, despite its non-invasiveness. Most comparative studies reported similar operative times and blood loss volumes (13)(14)(15). Currently, non-intubated methods have few technically unfavorable effects on surgeons, and the overall surgical time is shortened by not requiring anesthesia induction (16,17).
Although non-intubated thoracic surgery has considerable benefits, there are some risks. First, hypercapnia and hypoxemia were frequently noted during NITS due to ineffective spontaneous one-lung ventilation (13), mostly in patients susceptible to systemic sedatives or with underlying lung diseases. Furthermore, it is difficult for anesthesiologists to control the sedation depth because the patient's respiratory pattern must be suitable for both the operative field and the maintenance of oxygenation (14). Bucking, respiratory movements, or an elevated diaphragm may interfere with the surgical field if the depth of anesthesia is not under steady control. When NITS started to become accepted in the early 2010s, some surgical teams used epidural anesthesia combined with intravenous narcotics (13,18,19). However, epidural anesthesia may be associated with a sympathetic blockade, possibly leading to increased bronchial tone and airway hyper-reactivity (20). If these deleterious effects worsen during the operation, NITS is highly likely to be converted to an intubated surgery.
Conversion from NITS into an intubated surgery is unwanted; when it is needed, correct timing and technical proficiency are crucial. Establishing appropriate selection criteria for non-intubated surgery and clear indications for conversion can better prepare the surgical team for an unanticipated condition (Table 1) (27). Therefore, anesthesiologists and surgeons should be well-prepared and alert for possible conversions to ensure patient safety and the acceptability of NITS.
NON-INTUBATED TO INTUBATED CONVERSION RATE
The conversion rate from NITS into an intubated surgery is between 2 and 11% (21)(22)(23)(24)(25)(26), differing based on centers and countries but primarily by learning time. NITS experience may be a key point affecting the conversion rate. Hung et al. reported a remarkable decrease in the conversion rate as the cumulative non-intubated case number increased over time (23). An ∼10% conversion rate was reported by this surgical team in 2011 when they had performed 100-150 non-intubated surgeries (13). However, in 2018, when the team had >1,000 cases, the conversion rate improved to <3% (28,29).
Other conversion risk factors were also reported, including older age, higher body mass index (BMI), and anatomical resection and adhesion (23,24). However, there was no strict cutoff point for age as an NITS exclusion criterion.
Regarding BMI, researchers have indicated that patients with a BMI >25 or >30 are not ideal candidates for non-intubated anesthesia because more vigorous respiratory movement is expected, resulting in a disturbed surgical field (23,24). The odds ratio for conversion into intubated VATS was reported to be as high as 9.09 (range, 3.59-25.46) for patients with a BMI >25 (23). Differences in conversion rates between countries may stem from variable baseline characteristics of the study populations, including BMI, sex, and body size. Western studies, which included 50% or fewer female participants, reported relatively high conversion rates of 5-10% (26,30), compared with the 2-5% reported by most studies from Eastern countries (14,18,28,29,31,32). Both ethnicity and sex are thought to be associated with body size and respiratory movement, which are often positively correlated in our experience.
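The odds ratio cited above comes from the referenced series; the sketch below only illustrates how such an odds ratio and its 95% confidence interval are derived from a 2x2 conversion table, using hypothetical counts rather than the original data:

```python
# How an odds ratio and 95% CI are derived from a 2x2 conversion table.
# The counts are hypothetical and do NOT reproduce the data of reference 23.
import math

a, b = 12, 38    # BMI > 25: converted / not converted (hypothetical)
c, d = 5, 145    # BMI <= 25: converted / not converted (hypothetical)

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```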
Anatomical resection is another factor affecting the conversion rate. A previously published series reported a 0.3% conversion rate for wedge resection but 2.7 and 4.7% for segmentectomy and lobectomy, respectively (23). Chen et al. reported similar results in another series, in which the conversion rate was 1.3%, 7.1%, and 5.8% for wedge, segmentectomy, and lobectomy, respectively (31). The reported conversion rates were comparatively higher (5.6-10%) for studies that only enrolled patients undergoing an anatomical resection (13,25).
Chronic lung distress could also impact the conversion rate and compel surgeons not to perform NITS owing to higher risks and inefficiency. For example, an emphysematous lung collapses more poorly and slowly during NITS procedures than during intubated ones. Furthermore, without intubation, respiratory function cannot be mechanically supported, and emergent conversion would probably occur. Nevertheless, although patients with impaired pulmonary function or chronic lung diseases are more susceptible to hypoxemia or hypercapnia, NITS may still be applied as a safe alternative to intubated general anesthesia in highly selected patients. There is no universal conversion rate in the literature because it is affected by numerous factors, such as the patient's baseline characteristics, the surgical team's experience level, and the complexity of the surgery.
CONVERSION INDICATIONS
Under certain conditions, NITS may be converted into an intubated surgery to ensure patient safety and facilitate the surgical process. For surgeons, considerable mediastinal movement is the most common reason (∼33.3-100%) to convert into intubated general anesthesia (15,18,24,27,28,32,34).
Strong mediastinal or respiratory movement can lead to difficulties in hilum dissection and accidental injury to vital structures. Males with larger body sizes and heavier body weights are more likely to require conversion due to major respiratory fluctuation. Obesity is often associated with a higher respiratory rate and lower tidal volume and may directly cause large respiratory movements (35,36).
Bleeding is another major reason for converting, accounting for ∼12.5-33% of all intubated conversions (13,18,25,26,28,31,37). During NITS, accidental bleeding was often related to significant and unexpected respiratory movements, underregulated cough reflexes, and hindered surgical fields caused by a non-collapsed lung. Conversion to intubated anesthesia makes respiratory movements controllable and reduces the surgeon's stress while protecting the contralateral airway if the bronchus was damaged.
For anesthesiologists, prolonged hypoxemia or hypercapnia generally elicits conversion from non-intubation into intubation. The cutoff criterion of hypoxemia varies among surgical groups. Some researchers have suggested converting if the oxygen saturation on pulse oximetry was <85% for more than 5 min (33). Others suggested that an oxygen partial pressure <60 mm Hg or carbon dioxide level >80 mm Hg should indicate conversion (38). The proportion of conversions resulting from hypoxemia or hypercapnia was between 14.3 and 33.3% (13,24,25,37).
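The published thresholds cited above can be combined into a simple rule, sketched below. Variable names and the structure are ours; this is an illustration, not a clinical protocol:

```python
# Illustration of the conversion thresholds cited above (refs. 33 and 38);
# variable names and structure are ours, not a clinical protocol.

def suggest_conversion(spo2_pct: float, spo2_low_minutes: float,
                       pao2_mmhg: float, paco2_mmhg: float) -> bool:
    desaturation = spo2_pct < 85 and spo2_low_minutes > 5  # criterion in ref. 33
    gas_exchange = pao2_mmhg < 60 or paco2_mmhg > 80       # criterion in ref. 38
    return desaturation or gas_exchange

# Example: SpO2 of 82% sustained for 6 min triggers the rule even when the
# blood gas values are acceptable.
print(suggest_conversion(82, 6, 65, 45))  # True
```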
Disease characteristics, such as large tumor size or severe adhesion, have an important effect on the decision to convert; non-intubated anesthesia increases the surgical difficulty in already challenging cases. Further, the operative time may be longer for complicated cases, and NITS likely aggravates hypoxemia or hypercapnia. Disease-related conversions account for 14.3-50% (18,21,27). Other interfering factors adversely affecting non-intubated VATS include a non-collapsed lung, susceptibility to pain, airway hypersensitivity, and ineffective epidural anesthesia or intravenous sedation (13,31,37,39).
The criteria for conversion from NITS to intubated VATS vary among groups, according to their experience levels, maturity, and surgical techniques. Nevertheless, the general principles and situations prompting conversion are similar. The aforementioned indications for conversion can be summed up as surgical and anesthesiological. Both aspects must be fully discussed among the physicians when the situation becomes severe, and the decision to convert should be made jointly.
NITS CONVERSION TECHNIQUE
Despite strict patient selection criteria, converting to intubation is sometimes necessary due to unexpected events, such as hypoxemia, strong mediastinal movement, massive bleeding, or a hampered surgical field. Conversions are usually unpredictable and occur in an emergency. Therefore, surgeons and anesthesiologists should be well-prepared and establish appropriate conversion criteria.
Although it is technically demanding to intubate a patient in the decubitus position, it should not be insurmountable for an experienced anesthesiologist. In these situations, bronchoscope-guided intubation may help, and good patient selection should also minimize struggling during difficult intubations (32,40). The Mallampati score is a good bedside indicator of potential obstacles to emergency intubation. A Mallampati score of 1 and acceptable neck extension with a thyromental space extending more than four finger breadths indicate easier intubation, even in the decubitus position (41).
Hypoxemia and hypercapnia during spontaneous one-lung ventilation are common reasons for conversion. For these patients, the collapsed lung can be quickly re-expanded once the surgical wounds are sealed with transparent waterproof dressings after chest tube insertion (13). Double-lumen endotracheal tubes and single-lumen tubes with a blocker can be considered if conversion is deemed necessary. A laryngeal mask is also an option for the conversion, depending on the dexterity of the anesthesiologist (21). Some surgical teams rotate the table to insert endotracheal tubes in a relatively supine position.
We preferred straightforward intubation in the lateral decubitus position. Muscle relaxants (cisatracurium or rocuronium) were administered after sedatives such as propofol and fentanyl (30). Intubation was best assisted with a video laryngoscope to facilitate the process. In our experience, single-lumen intubation with a blocker may be more efficient, but double-lumen intubation was also feasible.
Regardless, the anesthesiologist must be familiar with inserting an endotracheal tube, laryngeal mask, video-assisted intubation, and endobronchial blocker to choose the most suitable device depending on the patient's airway and position, the completion time of the procedure, and the causes for conversion. Together with well-established criteria and protocols, experienced team members are important for shortening the time from decision to intubation.
AVOIDING CONVERSION DURING NITS
Emergency intubation during conversion is usually undesirable, potentially resulting in airway injuries, prolonged operative time, and exacerbated stress of the team members. Therefore, avoiding conversion during NITS is ideal (8,42). Table 2 summarizes the points to consider to avoid conversion during NITS.
A preoperative patient briefing regarding the risks and benefits of the non-intubated method is necessary. Further, preoperative patient selection is crucial, and the attending surgeons should be exceptionally cautious and strict. Good candidates for NITS generally include those with a lower BMI, American Society of Anesthesiologists grades 1 or 2, and no cardiopulmonary issues (43,44). The complexity of the surgical procedure also affects the feasibility of NITS. Although there is growing evidence that non-intubated anesthesia is safe and feasible for various thoracic procedures, from simple lung resections to complicated anatomical malignancy resections, it has generally been more acceptable to perform relatively simple procedures with the nonintubated technique (45)(46)(47). As the surgical skills and anesthesia techniques improve, non-intubated surgery will be applied more widely for moderately risky patients and difficult surgeries (26,(48)(49)(50)(51).
Regarding the monitoring of non-intubated patients, a three-lead electrocardiogram, blood pressure monitoring, and pulse oximetry are minimally required. Monitoring airway patency, respiratory rate, and respiratory pattern during onelung ventilation are also important. Careful monitoring can detect early signs that conversion is necessary and avoid emergency conversions.
The anesthesia depth should be carefully monitored and well-controlled, for example, by bispectral index monitoring, which helps guide anesthesiologists. Various levels of anesthesia depth have been reported during non-intubated surgery. However, it is most important for the anesthesiologist to balance respiratory function and mediastinal movement (45,47,(52)(53)(54). If a patient is over-sedated at the surgeon's request for less respiratory movement, hypoxemia and hypercapnia are likely to occur, leading to conversion, and vice versa. Thus, adequate non-intubated surgery is based on a well-regulated depth of anesthesia with reasonable respiratory support, such as an oxygen mask or high-flow nasal cannula (29).
Regional anesthesia techniques may also prevent conversions; these include local wound infiltration, intercostal nerve blocks, vagal nerve blocks, thoracic paravertebral blocks (mostly erector spinae plane blocks), and epidural blocks. Regional blockades reduce the use of systemic sedatives and minimize respiratory depression. Surgeons benefit from regional anesthesia because the cough reflex can be inhibited and there are fewer pain-induced movements (27). For NITS, sedation with regional analgesia is essential, indicating that a multi-pronged approach to analgesia is recommended.
Based on our experience, we shifted our regional anesthesia technique from epidural blocks to intercostal nerve blocks via direct thoracoscopic view. Epidural anesthesia may have risks of spinal cord injury, respiratory suppression, and hypotension (23). Intravenous anesthesia with regional anesthetic methods via direct thoracoscopic view, such as vagal block, intercostal block, and wound infiltration, is generally sufficient for the NITS procedure.
An inexperienced team is an absolute contraindication for non-intubated surgery (21). All team members should be accomplished and proficient in the surgical and anesthesiological methodologies and informed of the potential complications and subsequent protocols. If a conversion is required, elective conversion is always preferable to emergency conversion; thus, all possible and subtle difficulties of every patient must be assessed and anticipated as early as possible. The equipment and drugs, including single- and double-lumen tubes, a bronchoscope, an endotracheal blocker, other advanced airway equipment, and first-aid medicine, must be prepared to avoid conversion failure.
CONCLUSION
Conversion into intubated VATS is sometimes required during NITS. With correct patient selection, sufficient preparation, carefully established protocols, and skilled and rapidly acting surgical team members, the risks and complications of converting to intubation can be minimized to an acceptable level. NITS could be the treatment of choice for thoracic malignancies if physicians are proficient in preventing and managing potential conversions.
AUTHOR CONTRIBUTIONS
X-HC prepared the initial draft of this article, which was refined by M-WL. All authors contributed to the article and approved the submitted version. | 2021-10-26T13:28:07.607Z | 2021-10-26T00:00:00.000 | {
"year": 2021,
"sha1": "e749c28e1f9ffe1e958342cca0b7d4d4663e059c",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fsurg.2021.769850/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e749c28e1f9ffe1e958342cca0b7d4d4663e059c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
4940733 | pes2o/s2orc | v3-fos-license | Intensification of antiretroviral treatment with raltegravir for pregnant women living with HIV at high risk of vertical transmission.
Objectives: The rate of vertical HIV transmission for women at high risk of transmission stands at approximately 7.6%. In the present study we describe infant infection rates in women who had received raltegravir (RAL) intensification of a standard three-drug antiretroviral (ART) regimen during pregnancy in Thailand. Methods: This prospective cohort study enrolled HIV-1-positive pregnant women at high risk of vertical transmission, as defined by (1) ART initiation at a gestational age (GA) ≥32 weeks or (2) HIV-1 RNA >1000 copies/mL at a GA of 32-38 weeks while on ART. Women received a standard three-drug ART regimen with RAL intensification (400 mg twice daily) until delivery and continued on a three-drug ART regimen after delivery. Plasma HIV-1 RNA testing was performed before intensification and at delivery. Infant HIV-1 status was determined using DNA PCR at birth, and at 1, 2 and 4 months of life. Results: Between February 2016 and November 2017, 154 pregnant women on ART were enrolled into the study, with a median CD4 cell count and plasma HIV-1 RNA level of 382 cells/mm3 and 4.0 log10 copies/mL, respectively. The three-drug combination consisted of either a lopinavir/ritonavir-based (53%) or efavirenz-based (43%) regimen. Median GA at the time of RAL initiation was 34 weeks (interquartile range [IQR] 33-36) and the median duration was 21 days (IQR 8-34). The proportions of women who had a plasma HIV-1 RNA <50 and <1000 copies/mL at delivery were 45% and 76%, respectively. There were six infants with HIV infection: three infected in utero and three peripartum. The overall vertical transmission rate was 3.9% (95% confidence interval [CI] 1.4-8.2). Conclusion: The majority of high-risk pregnant women living with HIV-1 who received RAL intensification achieved viral suppression at delivery, with a relatively low rate of vertical transmission. This intensification strategy represents an option for prevention in HIV-positive women at high risk of vertical transmission.
Introduction
The World Health Organization (WHO) has set targets to eliminate HIV vertical transmission by 2020 [1] using the following criteria: vertical transmission rate less than or equal to 50 per 100,000 live births and less than 2% in non-breastfeeding populations for at least a year [2]. Thailand was the first country in Asia to receive WHO validation for the elimination of vertical transmission by meeting these targets, with a rate of 1.9% in 2015 [3]. The Thai Ministry of Public Health has set goals to bring this rate below 1% [3,4].
The risk of mother-to-child transmission (MTCT) depends mostly on the gestational age (GA) at the time of antiretroviral therapy (ART) initiation and on the HIV-1 viral load (VL) level before delivery. Earlier ART initiation and HIV suppression at the time of delivery are associated with a reduced risk of vertical transmission [5,6]. A study from the UK and Ireland found that vertical transmission rates in pregnant women with HIV-1 VL levels near delivery of >10,000 and 1000-9999 copies/mL were 9.2% and 3.0%, respectively, in contrast to <0.1% if the VL was <50 copies/mL [7]. Data from the 2016 Spectrum AIDS Impact model estimated the probability of vertical transmission among pregnant women who had received ART for less than 4 weeks before delivery at 7.6% [8]. Women living with HIV-1 who have a high VL and present in the late third trimester of pregnancy are unlikely to achieve an undetectable VL (<50 HIV-1 copies/mL) by the time of delivery when using a standard three-drug non-nucleoside reverse transcriptase inhibitor (NNRTI)- or protease inhibitor (PI)-based ART regimen.
Raltegravir (RAL), an HIV-1 integrase inhibitor, is used in pregnancy [9,10] as it rapidly reduces HIV-1 VL by 2 log10 copies/mL within 2 weeks of treatment initiation [11][12][13][14][15]; it also crosses the placenta and therefore has potential as a pre-exposure prophylaxis agent to prevent vertical transmission [14,16,17], as reported among late-presenting pregnant women in several countries such as Austria [15], Brazil [14] and France [11]. Pregnant women who were treated with RAL had an overall vertical HIV-1 transmission rate of 0.7%. However, the rate was higher, at 1.3%, among the subgroup of women who had received it during the third trimester of pregnancy [10]. The Thai elimination of mother-to-child transmission of HIV programme has investigated factors associated with vertical transmission among infants infected with HIV-1 who were born between 2014 and 2016. The major factors associated with transmission were: late presentation to antenatal care; incident HIV-1 infection during pregnancy; and poor ART adherence [18]. As a result, we set up a prospective pilot study in collaboration with the Thai Red Cross AIDS Research Center to assess the impact of the addition of RAL to standard three-drug ART regimens on vertical transmission rates in HIV-1-positive pregnant women who were late presenters or had high HIV-1 viral loads near delivery.
Methods
This prospective cohort study of RAL intensification in pregnant women at high risk of MTCT was initiated by the Thai Red Cross AIDS Research Centre, under the Patronage of Her Royal Highness Princess Soamsawali. The Thai 2016 PMTCT guidelines recommend a three-drug ART regimen during pregnancy using efavirenz and tenofovir disoproxil fumarate (TDF) plus either lamivudine (3TC) or emtricitabine (FTC). Alternative regimens include lopinavir/ritonavir (LPV/r) with either TDF or zidovudine (ZDV), plus 3TC. These guidelines also recommend an elective Caesarean section for high-risk pregnant women with an HIV-1 RNA level >1000 copies/mL at 34-38 weeks' GA or for those who have received a standard three-drug ART regimen for <12 weeks [19]. However, the mode of delivery in Thailand depends on physician discretion and hospital capacity.
The study inclusion criteria were: pregnant women with HIV infection who initiated ART at a GA ≥32 weeks, or who had an HIV-1 RNA >1000 copies/mL at 32-38 weeks' GA despite being on ART. Pregnant women with HIV-1 infection from all areas of Thailand were given access to this programme. Their ART regimen was dispensed by their local hospital pharmacy, and RAL was couriered from the Thai Red Cross AIDS Research Center to antenatal care clinics. Women provided written consent for participation in the study, which was approved by the Institutional Review Board of the Faculty of Medicine, Chulalongkorn University, Bangkok, Thailand.
Antiretroviral regimens during pregnancy and postpartum
Twice-daily oral RAL 400 mg was added to the standard three-drug ART regimen for pregnant women up until delivery. Raltegravir was stopped after delivery, while the three-drug ART regimen was continued postpartum. The Thai 2016 PMTCT guidelines recommend a 6-week course of triple therapy with oral ZDV 4 mg/kg and 3TC 2 mg/kg every 12 hours, plus nevirapine (NVP) 4 mg/kg once daily, as post-exposure prophylaxis for infants born to mothers at high risk of vertical HIV-1 transmission. Infant formula is provided to HIV-exposed infants up to 18 months of age. Breastfeeding is not recommended.
Laboratory methods
HIV-1 RNA levels were measured at the time of RAL initiation and at delivery using either plasma or dried blood spot (DBS) samples. Plasma HIV-1 RNA quantification was performed using the COBAS AmpliPrep/COBAS TaqMan HIV-1 Test (Roche Molecular Systems, NJ, USA) or the Abbott RealTime HIV-1 assay (Abbott Molecular Inc, IL, USA), with limits of detection of 20 and 40 copies/mL, respectively. HIV-1 RNA testing on DBS specimens was performed using the Abbott m2000rt (Abbott Molecular Inc, IL, USA) at the HIV-NAT Research Laboratory, Bangkok, Thailand. The assay has been validated and reports HIV-1 RNA levels comparable to those in plasma, with a detection cut-off of 839 copies/mL. In this study, when the DBS HIV-1 RNA was reported as <839 copies/mL, it was categorised into the same group as plasma HIV-1 RNA 50-999 copies/mL and, if reported as undetectable, into the same group as plasma HIV-1 RNA <50 copies/mL.
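The harmonisation between DBS and plasma viral load results described above amounts to a small mapping rule; the sketch below encodes it with function and category labels of our own choosing (the handling of quantified DBS values at or above the cut-off is our assumption):

```python
# Sketch of the DBS/plasma viral-load harmonisation described above;
# function and category labels are our own.
from typing import Optional

def vl_category(copies_per_ml: Optional[float], specimen: str) -> str:
    """Return '<50', '50-999' or '>=1000' on the plasma scale.

    For DBS, 'undetectable' (None) maps to '<50', and any detectable value
    below the 839 copies/mL cut-off maps to '50-999'. Quantified DBS values
    at or above the cut-off are compared with the plasma thresholds
    directly (our assumption).
    """
    if specimen == "plasma":
        if copies_per_ml < 50:
            return "<50"
        return "50-999" if copies_per_ml < 1000 else ">=1000"
    if specimen == "dbs":
        if copies_per_ml is None:  # reported as undetectable
            return "<50"
        return "50-999" if copies_per_ml < 839 else ">=1000"
    raise ValueError("specimen must be 'plasma' or 'dbs'")

print(vl_category(None, "dbs"))  # '<50'
print(vl_category(600, "dbs"))   # '50-999'
```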
The infant HIV-1 status was determined by DNA PCR at birth (0-7 days after birth) and at 1, 2 and 4 months of life [19]. HIV-1 infection in utero was defined by a positive HIV-1 DNA PCR result at birth. Infants were diagnosed as HIV-1 infected if they had two positive HIV-1 DNA PCR test results. Uninfected infants were defined as having at least two negative HIV-1 DNA PCR test results, with at least one performed at ≥4 months of age. Probably uninfected infants were defined as having at least two negative DNA PCR test results, with at least one negative at 2 months of age. Possibly uninfected infants were defined by a negative DNA PCR test result at birth and at 1 month. Infants presumably uninfected by the in utero route were defined by a negative DNA PCR test result at birth.
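These definitions form a small decision procedure. The sketch below is our own simplified encoding (test results keyed by age in months, birth = 0); it compresses the timing details and is not the study's actual algorithm:

```python
# Simplified encoding of the infant HIV-1 status definitions above.
# `results` maps the age in months at testing (birth = 0) to the HIV-1 DNA
# PCR result (True = positive). Timing details are compressed; this is not
# the study's actual algorithm.

def infant_status(results: dict) -> str:
    positive_ages = [age for age, pos in results.items() if pos]
    negative_ages = [age for age, pos in results.items() if not pos]
    if len(positive_ages) >= 2:
        return "infected (in utero)" if 0 in positive_ages else "infected"
    if len(negative_ages) >= 2 and any(age >= 4 for age in negative_ages):
        return "uninfected"
    if len(negative_ages) >= 2 and any(age >= 2 for age in negative_ages):
        return "probably uninfected"
    if 0 in negative_ages and 1 in negative_ages:
        return "possibly uninfected"
    if 0 in negative_ages:
        return "presumably uninfected (in utero route)"
    return "indeterminate"

print(infant_status({0: False, 1: False, 2: False, 4: False}))  # 'uninfected'
print(infant_status({0: True, 1: True}))                        # 'infected (in utero)'
```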
Statistical analysis
Data were analysed using Stata version 13. HIV-1 perinatal transmission rates are reported as percentages with 95% confidence intervals (CI). Regardless of the number of babies from one pregnancy, infants were counted individually. The proportions of pregnant women with HIV-1 RNA levels <50 and <1000 copies/mL at delivery are reported as percentages. The viral decay was calculated by comparing HIV-1 RNA levels in log10 copies/mL at the time of RAL initiation and at delivery.
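As a worked example of the interval estimation mentioned above, the snippet below computes an exact (Clopper-Pearson) binomial 95% CI for a transmission proportion. Whether the study used this exact method is an assumption on our part, and the counts (6 infected among 155 infants) are our reading of the Results:

```python
# Exact (Clopper-Pearson) 95% CI for a binomial proportion; whether the
# study used this particular method is an assumption on our part.
from scipy.stats import beta

def exact_ci(x: int, n: int, alpha: float = 0.05):
    lower = beta.ppf(alpha / 2, x, n - x + 1) if x > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, x + 1, n - x) if x < n else 1.0
    return lower, upper

x, n = 6, 155  # infected infants / infants born (our reading of the Results)
lo, hi = exact_ci(x, n)
print(f"{x / n:.1%} (95% CI {lo:.1%}-{hi:.1%})")  # close to the reported interval
```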
Demographic data
Between February 2016 and November 2017, 154 high-risk HIV-positive pregnant women were prospectively enrolled to receive RAL intensification. Of these, 113 (73%) were late-presenting, previously untreated pregnant women, and 41 (27%) were on ART with a high HIV-1 VL; for this group, the three-drug ART regimen was intensified by adding RAL. Women originated from various regions of Thailand: 39% from Bangkok and Central Thailand, 18% from the Northeast, 17% from the South, 16% from the East and 10% from the North. Their clinical characteristics are presented in Table 1. The median age was 23 (IQR 19-29) years, the median CD4 cell count 382 cells/mm3 (IQR 171-545), and the median GA at the time of RAL initiation 34 (IQR 33-36) weeks, with a median interval of 2 days (IQR 0-4) between enrolment and RAL initiation. The median duration of RAL therapy was 21 days (IQR 8-34), with 41% of women receiving <2 weeks, 25% 2-4 weeks, 22% 4-6 weeks and 11% 6-9 weeks of treatment.
Of the 154 pregnant women, 127 had paired HIV-1 RNA samples at the time of RAL initiation and at delivery. The median HIV-1 RNA decrease was 1.64 log10 copies/mL over a median duration of RAL intensification of 21 days (IQR 8-34).
HIV-1 vertical transmission rates
There were 155 infants born, including two sets of twins, and one fetal death in utero. Seventy-five (48%) were delivered by Caesarean section and 80 (52%) by vaginal delivery. The median GA at birth was 39 weeks (IQR 38-39). Of the births, 11% were preterm deliveries (GA <37 weeks) and 19% of infants were born with a low birth weight (<2500 g). All infants were formula-fed.
In total, there were six HIV-positive infants (each with two positive HIV-1 DNA PCR results), giving a 3.9% (95% CI 1.4-8.3) vertical transmission rate, as shown in Table 3. There were three in utero and three peripartum HIV infections. The clinical characteristics of the HIV-1-infected children are shown in Table 4. For the three infants who acquired HIV in utero, the mothers had initiated ART or received raltegravir intensification at 34-35 weeks' GA, possibly after transmission had occurred. For the three infants infected peripartum, one of the mothers (Case 6) reported not taking ART, consistent with the documented lack of VL decrease. In another case (Case 4), the infant had received only ZDV as neonatal prophylaxis, which was not appropriate in this case.
The transmission rate, stratified by mode of delivery and indication for RAL use, is presented in Table 5. In addition, six infants were presumably uninfected by the in utero route, with negative HIV-1 DNA PCR test results at birth, before being lost to follow-up. Maternal HIV-1 RNA at the time of delivery was <200 copies/mL in five of these cases and 4338 copies/mL in the remaining one.
Serious adverse events in infants
There were two infants with congenital anomalies: one with trisomy 21 and the other with gastroschisis. Two deaths occurred: one in utero and one in the neonatal period. In the case of the in utero death, the mother had been diagnosed with HIV-1 infection 3 years before the pregnancy and was initiated on a TDF/3TC/LPV/r/RAL regimen at a GA of 33 weeks. Thirteen days later, she presented to hospital with reduced fetal movements and was referred to a tertiary care centre within 24 hours. The fetus, with a body weight of 1835 g, was delivered vaginally as an in utero fetal death. The second infant death occurred at home at 7 days of life, from an unknown cause. The mother was a migrant with a CD4 cell count of 548 cells/mm3 and plasma HIV-1 RNA levels of 4325 copies/mL at the time of RAL initiation and 192 copies/mL at the time of delivery. She had received ZDV, 3TC and LPV/r with RAL intensification for 21 days. The infant was born at a GA of 39 weeks with a birth weight of 2990 g, no perinatal complications, and a negative HIV-1 DNA PCR at birth.
Discussion
The rate of HIV vertical transmission in this study, among HIV-1-positive pregnant women at high risk of MTCT owing to late presentation or persistent viraemia and who received RAL intensification of their three-drug ART regimen, was 3.9%, which is lower than the 7.6% estimated by the 2016 Spectrum AIDS Impact model. Importantly, we have shown that the majority of these women, within a resource-limited setting, achieved a plasma HIV-1 RNA <1000 copies/mL at the time of delivery, and that half of the infants were born by vaginal delivery. Therefore, we suggest that a RAL-based intensification strategy might be an option for reducing HIV-1 vertical transmission among high-risk pregnant women in settings of this kind, where an elective Caesarean section is not routinely available. The British HIV Association (BHIVA) guidelines for the management of HIV-1 infection in pregnant women recommend the use of a three- or four-drug regimen that includes RAL for women who present after 28 weeks of pregnancy with an unknown VL or with a VL >100,000 copies/mL [20]. In the present study, the addition of RAL as a fourth drug during pregnancy provided a simple way to revert to a standard three-drug regimen postpartum.
We observed that a plasma HIV-1 RNA <1000 copies/mL was achieved by 76% of study participants by the time of delivery, with a median decrease of 1.6 log10 copies/mL during a median 21 days of RAL intensification. This level of viral decay is smaller than previously reported (1.1 log10 copies/mL per week), which may be explained by the lower baseline HIV-1 RNA level in our study (4.0 log10 copies/mL compared with 5.4 log10 copies/mL) and also by the shorter duration of RAL [21]. There is an initial rapid HIV-1 RNA decay phase during the first 14 days of RAL administration [22]. This most likely contributed to the high proportion of participants achieving a plasma HIV-1 RNA <1000 copies/mL, despite the short period of RAL intensification.
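The arithmetic behind this comparison is straightforward: converting the observed median decrease to a weekly rate, using the medians reported above, makes the difference from the earlier figure explicit:

```python
# Converting the observed median decay to a weekly rate, using the medians
# reported in this study.
observed_drop = 1.64   # median decrease, log10 copies/mL
duration_days = 21     # median duration of RAL intensification, days
weekly_decay = observed_drop / (duration_days / 7)
print(f"{weekly_decay:.2f} log10 copies/mL per week")  # ~0.55 vs 1.1 in ref. 21
```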
The vertical transmission rate of HIV-1 was higher in our study than in a review where it stood at 1.3% in a subgroup of 153 pregnant women who had received RAL during the third trimester of pregnancy [10]. This might be explained by the shorter intensification duration and higher rate of vaginal delivery in our participants. The reduction in vertical transmission rates associated with RAL intensification might be explained by the rapid VL reduction in pregnant women and transplacental drug transfer to provide post-exposure prophylaxis to the fetus. Indeed, a previous study had shown a median 1.48 cord blood/maternal plasma RAL concentration ratio [20], with concentration in neonates remaining above the IC95 for wild-type HIV-1 up to 36 hours post-delivery [23].
The present study was a pilot operational research study that tested the feasibility of nationwide RAL intensification in a population at high risk for vertical transmission in a resource-limited country.
There are three important features of our programme: (1) communication via telephone, fax, email and mobile phone-based messaging between remote hospitals and the Thai Red Cross Center in Bangkok to confirm eligibility criteria; (2) logistical support for drug distribution to remote hospitals by courier, with a median time from enrolment to RAL initiation of 2 days; and (3) the use of DBS specimens to quantify maternal HIV-1 RNA levels and HIV-1 DNA in infants. We could therefore ascertain the HIV-1 viral load of the majority of pregnant women and the HIV-1 infection status of all the infants in this cohort. This study leveraged an established infrastructure for the nationwide implementation of PMTCT and an early HIV-1 infant diagnostic programme [24]. RAL intensification could, therefore, be implemented successfully in the nationwide health programme. The current 2016 Thai national guidelines recommend RAL intensification for high-risk pregnant women and, if listed in the national essential drug list, RAL will be accessible at all hospitals throughout Thailand [19].
We are aware of the limitations of this study, which include the lack of a randomised control arm. However, we considered it unethical to randomly allocate women to a standard three-drug regimen versus RAL intensification when there is support in the literature and in the BHIVA guidelines for this type of intensification in late-presenting pregnant women. Also, we cannot provide data on systematic adverse event (AE) monitoring and reporting in these pregnant women and their offspring, for example hepatitis and hyperbilirubinaemia; however, we believe we have captured serious AEs such as infant death and congenital anomalies. French authors have described an absence of increased risk of birth defects or severe AEs in infants after RAL use during pregnancy [25,26]. Detailed data on the mode of delivery were also incomplete in this study, with a lack of information on whether Caesarean sections were performed on an emergency or elective basis. Lastly, six infants who were presumed HIV-1 uninfected, as they had a documented negative HIV-1 DNA PCR result at birth, lacked follow-up to 4 months of age. Their mothers had a low plasma HIV-1 RNA at delivery, and these infants had been prescribed a three-drug post-exposure ART regimen perinatally. We can also add that there may have been issues with adherence to ART in this study, as RAL is administered as a twice-daily regimen in pregnancy. Dolutegravir (DTG), a newer integrase inhibitor, is given once daily and therefore potentially offers superior dosing convenience. Pharmacokinetic data on its use in pregnant women have been published [27], as well as pilot data from Botswana, where this drug is used as a first-line regimen in HIV-1-positive adults [28]. However, further data on the use of DTG in pregnancy are needed. Because of drug interactions, as is the case with efavirenz, requiring DTG to be dosed at 50 mg twice daily in some combinations, DTG might also be associated with decreased adherence to medication in some instances [29].
In conclusion, we have shown in an open-label, prospective observational cohort study that RAL intensification of a three-drug ART regimen in pregnancies at high risk for vertical transmission in a resource-limited setting such as Thailand is feasible, is associated with a VL <1000 copies/mL in the majority of women at the time of delivery, and might be effective in decreasing vertical transmission in the context of an established PMTCT programme. It may be regarded as an effective strategy to further reduce vertical transmission, and achieve an overall vertical transmission rate below 1% in the country. | 2018-04-27T07:21:59.316Z | 2018-04-01T00:00:00.000 | {
"year": 2018,
"sha1": "a75ded24cd825e2c20db6aebb7d08a9044df1810",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1016/s2055-6640(20)30246-6",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a75ded24cd825e2c20db6aebb7d08a9044df1810",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
233367534 | pes2o/s2orc | v3-fos-license | Developing a peer supported feedback model that enhances oral proficiency in French
This article investigates the development of a novel online peer-supported approach for enhancing oral proficiency in French at an Australian university, designed to cope with increasingly complex challenges. These challenges include students of mixed ability in the same class, reduced teaching resources, and student surveys identifying a lack of speaking practice affecting confidence and performance in oral assessments. A related aim of the present study was to facilitate the assessment literacy of our students by encouraging them to make links between the skills practised in class and the requirements of the final oral summative assessment. The methodology draws on educational practice influenced by a social constructivist approach to develop a learning model using online peer feedback, where more advanced learners support less experienced peers outside the formal classroom. Preliminary results reveal that although the model was deemed to be 'generally effective' in enhancing speaking skills and developing a better understanding of assessment literacy, it needs to enable learners to build their meta-skills across the three-year degree program to be truly effective. The conclusion explores the further development and expansion of the learning approach across the French undergraduate program and makes recommendations for the future.
Introduction
At our university, language enrolments have steadily increased over the last decade following the implementation of a compulsory language component in the Bachelor of International Studies program in 2010. French and Spanish are the two languages with the highest student enrolments. The first year of French studies is the entry point to a French major or a French minor. As such, it assumes no previous language skills, and students come from various Faculties and educational backgrounds with different levels of proficiency, ranging from six years of secondary language study to none.
Following recent recommendations of a Faculty review of the languages taught, face-to-face tuition hours in undergraduate subjects were curtailed. In French, as well as in Italian and Spanish, first-year students currently have four hours of weekly face-to-face tuition instead of the previous six hours. Second-year students have three hours of weekly face-to-face classes instead of four, and third-year students have three hours of face-to-face teaching. The cascading effect of these changes has already affected the 2016 cohort of students, who arrived in their third year of French studies in 2018 having received 16% less face-to-face tuition time than their predecessors.
Consequently, increasing numbers of students enrolling with a range of proficiencies and varying support needs, compounded by limited teaching resources, challenged us to rethink our approach to delivering the French curriculum. In addition, student satisfaction surveys in our French beginner subjects highlighted a desire for more speaking practice in the tutorials. Indeed, students believed that more practice could help them improve their oral proficiency in assessed tasks, as illustrated in this student's comment: 'Actually learning to speak what we have learned [sic] so when it comes time for the oral test it would be somewhat easier and with correct pronunciation'. Previous reductions in face-to-face hours had motivated us to find new ways to meet the needs of our students. One of these was to increase time on task to support and enhance the language learning experience using online forum discussions and blogs (Jones & Bissoonauth-Bedford, 2008; Bissoonauth-Bedford & Stace, 2012; Bissoonauth-Bedford & Stace, 2015).
In this paper, we present and discuss the development of a peer-supported model to enhance oral proficiency in French. The novelty in our approach was for more advanced learners who had already achieved the learning outcomes to provide online formative feedback on oral tasks to less experienced peers in a regular and scaffolded manner. Moreover, this study fitted into the 'Curriculum Transformation' portfolio at our university, which aimed to develop students' assessment literacy defined as "understanding of the rules surrounding assessment in their course context, their use of assessment tasks to monitor or further their learning […] to produce work of a predictable standard" (Smith et al. 2013: 46). Three research questions guided our study: (i) How do students perceive engaging in additional speaking practice and receiving formative feedback online? (ii) How does regular formative feedback aid understanding of what is expected in oral assessment? (iii) What are the perceptible effects of additional regular practice with feedback on performance at the oral summative assessment?
Literature Review
Recent studies into the learning habits of millennial students have highlighted the appeal of hybrid or blended learning environments that combine face-to-face education with new technologies, particularly the social aspect of belonging to a community. In addition, the importance of creating a cohesive online community of learners, where students feel a sense of belonging and a corresponding need to contribute to that community, is also an important component to successful learning (Lord & Lomicka, 2008;Garrison & Vaughan, 2008;Bissoonauth-Bedford & Stace, 2015). Since students have a "decreased tolerance of lecture-style dissemination of course information", they prefer "24/7 information connectedness [...], environments that support multi-tasking, gravitation toward group activity and appreciation of the social aspects of learning" (Roehl, Reddy & Shannon, 2013, pp. 44-45). This observation was further corroborated in our own pre-pilot project survey, carried out in 2015, in which 93% of the students (n = 106) affirmed that they would value the opportunity to complete online tasks in order to practise their oral language skills.
Critically, there is evidence to suggest that the use of a hybrid approach in second language (L2) acquisition can improve oral language skills. Yeh et al. (2019) found that online peer feedback via blogging videos had a positive effect on the speaking performance of college students studying English as a Foreign Language (EFL). Kim (2015) outlined the results of a project, which involved students communicating with one another asynchronously in Korean on the mobile phone application Kakaotalk. Learners of the L2 regularly recorded themselves online and were subsequently given feedback by native speakers. Whereas this method did improve aspects of the students' speaking performance, such as pronunciation, Kim recommended face-to-face meetings to supplement students' online exchanges, since a measure of synchronous communication allows for the spontaneous asking of questions (such as learners checking correct usage of a term) as well as instantaneous feedback. Kırkgöz (2011) reported that the regular recording of speaking tasks (via video), backed up by subsequent feedback and analysis, helped students to improve their oral proficiency in EFL, expanded their vocabulary, helped them to overcome their anxiety and fostered collaborative learning. In the context of our research, it was hoped that regular formative feedback would help students develop the necessary oral language skills and, via increased assessment literacy, feel better prepared for the summative assessment, thus helping mitigate the anxiety usually associated with this assessment.
Various studies have highlighted the benefits of regular feedback and support to students, particularly when it is provided in a formative capacity from an early stage (Vonderwell & Boboc, 2013; Hattie, 2009; Nicol & Macfarlane-Dick, 2006; Nicol & Milligan, 2006; Liu & Carless, 2006; Lawrie et al., 2013; Nader, 2019). As previously noted, it is not difficult to see why feedback from students at our institution has revealed a desire for more extensive speaking practice throughout the semester. However, regularly providing targeted individual feedback to all students in high-enrolment subjects (such as first-year French) can be time-consuming, if not unfeasible, particularly as teaching resources are limited. Online peer-supported review of oral tasks with individual feedback, using students who had previously successfully completed the same learning objectives, offered us a potential solution to these challenges, since it could be delivered and accessed flexibly online. In addition, it was hoped that all cohorts of students involved in the research (those receiving and those providing feedback) would benefit and consolidate their language learning. We had found that this was the case in a previous study where university students had peer tutored their high school peers (Bissoonauth-Bedford & Stace, 2017). The peer review activity in this paper is to be understood as part of a broader educational strategy known as peer learning, defined as 'students learning from and with each other in both formal and informal ways' (Boud et al., 2001, p.4).
The present study adopted a blended approach combining traditional teaching methodology with an online modality to provide additional opportunities to practise speaking and receive formative feedback. Providing aid in the form of 'scaffolding' through successful models allows learners 'to accomplish tasks and develop understandings that they would not be able to manage on their own' (Gibbons and Hammond, 2001, p.3), and illustrates Vygotsky's (1962, 1978) approach to language learning, where social interaction and guidance by a more knowledgeable learner are key to cognitive development and successful learning.
Methodology and Data collection
Our study was divided into a number of phases. First, ethics approval was obtained from the University's Human Research Ethics Committee on the condition that there would be no control group and that all students would equally benefit from the research. Phase 1 involved a proof of concept in 2016 with the first-year students, and was progressively rolled out in phases across 2017 and 2018 to the second and third-year students respectively as summarised in Figure 1a below. The present article focuses on phases one and two of the study, which consisted in identifying the problem, developing a solution to trial in a pilot phase and making recommendations for further development and expanding the study to the complete French undergraduate program in Figure 1b below. As such it aligns with principle 5 of LCNAU (Languages and Cultures Network for Australian Universities) which 'fosters systematic review, reflection and monitoring of improvements in program design and pedagogy for university languages programs' with a view to 'provid[ing] a nation-wide focus for continuous sharing of good practice' (https://www.lcnau.org/about/).
Figure 1a
Overview and timeline of the main phases of the study
Figure 1b
Peer review and formative feedback across the French undergraduate program
Pilot study phase one with first-year students
In the second semester of 2016, a pilot study was designed and implemented with first-year beginner level students to develop extra speaking practice with personalised feedback provided online. The first-year cohort consisted of 87 students in their second semester of study. They were aged between 18 and 41 years. Three additional online oral tasks were set as formative speaking activities in weeks 4, 8 and 11 of the semester. The three tasks were based on topics studied in the first-year curriculum such as: 1. la routine (daily routine); 2. acheter un cadeau d'anniversaire (shopping for a birthday present) and 3. faire les courses pour une fête (food shopping for a party). In the first task, students had to give a brief account of their daily routine over a typical week using the vocabulary learnt in class. In the second task, students had to work in pairs to create a dialogue around buying a birthday present for a family member or a friend with one acting as a customer and the other one as a shop assistant. In the third task, students were asked to create a conversation on food shopping for a party they were hosting. While the first task was completed individually, the remaining two were done in pairs, outside of formal classes and students had to record their conversations and upload them onto the LMS (Moodle) for teacher feedback. As such, the three oral tasks were not formally assessed or allocated marks within the subject.
Phase one of the pilot study was facilitated by the two academic staff, who each taught one language tutorial of approximately 20 students. The two remaining tutorials were taught by casual academics, but the students were given the same opportunity to do additional speaking and receive formative tutor feedback. Formative oral feedback in this phase was provided by the teachers to trial its feasibility and efficiency, and was supported by the PoodLL functionality in the LMS for each of the three oral tasks. The teachers used the criteria required for oral assessment, such as pronunciation, fluency, grammar, vocabulary and appropriate use of register (Appendix 1), in their formative feedback in the week following submission of the additional speaking tasks. Students were encouraged to revise their speaking task in the light of teacher feedback and resubmit within a week if they so wished.
Qualitative data on perceptions of the additional speaking tasks in the pilot phase and their impact on helping students prepare for their end of semester final oral assessment were collected via semistructured interviews (Appendix 2). Sixteen students voluntarily stayed behind after the oral assessment to participate in the evaluation of phase one of the pilot study. Since the final oral assessment was conducted in pairs, the students were interviewed in their respective pairs. Responses were digitally recorded and transcribed verbatim.
Evaluation of phase one with first-year students
Data for question 1 (Appendix 2) on the usefulness of formative feedback indicate that students perceived regular personalised feedback from teachers as 'helpful, especially for pronunciation' because 'it is good to know what you are pronouncing right and wrong'.
Data for question 2 (Appendix 2) on the effectiveness of the tasks in preparing for the oral assessment shows that the three additional oral tasks were perceived as 'good' and 'actually useful' in preparing for their final oral assessment. Reasons given were that 'we had a bit of anxiety about what kind of questions would be asked', but 'you had to prepare for it […] one can contribute at one's leisure' and 'I acquired like a confidence boost' and 'the plugin [PoodLL installed in a Moodle environment] was good'. The drawback for some was 'finding someone to work with [in weeks 8 and 11] was not always easy because of our various commitments'.
Question 3 (Appendix 2) asked students to elaborate on what else could have helped them improve their language proficiency. Responses for this question varied. Some admitted that 'additional speaking in class' would have helped more, whilst others admitted 'I hate computer stuff', but nonetheless conceded 'I see value in them…I just hate doing them'. Some thought that 'the size of the class could be smaller' with 'a little more like on the spot practice' because 'I still need to speak more French not only to understand when someone asks me something but also to reply in the right way like instead of lingering answers'.
On the other hand, some participants admitted they 'cannot interact with people very well'. Others 'liked it when you [the teacher] had time in class to come round and give us instant feedback' whilst acknowledging at the same time that 'I guess it worked the same when you had that [the feedback] recorded'.
These valuable comments and learnings from the first-year students informed the development of phase two of the pilot study, in which some of them participated as second-year volunteers in semester one of 2017, as described in the next section.
Pilot study phase two with first-and second-year students
In 2017, there were 144 students enrolled in the first year of the French beginners' class and 72 students in the second year of French Studies in semester one. In the second-year cohort, 70 students had completed the 2016 phase one. Twelve second-year students (17%) volunteered to participate in the next phase of the study, which consisted in reviewing the additional oral tasks set by the teachers, providing formative feedback using the teachers' assessment criteria (Appendix 1) and ending the feedback with a model answer for their first-year peers. To have a representative and yet manageable sample, each of the twelve participants was randomly allocated four first-year students through the LMS. Thus, 48 first-year students (33%) and 12 second-year students (17%) participated in phase two. The first-year students had to complete two additional speaking tasks in week 5 (describing one's daily routine) and in week 8 (describing one's hometown) based on topics studied in the first semester. The first task was carried out individually whilst the second one was completed in pairs to encourage dialogue and discussion.
At the beginning of semester one in 2017, a face-to-face induction with the twelve participants from the second year on how to give constructive formative feedback online was organised by the academic staff conducting the study. Another component included in the induction of these students related to how they could support the development of assessment literacy in their peers. Best practice was modelled by the academic staff demonstrating how they use the marking criteria (Appendix 1) to provide constructive feedback. The academics emphasised that it is important to start with positive reinforcement then highlight two to three areas that needed improving upon before ending the review with a suggested model answer to the conversation topic. Participants were reminded to upload their oral formative feedback onto the LMS within a set timeframe, usually within a week of posting to allow their first-year peers enough time to improve their performance considering the feedback they had received.
Evaluation of phase two with first-and second-year students
At the end of semester one in 2017, both first- and second-year participants evaluated phase two. The twelve second-year volunteers completed a short survey as a focus group (Appendix 3) on the perceived effectiveness of providing formative feedback to first-year students. The forty-eight first-year participants also completed a written questionnaire (Appendix 4) reporting on what had worked well and not so well when receiving feedback from their second-year peers.
The next section analyses data relating to giving and receiving formative feedback.
Student Perceptions of giving feedback to first-year students
Data from the twelve second-year participants for question 1 reveal that giving feedback to first-year peers was generally viewed as a positive experience since 'everything worked well except for some technical issues when recording'. In response to question 4 on the amount of time spent to review and give feedback on the two speaking tasks, participants claimed they took 'less than an hour' over the semester, which was 'very doable'.
In terms of issues encountered, the main pedagogical issue with the first-year students was that they often focused on reading their prepared responses instead of interacting with their partner when speaking in pairs. The second issue was technical, relating to the uploading of audio files, the quality of audio recordings, and web browser incompatibilities that prevented some participants from posting their oral feedback to their first-year peers, which was the basis of the current model.
Useful suggestions and recommendations for improving peer-supported feedback included having face-to-face meetings between mentors and mentees at the beginning of the semester. The face-to-face meetings, it was felt, would enhance the social learning aspect of the interactions because they would 'create a mentoring bond' and 'make students less intimidated of the markers'.
The other suggestion endorsed by all twelve participants during the focus group was that extra oral practice which involved peer feedback should be formalised and count as a class participation mark. Perceived benefits from this activity included revisiting prior learning that 'would encourage us to practise our speaking more' and 'think about revising 'old grammar' that we may have forgotten'.
Student perceptions of receiving feedback from second-year students
The forty-eight first-year participants had mixed opinions on the pedagogical benefits of the additional tasks and contribution of formative feedback in improving their speaking skills. Results for question 1 in Appendix 4 on the effectiveness of the additional speaking tasks showed that 29% found the activities "quite effective" in improving their speaking skills and helping them prepare for their oral examination, with a small minority (10%) rating them as "very effective". Another 40% claimed they were "ineffective" at improving oral conversation skills.
Results for question 5 on the impact of the tasks on their speaking skills revealed that 26% of students perceived the feedback from the second-year students as having neither a particularly positive nor a particularly negative impact on their oral skills. Reasons given were diverse, ranging from not receiving quality feedback in time to prepare the next activity, to non-completion of online tasks due to workload pressures, especially as the tasks were not formally assessed (casual employment, family obligations, and intensive academic workloads for students enrolled in Science, Law and Engineering). Some felt that the time taken to complete the recordings because of technical issues could be reduced if there were more face-to-face interactions with mentors instead of online exchanges only.
The feedback from all students was included in the current results because they allowed us to test and evaluate weaknesses and deficiencies in our pilot study that were subsequently taken into account for improving the peer support and learning experience.
Perceptible effects of formative feedback on oral assessment results
While the research is still in its implementation and evaluation phase, preliminary results from phases one and two of the pilot study revealed a small trend in improvement in the 2017 first-year students' oral examination scores (post-study cohort) when compared to the 2015 scores (pre-study cohort). A comparison between both sets of marks showed a slight improvement in the high end of the scale (80-100% range) together with a slight dip in the lower end of the scale (40-70% range).
The frequency distribution of marks in both years is shown in figure 2 below.
It needs to be highlighted that the second-year cohort in 2017 had completed the pilot phase of the study in 2016 and that the second-year mentors had reviewed speaking tasks that they themselves had completed previously as first-year students. Further research is required, however, to determine how technology that offers more opportunities for students to practise speaking skills independently, coupled with peer formative feedback, might help improve the spoken proficiency of students as they progress through their course.
Figure 2
Comparison of first-year oral examination results in 2015 and 2017
Discussion
Based on these preliminary results, some pedagogical implications can be drawn. Our results corroborate previous findings in the literature that scaffolding tasks and using evaluation criteria to provide constructive feedback (Shephard, 2005: 66) are valuable strategies to fill in the 'cognitive gaps' (Spycher, 2017: 6) in students' zone of proximal development (Vygotsky, 1978). A key component in the peer feedback was to end the review of the oral task with a model answer.
Data also highlighted students' meta-learning in terms of what strategies were identified that can aid their own language learning (Biggs, 1985). For the first-year students, the importance of social interactions with second-year mentors at the start of the semester was viewed as conducive to rapport-building and motivation for learning. For the second-year students, providing feedback and modelling answers to their less advanced peers were considered as two approaches that helped consolidate and/or raise awareness of their own learning needs. For both groups however, there was a clear indication that the additional activities outside of the classroom should be integrated in the assessment system.
The results from the pilot phase however, also pointed to issues with the quality of peer feedback, particularly in the case of second-year students who were less proficient than some first-year students, which highlights the teaching and learning context in our institution. Guidelines have been prepared by teachers to explain the context of the formative peer feedback at our university and how it can support student learning. Teachers will provide modelling examples at the start of the semester to show students what is considered as constructive feedback by using the evaluation criteria for assessing oral performance to develop students' assessment literacy.
Conclusion
This research investigated the development of additional speaking practice supported by formative feedback to enhance oral proficiency in French by: − engaging students in additional online oral activities to give and receive formative feedback from peers; − gauging whether regular formative feedback allowed a better understanding of what was expected in the oral assessment, and − finding out the effects of additional practice combined with formative feedback on oral performance at the end of semester.
Preliminary findings indicate that both first and second-year students perceived benefits, albeit different, of additional speaking practice in terms of learning strategies and language learning. The majority of the students thought that the extra activities should also count in the assessment weighting.
The associated benefits for the teachers were that peer-supported feedback helped alleviate teachers' marking workload at no extra cost and shared this responsibility with more advanced students. With reduced teaching hours, it allowed teachers to provide opportunities for extra speaking practice outside of formal classes via online technologies to both first- and second-year students.
Although results showed benefits for both groups in terms of language learning, there were nonetheless technological and pedagogical issues highlighted by the students that were taken into consideration to formulate recommendations for the future as highlighted in the discussion above.
Although the results highlighted a small increase in students' examination grades in the post-study cohort at first-year level, further research is required to correlate the impact of additional online tasks in enhancing oral proficiency and performance in final oral assessments for second- and third-year students. As pointed out previously (Figure 1b), the study has now been rolled out to the whole undergraduate French program. Evaluations will be carried out to find out to what extent engaging with regular formative student feedback from more advanced learners can contribute towards successive learning and improving students' understanding of assessment literacy. | 2021-04-24T01:23:26.305Z | 2020-12-01T00:00:00.000 | {
"year": 2020,
"sha1": "a5ccb5eabb7add09bd36157af7e6a9a28c3173f3",
"oa_license": null,
"oa_url": "https://doi.org/10.53761/1.17.5.13",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "a5ccb5eabb7add09bd36157af7e6a9a28c3173f3",
"s2fieldsofstudy": [
"Education",
"Linguistics"
],
"extfieldsofstudy": []
} |
17031022 | pes2o/s2orc | v3-fos-license | PhysBinder: improving the prediction of transcription factor binding sites by flexible inclusion of biophysical properties
The most important mechanism in the regulation of transcription is the binding of a transcription factor (TF) to a DNA sequence called the TF binding site (TFBS). Most binding sites are short and degenerate, which makes predictions based on their primary sequence alone somewhat unreliable. We present a new web tool that implements a flexible and extensible algorithm for predicting TFBS. The algorithm makes use of both direct (the sequence) and several indirect readout features of protein–DNA complexes (biophysical properties such as bendability or the solvent-excluded surface of the DNA). This algorithm significantly outperforms state-of-the-art approaches for in silico identification of TFBS. Users can submit FASTA sequences for analysis in the PhysBinder integrative algorithm and choose from >60 different TF-binding models. The results of this analysis can be used to plan and steer wet-lab experiments. The PhysBinder web tool is freely available at http://bioit.dmbr.ugent.be/physbinder/index.php.
INTRODUCTION
Proteins called transcription factors (TFs) are crucial for proper regulation of gene expression. They function by binding to regions of DNA called transcription factor binding sites (TFBS). Two different mechanisms contribute to the TF-DNA binding specificity needed for correct regulation of gene expression: a direct readout component caused by direct contact between the amino acids of the protein and the bases of the DNA and an indirect readout component caused by the global shape of the DNA and by conformational changes in both interaction partners (1,2). Traditional methods for predicting TFBS tend to look at the direct readout component alone and almost exclusively at the primary sequence. However, many of these widely used methods, such as positional weight matrices, are afflicted by many false positive predictions, indicating the need for incorporating other discriminative features (3). Recent evidence shows that sequence-dependent structural variations in the DNA account for a significant portion of the protein-DNA specificity (4)(5)(6). Thus, it is expected to be beneficial to include structural features and nucleotide dependencies in the prediction models. In a recent publication, we examined the effect of incorporating nucleotide position dependencies, which are related to the 3D structure of the DNA (7), on the prediction of TFBS (8). We also calculated structural features of the DNA and verified to which extent these features improve the prediction of TFBS. We found that incorporation of both types of data can substantially enhance the prediction of TFBS. Here, we present PhysBinder, a web tool based on the flexible Random Forest algorithm published in (8). We compiled >60 vertebrate TF models from various sources, but many more models will be offered in the future, as new data become available. Binding sites for these models can be visualized together with the ENCODE TFBS data track of UCSC genome (9) to get a useful insight in the genomic context of the inspected region.
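The general idea of combining direct (sequence) and indirect (structural) readout features in a single Random Forest can be sketched as follows. This is only an illustrative toy in Python with scikit-learn, not the PhysBinder backend (which is built on Weka's FastRandomForest and curated structural property tables); the dinucleotide 'bendability' values and the training sequences below are invented placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

BASES = "ACGT"
# Placeholder dinucleotide "bendability" scale (NOT a published property table).
BENDABILITY = {a + b: 0.1 * i for i, (a, b) in enumerate((x, y) for x in BASES for y in BASES)}

def featurize(site):
    """One-hot encode each base (direct readout) and append a per-dinucleotide
    structural property (indirect readout)."""
    one_hot = [float(base == b) for base in site for b in BASES]
    structural = [BENDABILITY[site[i:i + 2]] for i in range(len(site) - 1)]
    return one_hot + structural

# Toy training data: positive sites versus background sequences of equal length.
positives = ["TATAAAGG", "TATAAAGC", "TATATAGG"]
negatives = ["GCGCGCGC", "AGTCAGTC", "CCATGGTA"]
X = np.array([featurize(s) for s in positives + negatives])
y = np.array([1] * len(positives) + [0] * len(negatives))

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
# Probability-like score for one candidate site.
print(clf.predict_proba([featurize("TATAAAGC")])[0, 1])
```

The design point illustrated here is simply that structural descriptors enter the model as extra numeric columns alongside the one-hot sequence encoding, so a tree ensemble can exploit both readout modes at once.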
Input
The PhysBinder web tool is easy to use: for most parameters, we offer default configurations to ensure a quick and easy workflow. Users just provide their sequences of interest and select the appropriate TF model information.
Sequences can be uploaded by one of the following means: (i) pasting a set of FASTA-formatted sequences in the input field; (ii) uploading a file with FASTA-formatted sequences; (iii) indicating genomic regions in the 'Fetch genomic regions' text field. Subsequently, a model and a threshold are to be selected. We provide three precalculated thresholds: 'Max. Precision', 'Max. F-Measure' and an average of these two measures. A custom threshold can also be selected.
More than 60 different TF models are now available on the PhysBinder website, but we expect to provide more models, as additional data become available. Most of the PhysBinder models are compiled from recent ENCODE data (10), but other sources were also used (see Materials and Methods for more information). TF models constructed from sequences that, according to the literature, clearly contain a sequence element associated with the TF are called 'direct evidence' models. When an alternative consensus sequence is found or when no consensus sequence is known for a particular TF, we call the models 'putative associated factors' (PAFs). Such a PAF might be a TF binding to multiple sequence elements, or it might be a common cofactor (hence 'putative associated factor'). By default, PhysBinder is configured to run in filter mode to speed up the calculations. In this mode, sequences are prefiltered with a short positional weight matrix with a low threshold, minimizing the number of false-negative hits and effectively guaranteeing maximum recall.
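As a rough sketch of the filter-mode idea described above, the following Python snippet parses FASTA input and scans it with a short position weight matrix at a deliberately permissive threshold, so candidate windows are only discarded when they score very poorly; everything above the cut-off would then be passed to the full model. The PWM values and the threshold are arbitrary illustrations, not an actual PhysBinder model.

```python
def read_fasta(text):
    """Very small FASTA parser: returns {name: sequence}."""
    records, name = {}, None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith(">"):
            name = line[1:]
            records[name] = ""
        elif name is not None:
            records[name] += line.upper()
    return records

# Arbitrary log-odds PWM for a 4-bp core motif (one dict per position).
PWM = [
    {"A": 1.2, "C": -1.0, "G": -1.0, "T": 0.1},
    {"A": -1.5, "C": 1.3, "G": -0.5, "T": -0.5},
    {"A": 1.0, "C": -1.0, "G": 0.2, "T": -1.0},
    {"A": -0.8, "C": -0.8, "G": 1.1, "T": -0.8},
]

def prefilter(seq, threshold=-1.0):
    """Return start positions whose PWM score clears a deliberately low threshold,
    so that true sites are very unlikely to be dropped before full scoring."""
    hits = []
    for start in range(len(seq) - len(PWM) + 1):
        score = sum(PWM[i].get(seq[start + i], -4.0) for i in range(len(PWM)))
        if score >= threshold:
            hits.append(start)
    return hits

fasta = ">promoter_fragment\nGGACAGTTACAGCCAT"
for name, seq in read_fasta(fasta).items():
    print(name, prefilter(seq))
```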
Output
A summary table is given at the top of the results web page. This table can be sorted by model type or by input sequence, and, for each model or sequence, the number of hits is indicated. On this page, users can still alter the thresholds to increase or decrease the stringency of the binding site predictions. In the results section, binding sites are shown as sequences with a colored background (exemplified in Figure 1a). Clicking on the first nucleotide of such a colored sequence opens a details window showing the sequence logo of the binding site (calculated on the model data), together with the Random Forest score and a P-value. The relative position of the TFBS is shown, and if the genomic location of the sequence is known (because the user indicated this on the input page or performed a BLAT analysis of the sequence against a human or mouse reference genome), then the absolute coordinates of the binding sites are shown in the details window. Two additional options become available when the absolute position is known. For human sequences (hg18 and hg19), it is possible to integrate the most recent ENCODE data to get an overview of the transcription factors and RNA polymerase components that might bind within this genomic region. Predicted binding sites can also be visualized in the UCSC genome browser (11) (exemplified in Figure 1b). Using the checkboxes below the sequences, or those on the right side of the screen, sequences and models can be dynamically shown or hidden to aid the interpretation of the results.
Example
As an example (see Figure 1), we examined the analysis performed by Kyo et al. (12) of the promoter of the human TERT gene, encoding the catalytic subunit of telomerase. These researchers identified a core promoter of 181 bp responsible for the transcriptional activity of the TERT gene. This 181-bp region, consisting of the 5'-UTR and the upstream promoter region, contains two E-boxes bound by MYC in vivo. Between these E-boxes, Kyo et al. discovered and validated five GC-boxes that are bound by SP1. For illustrative purposes, we used the PhysBinder tool to look for SP1, MYC and TBP binding sites with default threshold settings in the same sequence they used (12), and we were readily able to confirm their findings. We unmistakably found the five SP1 binding sites flanked by two MYC binding sites, as reported in the initial publication. No TATA-box was found, and this promoter was reported to lack such a box (13).
Web tool
The web tool is hosted on a Linux CentOS 5 server with 32 GB of RAM, an Apache 2.2.3 web server, and PHP version 5.1.6. Web pages are written in the PHP and Javascript scripting languages. To map input sequences to mouse (mm10) or human (hg19) reference genomes, we use the gfServer and gfClient binaries from UCSC, which make it possible to BLAT sequences (11). ENCODE tracks are obtained from UCSC Genome (9), and sequences can be fetched for 16 different species from the same source. Extensive help documentation is available on the PhysBinder website, including guidelines and tutorials to facilitate the interpretation of the PhysBinder results.
Backend and models
The backend of PhysBinder is programmed in a combination of Perl and R script. The Random Forest classifier used in the backend is the 'FastRandomForest' implementation, a multithreaded implementation of the Random Forest classifier in the Weka statistical package (14). In our models, we use a Random Forest with 100 trees. Most models are built from available ENCODE data of tier 1 cell lines, except for Esrrb (15), ETS1 (16), KLF4 (15), NANOG (15), Nmyc (15), STAT3 (15), TBP (17), Tfcp2l1 (15), TP53 (18) and Zfx (15). All sequences were first aligned using the multiple EM (expectation maximization) for motif elicitation (MEME) motif aligner (19) on the STEVIN supercomputing infrastructure of Ghent University. To ensure the quality of input data, the resulting aligned sequence motifs were then manually searched for in the literature. If a motif is not yet reported in the literature, the resulting model is called a PAF; otherwise, the model is termed a direct evidence model. When available, 100 sequences were used to build the model; the remaining sequences were used for validation. More information on the different steps of the algorithm and on its validation has been reported by us previously (8). Details about all models are available on the 'models' page, where an overview can be found of all the features contained in the models, together with performance measures that were calculated on external test sets. [Figure 1 caption: (a) Predicted binding sites are shown on the input sequence; the default threshold ('Average') was used for both models; gray shaded bars indicate overlapping ENCODE tracks (9); the checkboxes below the sequence indicate the different ENCODE tracks visualized in this sequence. (b) Both models were visualized in the UCSC Genome Browser (11); MYC binding sites are indicated in blue, whereas SP1 binding sites are in red.] | 2015-07-06T21:03:06.000Z | 2013-04-24T00:00:00.000 | {
"year": 2013,
"sha1": "a163f057625cffc6354f4760630945cb1db5ba65",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/nar/article-pdf/41/W1/W531/3813772/gkt288.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "aa9909e292d93d4da465dae67dc3d8da3f6add81",
"s2fieldsofstudy": [
"Biology",
"Computer Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine",
"Computer Science"
]
} |
2322463 | pes2o/s2orc | v3-fos-license | Glycol chitosan incorporated retinoic acid chlorochalcone (RACC) nanoparticles in the treatment of Osteosarcoma
Background Osteosarcoma is the most common of all the bone malignancies and accounts for 30-80 % of the primary skeletal sarcomas. The overall survival rate of patients with osteosarcoma is < 20 % suggesting poor prognosis. Methods The present study demonstrates the effect of retinoic acid chlorochalcone (RACC) incorporated glycol chitosan (GC) nanoparticle transfection in osteosarcoma cells. MG-63 and Saos-2 osteosarcoma cells were transfected with various concentrations of RACC-incorporated GC nanoparticle for 24 h. The effect on cell proliferation, Ezh2 expression, apoptosis, cell cycle arrest, cell migration and invasiveness, Akt phosphorylation and local tumour growth and metastases were studied. Results MG-63 and Saos-2 osteosarcoma cells on RACC-incorporated GC nanoparticle transfection for 24 h showed a concentration-dependent inhibition of cell proliferation. Of the various concentrations of RACC tested, the effective concentration started from 5 μM with an IC50 of 20 μM. Wound healing assay also showed that RACC-incorporated GC nanoparticles inhibited migration of tumor cells more effectively compared to the parent RA. RACC transfection resulted in inhibition of cell proliferation, Ezh2 expression inhibition, apoptosis through mitochondrial pathway by decrease in membrane potential and release of cytochrome c and cell cycle arrest in the G0/G1 phase. The invasiveness of cells treated with 5 and 20 μM RACC was decreased by 49 and 76 % respectively, compared to the control. RACC-treated mice showed significantly lower number of metastases compared to that in the control mice. Conclusions Thus, RACC-incorporated glycol chitosan nanoparticle strategy can be promising for the treatment of osteosarcoma.
Background
Osteosarcoma is the most common of all the bone malignancies and accounts for 30-80 % of the primary skeletal sarcomas [1,2]. It most frequently affects children, teenagers, and young adults between 10 and 30 years of age [3], and is more commonly observed in males than in females. The long cylindrical bones such as the femur, tibia, and humerus, including the knee joint, are the main targets in osteosarcoma [4]; however, the shoulder blade, pelvic, and skull bones are also sometimes affected [5]. Osteosarcoma is a well-defined clinical entity with a characteristic radiographic appearance, histologic features, a relatively consistent spectrum of clinical presentations, and established standard treatments. These features have been the subject of many prior book chapters and reviews [6][7][8][9][10][11]. However, all of the present treatments remain insufficiently effective. Therefore, the discovery of molecules with roles in osteosarcoma inhibition is highly desirable to improve clinical treatment.
The Polycomb group (PcG) of genes, which play a crucial epigenetic role in regulating gene transcription programs, possess a catalytic subunit, Enhancer of Zeste homolog 2 (Ezh2) [12]. It has been demonstrated that Ezh2 controls the expansion and differentiation of tumor-initiating cells and the development and progression of cancer [13][14][15]. In myelodysplastic syndromes, Ezh2 functions as a tumor suppressor [16,17]. It inhibits cell differentiation to maintain the stemness of tumor cells [18,19].
Retinoic acids (RAs) have been used in the prevention and treatment of dermatological diseases [20,21]. Recently, retinoic acid and other retinoids have been reported to possess promising anti-cancer activity [22]. It has been demonstrated that retinoic acids affect in vitro proliferation, differentiation, and apoptosis of colon [23], prostate [24], lung [25], and leukemia [26] cancers. Moreover, retinoic acids also influence the morphological differentiation, proliferation, and gene expression of neuroblastoma [27] and astrocytoma cells [28]. Recurrent malignant cerebral gliomas have been treated with ATRA [29,30] and 13-cis RA [31]. Despite its in vitro biological promise, its poor bioavailability in vivo restricts its clinical applications [32]. One of the techniques to overcome this drawback is the development of polymeric micelles [33], such as glycol chitosan micelles. Taking a cue from the above literature, we designed an experiment to study the effect of RACC (Fig. 1), which has greater bioavailability than the parent compound, on human osteosarcoma.
RACC-incorporated GC nanoparticles cause proliferation inhibition in human osteosarcoma cells
The results from the MTT assay revealed a dose-dependent inhibition of MG-63 and Saos-2 cell proliferation on RACC treatment after 24 h. Among the range of concentrations from 1 to 20 μM tested, the inhibition was significant at 5 μM, with a reduction in O.D. values of 16 ± 0.6 and 13 ± 0.8 % for the MG-63 and Saos-2 cell lines respectively. The reduction in O.D. values at 10, 15 and 20 μM was 23 ± 2, 63 ± 3.5 and 90 ± 10 % for MG-63 and 36 ± 3.2, 64 ± 3.43 and 89 ± 10.34 % for Saos-2 cells respectively. The IC50 value of RACC was 18.2 ± 2.8 μM for both tested cell lines.
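For readers wishing to reproduce this kind of IC50 estimate, a Hill-type dose-response curve can be fitted with SciPy as sketched below. The viability points are loosely derived from the MG-63 inhibition percentages quoted above and serve only to demonstrate the calculation, not to re-derive the published value.

```python
import numpy as np
from scipy.optimize import curve_fit

dose = np.array([1.0, 5.0, 10.0, 15.0, 20.0])          # RACC concentration, uM
viability = np.array([98.0, 84.0, 77.0, 37.0, 10.0])   # % of untreated control (illustrative)

def hill(c, ic50, slope):
    """Viability (%) falling from 100% to 0% with increasing concentration."""
    return 100.0 / (1.0 + (c / ic50) ** slope)

(ic50, slope), _ = curve_fit(hill, dose, viability, p0=[10.0, 1.0])
print(f"estimated IC50 ~ {ic50:.1f} uM (Hill slope {slope:.2f})")
```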
The daily MTT assay using 20 μM RACC for 4 days showed that growth inhibition for both the cell lines was maximum at day 4 (Fig. 2a,c). The trypan blue exclusion assay showed drop in cell number in a time-dependent manner (Fig. 2b,d).
RACC-incorporated GC nanoparticle transfection inhibits Ezh2 expression in human osteosarcoma cells
We used Western blot and RT-PCR analysis to examine the changes in Ezh2 and protein expression levels in MG-63 and Saos-2 cells on RACC-incorporated GC nanoparticle treatment. The results showed a significant decrease in Ezh2 expression level after 24 h of RACCincorporated GC nanoparticles (20 μM) transfection compared to control. The Ezh2 inhibition by RACC lasted for at least 72 h after the RACC-incorporated GC nanoparticle transfection (Fig. 3). These results suggest that after the transfection of the RACC at 20 μM for 24 h, the Ezh2 and protein expression levels are effectively inhibited.
RACC-incorporated GC nanoparticles induce apoptosis in MG-63 and Saos-2 human osteosarcoma cells
We used flow-cytometric and ssDNA detection assays to examine apoptotic cell death in osteosarcoma cells. In MG-63 cells, treatment with 5 and 20 μM RACC induced apoptosis in 5.89 ± 3.9 and 60.54 ± 5.4 % of cells respectively, compared to 2.05 ± 1.01 % of cells in the control (Fig. 4). Similar results were observed in Saos-2 cells, wherein exposure to 5 and 20 μM RACC induced
RACC treatment induces apoptosis in the MG-63 and Saos-2 human osteosarcoma cells through the mitochondrial pathway
We used JC-1 staining to detect the changes in mitochondrial membrane potential in MG-63 and Saos-2 cell lines. The results clearly showed that increase in concentration of RACC in RACC-incorporated GC nanoparticle from 10 μM to 25 μM significantly reduced the mitochondrial membrane potential in MG-63 cells (Fig. 5a). Western blot analysis revealed translocation of Bax and Bcl-2 proteins from mitochondria to cell cytosol (Fig. 5b). Similar results were obtained in Saos-2 human osteosarcoma cell lines.
RACC-incorporated GC nanoparticle transfection causes a cell cycle arrest in the G0/G1 phase in MG-63 and Saos-2 human osteosarcoma cells
The results from flow cytometry showed a significant increase in G0/G1 cell population in both MG-63 and Saos-2 cells with subsequent decrease in S and G2/M phase on treatment with RACC (5 μM) (Fig. 6). The increase in concentration of RACC from 5 μM to 20 μM led to further increase in the percentage of cells in G0/ G1 phase and subsequent decrease in cell percentage from S and G2/M phase (Fig. 6). These results confirm that RACC transfection arrests cell cycle in G0/G1 phase in human osteosarcoma cell lines.
RACC-incorporated GC nanoparticle transfection inhibits cell migration and invasiveness
Treatment with RACC (20 μM) for 24 h significantly decreased the migratory activity of MG-63 and Saos-2 cells by 52 and 58 % respectively compared to control cells (Fig. 7a). The migratory activity of the cells treated with 5 μM RACC was decreased by 10 and 40 % respectively in MG-63 and Saos-2 cells. In invasion assay, the capacity of the RACC-treated MG-63 cells to pass though the Matrigel-coated filters was significantly lower compared to control cells (Fig. 7b). The invasiveness of cells treated with 5 and 20 μM RACC was decreased by 49 % and 76 % (P < 0.001), respectively, compared to the control. Knockdown of Has1 and/or Has3 with siRNA revealed that the single knockdown of Has1 or Has3 did not compensate for the effects of RACC on cell motility
RACC-incorporated GC nanoparticle transfection inhibits Akt phosphorylation
We also examined the effect of RACC treatment on Akt phosphorylation in MG-63 and Saos-2 cells using western blot analysis. The results revealed a significant decrease in Akt phosphorylation after 5 and 10 h of RACC treatment compared with that in the control cells. However, no difference was observed at 1 or 2 h (Fig. 7d).
RACC-incorporated GC nanoparticle transfection exhibits inhibitory effects on local tumour growth and metastases
Administration of 20 μM RACC exhibited an inhibitory effect on MG-63 tumour growth, based on the reduction in tumour wet weight (67 % reduction, Fig. 8a). We used HABP staining to analyse the HA retention in the local tumour inhibited by RACC treatment. The results revealed a significantly lower HA retention in RACCtreated local tumours compared to that in the control tumours ( Fig. 8c-e). RACC treatment resulted in a significant (84 %) reduction in the number of metastatic lesions which was visually analysed (Fig. 8b). RACC-treated mice showed significantly lower number of metastases compared to that in the control mice.
Discussion
In the present study, RACC-incorporated GC nanoparticles formed by electrostatic interaction between the -COOH group of RACC and the -NH2 group of glycol chitosan were prepared. The presence of the reactive -NH2 group makes chitosan a suitable substrate for drug conjugation and ion complex formation with anionic drugs [14,[34][35][36]. Thünemann and Beyermann initially developed the concept of nanoparticle formation between retinoic acid and positively charged macromolecules [29,30]. Since then, nanoparticle-targeted treatment of cancer has been studied extensively [31][32][33].
Taking into consideration the poor bioavailability of RACC, we transfected RACC-incorporated GC nanoparticles into osteosarcoma cells. Our results from flow cytometry demonstrate that osteosarcoma cells undergo apoptosis through the mitochondrial pathway. The Bax and Bcl-2 proteins were seen to translocate from the mitochondria into the cytoplasm, where they led to the release of cytochrome c. Cytochrome c then activates caspase 9 and caspase 3, which play key roles in the apoptosis pathway [37]. The increase in the concentration of RACC in the RACC-incorporated GC nanoparticles from 5 μM to 20 μM significantly reduced the mitochondrial membrane potential in MG-63 cells. Therefore, these results suggest that the RACC inhibition of Ezh2 expression induces apoptosis through the mitochondrial pathway in human osteosarcoma cells. Our results from flow cytometry also suggest that RACC induces cell cycle arrest in the G0/G1 phase. Treatment of MG-63 and Saos-2 cells with a 10 μM concentration of RACC led to an increase in the percentage of cells in the G0/G1 phase with a subsequent decrease in the S and G2/M phases. The increase in the concentration of RACC from 5 μM to 20 μM significantly increased the percentage of cells in the G0/G1 phase.
The results from our study revealed that RACC exerted a multistep inhibitory effect on the tumourigenicity of osteosarcoma cells through inhibition of HA synthesis. As HA is the major component of the ECM, the reduction of Has subsequently causes the suppression of ECM production, particularly that of the cell-associated matrix. It is reported that the cell-associated matrix is linked to tumourigenicity [38][39][40]. Our results demonstrate that the inhibition of cell-associated matrix formation through suppression of HA synthesis by RACC effectively suppressed tumourigenicity. Thus, the anti-tumour activity of RACC may be partly through the depletion of cell-associated matrix formation. Recent studies have shown that the PI3K/Akt signalling pathway is significantly involved in HA-induced cell motility and invasiveness. We also demonstrated that RACC induced downregulation of Akt phosphorylation in osteosarcoma cells. Considering the delayed inhibition of Akt phosphorylation (after 6 h) by RACC in this study, RACC may indirectly affect Akt phosphorylation, possibly via suppression of HA synthesis, perturbation of HA-receptor interaction, or alteration of cell signalling pathways including Akt phosphorylation.
The degree of the inhibitory effects of RACC on the formation of osteosarcoma metastasis in vivo was markedly higher than that on the growth of the implanted primary tumour. In contrast to the growth of the primary tumour, multistep processes are associated with distant metastasis. In this study, RACC suppressed proliferation, motility, and invasion of osteosarcoma cells in vitro. Inhibition of these steps by RACC led to substantial suppression of tumour metastasis. Another explanation is that RACC affects the microenvironment of the primary and target organs. The tumour stroma and surrounding normal cells (immune cells, inflammatory cells, pericytes, vascular endothelial cells, and fibroblasts) can be affected by RACC, possibly via suppression of HA synthesis. Notably, in the current study, HA deposits were markedly suppressed not only in the periphery of the tumour, but also in the surrounding stromal tissues and perivascular region in vivo. In the clinical context, the strong suppressive effects of RACC on lung metastasis might be especially beneficial for patients with osteosarcoma, considering that the primary cause of death in this group is metastasis [41].
Conclusions
The present study demonstrates that RACC inhibits various processes of tumourigenicity in vitro in murine and human osteosarcoma cell lines and markedly suppresses osteosarcoma metastasis. Thus, RACC-incorporated GC nanoparticles may be a promising strategy for the treatment of osteosarcoma.
Cell culture
Human osteosarcoma cell lines, MG-63 and Saos-2 were purchased from the Health Science Research Resources Bank (Osaka, Japan). The cells were maintained in RPMI 1640 medium (RPMI:ECM = 4:1) supplemented with 10 % fetal bovine serum at 37°C in 5 % CO 2 in a humidified atmosphere.
Ethical statement
The present study was approved by the Institutional Review Board and Ethics Committee of the Nanjing University, Jiangsu, China.
Preparation of RACC-incorporated GC nanoparticles
The RACC-incorporated GC nanoparticles were prepared by adding a solution containing 5 mg RACC in 1 mL of DMF to an aqueous solution containing 40 mg of GC in 10 mL of deionized water while stirring. The stirring was continued for 20 min under darkened conditions. The solution was then dialyzed against deionized water for 1 day using a dialysis membrane (MWCO = 12,000 g/mol; Sigma Chem. Co. Ltd., St. Louis, MO, USA). The dialyzed solution was made up to 20 mL with deionized water, and 100 μL of this was diluted with 9.9 mL of DMSO. A UV spectrophotometer (UV-1200, Shimadzu Co. Ltd., Kyoto, Japan) was used to measure the drug content at 365 nm, with empty GC vehicles used as a blank.
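A hypothetical illustration of how the drug content could be back-calculated from the absorbance reading at 365 nm is given below: a linear calibration curve of free RACC is fitted and used to interpolate the diluted sample, accounting for the 100-fold dilution and the 20 mL final volume. All calibration values and the sample absorbance are invented for the sake of the example.

```python
import numpy as np

# Hypothetical calibration standards of free RACC in DMSO (ug/mL vs. A365).
std_conc = np.array([0.0, 5.0, 10.0, 20.0, 40.0])
std_abs = np.array([0.01, 0.12, 0.24, 0.47, 0.93])
slope, intercept = np.polyfit(std_conc, std_abs, 1)

sample_abs = 0.05            # diluted nanoparticle sample, read against an empty-GC blank
dilution_factor = 100        # 100 uL made up to 10 mL with DMSO
conc_in_stock = (sample_abs - intercept) / slope * dilution_factor   # ug/mL in the 20 mL stock
total_drug_mg = conc_in_stock * 20.0 / 1000.0                        # ug -> mg over 20 mL

drug_loading = total_drug_mg / (total_drug_mg + 40.0) * 100.0        # wt% relative to 40 mg GC
encapsulation = total_drug_mg / 5.0 * 100.0                          # % of the 5 mg RACC feed
print(f"drug content ~ {total_drug_mg:.2f} mg, loading ~ {drug_loading:.1f} wt%, "
      f"encapsulation ~ {encapsulation:.0f}%")
```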
Proliferation inhibition assay (MTT assay)
In each well of a 96-well plate, aliquots containing 2.5 × 10 5 cells were seeded. The cells were incubated overnight in a 5 % CO 2 incubator at 37°C, and then RACC-incorporated GC nanoparticle solution was added to each well. After dilution with RPMI 1640 (10 % FBS), the nanoparticle solutions were used to treat the tumor cells. RPMI 1640 (10 % FBS) with 0.1 % (v/v) DMSO was used as the control. Incubation for 48 h was followed by the addition of 25 μL of MTT (3 mg/mL in PBS) to each well, and incubation was continued for 4 h more. To each well, 100 μL of SDS-HCl solution (SDS 10 % w/v, 0.01 M HCl) was added and incubated for a further 12 h. An Infinite M200 pro reader (Tecan Austria GmbH, Salzburg, Austria) was used to measure the absorbance at 570 nm. Viable cells were expressed as a percentage of the control, and all experiments were conducted in triplicate.
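The conversion of raw MTT absorbance readings into 'percentage of control' used above is simple arithmetic; the short sketch below shows one way to do it with invented triplicate A570 values.

```python
import statistics

blank = 0.05                                               # medium + MTT, no cells (invented)
control = [1.21, 1.18, 1.25]                               # 0.1% (v/v) DMSO vehicle wells
treated = {5: [1.05, 1.00, 1.02], 20: [0.18, 0.21, 0.16]}  # uM RACC, A570 triplicates

control_mean = statistics.mean(control) - blank
for dose, wells in treated.items():
    mean = statistics.mean(wells) - blank
    sd = statistics.stdev(wells)
    viability = 100.0 * mean / control_mean
    print(f"{dose} uM RACC: {viability:.0f} +/- {100.0 * sd / control_mean:.0f} % of control")
```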
Western blotting
The transfected osteosarcoma cells were washed twice in PBS, followed by the addition of lysis buffer (50 mM Tris-HCl pH 7.4, 137 mM NaCl, 10 % glycerol, 100 mM sodium vanadate, 1 mM PMSF, 10 mg/ml aprotinin, 10 mg/ml leupeptin, 1 % NP-40, and 5 mM cocktail). The bicinchoninic acid (BCA) assay was used to determine protein concentration. Equal amounts of protein were loaded and resolved by electrophoresis on a 10 % polyacrylamide gel. The semi-dry method was used to transfer the proteins onto a PVDF membrane, which was then blocked with 5 % non-fat dry milk overnight. After TBST washing, the membrane was incubated for 2 h with primary antibodies and then washed again with TBST before incubation with secondary antibodies for 2 h. X-ray autoradiography was then performed and the grayscale images were analysed.
Flow cytometric analysis
Apoptosis and necrosis in osteosarcoma cells were identified with FITC-annexin V and propidium iodide (PI) reagents, respectively. Treatment of cells with various concentrations of RACC-incorporated GC nanoparticles for 24 h was followed by washing with PBS. After suspension in binding buffer (10 mM HEPES pH 7.4, 150 mM NaCl, 5 mM KCl, 1 mM MgCl2, and 1.8 mM CaCl2) containing FITC-annexin V (1 μg/mL), the pellets were incubated for 20 minutes. PI (10 μg/mL) was then added under dark conditions to stain necrotic cells, and incubation was continued for 10 minutes more. A FACScan flow cytometer (Becton Dickinson Biosciences, San Jose, CA, USA) was used to analyse the cells immediately.
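Conceptually, annexin V/PI data are interpreted by quadrant gating; the sketch below illustrates that classification logic with arbitrary gates and event intensities, purely as an illustration rather than the instrument settings used in this study.

```python
def classify(annexin_fitc, pi, annexin_gate=1e3, pi_gate=1e3):
    """Quadrant logic: annexin V+/PI- = early apoptotic, annexin V+/PI+ = late
    apoptotic, annexin V-/PI+ = necrotic, double negative = viable."""
    if annexin_fitc >= annexin_gate and pi < pi_gate:
        return "early apoptotic"
    if annexin_fitc >= annexin_gate and pi >= pi_gate:
        return "late apoptotic"
    if pi >= pi_gate:
        return "necrotic"
    return "viable"

# Arbitrary fluorescence intensities for a handful of events.
events = [(200, 150), (5e3, 300), (4e3, 8e3), (120, 6e3), (90, 80)]
counts = {}
for fitc, pi in events:
    label = classify(fitc, pi)
    counts[label] = counts.get(label, 0) + 1
print({label: f"{100 * n / len(events):.0f}%" for label, n in counts.items()})
```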
Detection of Single-Strand DNA (ssDNA)
In a 96-well plate, 10,000 cells/well were seeded and incubated with the RACC-incorporated GC nanoparticles. The cells were then fixed with 80 % methanol for 30 minutes. The plates were dried and incubated with formaldehyde for 10 min at room temperature, followed by 10 min at 75°C and then 5 min at 4°C. Cells were incubated with 3 % non-fat dry milk for 1 h, followed by incubation with the antibody mixture (containing a primary monoclonal antibody to ssDNA and a horseradish peroxidase-labeled secondary antibody) for 30 min. The addition of 2,2'-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) solution permitted reading of the plates at 405 nm in a standard microtiter reader. ssDNA was used as a positive control, and necrotic cells obtained by hyperthermia were used as a negative control.
Immunocytochemistry for Has
MG-63 cells (2.5 × 10 6) were seeded onto chamber slides (BD Biosciences, Mountain View, CA, USA) and allowed to adhere to the bottom. The cells were then incubated with various concentrations of RACC for 24 h and subjected to Has1 and Has3 immunocytochemistry. The antibodies against Has1 and Has3 were raised in rabbits by subcutaneous injection of the synthetic peptides.
Motility and matrigel invasion assays
Transwell motility chambers were used to analyse cell migration and invasion. For this, 8-μm pore diameter transwell chambers (Corning) were coated with Matrigel (BD Biosciences) on their undersurfaces. Into the upper chamber, 2 × 10 6 cells were plated in serum-free culture medium, and the lower chamber was filled with medium containing 10 % FBS. The plates were incubated for 24 hours at 37°C. After incubation, the upper surface of the compartment was cleaned. After methanol fixation, the inserts were stained with crystal violet solution (0.5 %) and examined microscopically. Five fields were randomly selected and the cells were counted. Experiments were performed in triplicate.
Effects of RACC in vivo
The dorsal flank of 5-week-old male C3H/He mice was inoculated with MG-63 cells (2.5 × 10 6 ) suspended in 200 μl of serum-free DMEM. After 14 days of in vivo growth, small tumours (0.6-1.2 cm in diameter) were observed. The mice were then randomly assigned to two groups of 10 each. The mice in the RACC group received 15 mg RACC in 100 μl of 0.4 % CMC solution intraperitoneally daily, whereas the mice in the control group were given the same amount of 0.4 % CMC solution. Twenty days after the treatment, the mice were sacrificed, and their tumours were excised and analysed for tumour wet weight and number of metastatic colonies. All animal experiments were performed in accordance with the National Cancer Research Institute (2010) Guidelines for the welfare and use of animals in cancer research and under the approval of the institutional animal ethics committee.
HA staining for cells and tissues
Hyaluronic acid binding protein (HABP; Seikagaku, Tokyo, Japan) was used to examine the accumulation of hyaluronan in cells and in vivo tissues with or without RACC. MG-63 cells were distributed onto chamber slides (BD Biosciences) and allowed to adhere to the bottom. The cells were then incubated with various concentrations of RACC, with or without 200 mg ml −1 of exogenous HA, for 72 h. For HABP staining, the cells and local tumours were incubated with a 2.0 mg ml −1 biotinylated HABP probe for 1 h at room temperature. Streptavidin-peroxidase reagents (Nichirei, Tokyo, Japan) and diaminobenzidine-containing substrate solution (Nichirei) were used to analyse b-HABP binding.
HA quantification
MG-63 cells were incubated with or without 20 μM RACC for 6, 12, and 24 h. The cells were treated for 10 min at 37 °C with trypsin-EDTA, followed by a PBS wash, to remove cell-surface-associated HA. The cells were then placed in proteinase K solution (0.15 M Tris-HCl, pH 7.5, 0.15 M NaCl, 10 mM CaCl₂, and 5 mM deferoxamine mesylate containing 20 units of proteinase K) and incubated for 2 h at 55 °C. To inactivate the protease activity, samples were heated at 100 °C for 20 min and centrifuged at 12,000 × g for 45 min at 4 °C. The supernatants were analysed for HA concentrations using a sandwich enzyme-linked immunosorbent assay.
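The final readout of such a sandwich ELISA reduces to interpolating sample absorbances on a standard curve. The sketch below illustrates that calculation with a four-parameter logistic fit; the concentrations, absorbance values, and model choice are illustrative assumptions, not data or methods from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """Four-parameter logistic model commonly used for ELISA standard curves."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# Hypothetical standards: HA concentration (ng/ml) vs. measured absorbance.
std_conc = np.array([12.5, 25.0, 50.0, 100.0, 200.0, 400.0])
std_abs = np.array([0.12, 0.21, 0.38, 0.66, 1.05, 1.48])

popt, _ = curve_fit(four_pl, std_conc, std_abs,
                    p0=[0.05, 1.0, 100.0, 2.0], maxfev=10000)

def conc_from_abs(y, a, b, c, d):
    """Invert the 4PL curve: recover concentration from an absorbance reading."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

# Hypothetical supernatant reading from a treated sample.
print(f"HA concentration: {conc_from_abs(0.42, *popt):.1f} ng/ml")
```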
Statistical analysis
The in vitro quantitative experiments were performed in triplicate, and analysis of variance followed by the Bonferroni-Dunn post hoc test was used to assess differences between means. Student's t-test was used for statistical comparisons between the two groups. | 2016-05-04T20:20:58.661Z | 2015-07-14T00:00:00.000 | {
"year": 2015,
"sha1": "1c70dd85e2e0420bfc04cd7dd0115c837ac2adbc",
"oa_license": "CCBY",
"oa_url": "https://lipidworld.biomedcentral.com/track/pdf/10.1186/s12944-015-0068-4",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1c70dd85e2e0420bfc04cd7dd0115c837ac2adbc",
"s2fieldsofstudy": [
"Biology",
"Medicine",
"Chemistry"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
236869975 | pes2o/s2orc | v3-fos-license | Hardening ion plasma coatings (Ti, Alx)N (x = 3 at. %) for carbide cutting tools
Coatings (Ti,Al0.03)N and comparison samples – coatings (Ti,Al0.25)N and TiN – were formed by the arc-PVD method on a WC-Co carbide alloy. The phase composition, substructure characteristics, and mechanical properties of the coatings were investigated. The addition of Al, and its increase in the coating composition, is accompanied by a decrease in the lattice parameter, refinement of the coatings' subgrain structure, and an increase in both microstrains and macrostresses. At the same time, the hardness of the coatings and the values of the parameters H³/E² and H/E increase, with a decrease in the relative work of plastic deformation of the samples, which is a characteristic of their viscosity. Durability tests carried out using cutting tools with the investigated coatings showed the advantage of the (Ti,Al0.25)N coating in continuous turning operations. The (Ti,Al0.03)N coating is characterized by increased resistance compared to the other tested coatings in milling operations, and compared to the TiN coating in continuous cutting.
Introduction
One of the most popular hardening coatings for cutting tools at present is the coating based on the complex nitride Ti-Al-N. The reason for this is its high functional characteristics: hardness up to 38 GPa, heat resistance, resistance to high-temperature oxidation up to 800 °C, and thermal stability of the composition and structure [1][2][3]. The hardness of these coatings increases with an increase in the aluminum concentration up to 70 at. %. At the same time, the operational capabilities of Ti-Al-N coatings in intermittent cutting operations are limited due to a high tendency to brittle fracture [4][5][6]. An increase in the viscosity of such coatings, with a slight decrease in hardness due to a lower aluminum content, could expand the area of use of the cutting tool in both continuous and interrupted cutting operations (milling, planing, etc.). To study this possibility, the properties of coatings with the composition (Ti,Alx)N (x = 3 at. %) and comparison samples (coatings of TiN and of Ti-Al-N with an aluminum content of ~25 at. %) obtained by ion-plasma vacuum-arc deposition were studied. The resistance properties of a carbide cutting tool with these coatings in continuous and interrupted cutting operations have been evaluated.
Experimental details
The deposition of coatings was carried out by filtered cathodic vacuum arc deposition. A three-cathode evaporation system with droplet phase separators was used in the work. To obtain (Ti0.75,Al0.25)N coatings, powder Ti-Al cathodes with an Al content of 50 at. % were used. (Ti0.97,Al0.03)N coatings were obtained using cathodes made of a Ti-Al alloy with 6 at. % Al, and cathodes made of a titanium alloy with 99.5 at. % Ti were used to obtain TiN coatings. The nitrogen partial pressure (PN2) was maintained at 0.5 Pa. The deposition time was 60 minutes. The thickness of the coatings was 4-4.5 microns. WC-Co carbide plates were used as substrates. The negative bias voltage (Us) on the substrate was -120 V. A constant electric arc current of ~120 A was used for all the cathodes. The phase composition and substructure characteristics of the coatings were investigated by X-ray analysis. The structure of the coatings was investigated using a transmission electron microscope (TEM). The physical and mechanical properties of the coatings were determined using a microhardness tester. Macrostresses were determined by the sin²ψ method. The durability of the coated cutting tool during the processing of the EI 698-VD alloy (a nickel-based heat-resistant alloy: C ~0.05 mass %, Cr ~14 mass %, Ti ~3 mass %, Al ~2 mass %, Mo ~3 mass %, Nb ~2 mass %) was estimated by the time during which the tool participates in the cutting process until a predetermined wear is reached. The criterion for turning was angular wear along the flank surface VB_E = 0.4 mm, and for face milling, the limiting wear of the flank surface of the cutting insert was h3 = 0.5 mm. Turning was carried out in the following modes: dry cutting; cutting speed 250 m/min; feed 0.2 mm/rev; cutting depth 1 mm. The selected milling modes were: depth of cut 1.0 mm; feed per tooth 0.125 mm/tooth; cutting speed 25 m/min; spindle speed 50 rpm. A single-tooth cutter was used.
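The sin²ψ determination of macrostresses mentioned above amounts to a linear regression of lattice spacing against sin²ψ. A minimal sketch of that calculation is given below; the spacing data and elastic constants are illustrative assumptions, not measured values from this work.

```python
import numpy as np

# Hypothetical lattice spacings d (angstrom) of a coating reflection measured
# at several tilt angles psi; the values below are illustrative only.
psi_deg = np.array([0.0, 18.4, 26.6, 33.2, 39.2, 45.0])
d_psi = np.array([1.0440, 1.0432, 1.0424, 1.0416, 1.0408, 1.0400])

E = 450e9   # assumed Young's modulus of the coating, Pa
nu = 0.25   # assumed Poisson's ratio

# The sin^2(psi) method: d varies linearly with sin^2(psi); the slope of the
# fit, normalised by the unstressed spacing, yields the in-plane stress.
x = np.sin(np.radians(psi_deg)) ** 2
slope, intercept = np.polyfit(x, d_psi, 1)
d0 = intercept  # spacing at psi = 0 as an approximation of the unstressed value

sigma = (slope / d0) * E / (1.0 + nu)
print(f"macrostress: {sigma / 1e9:.2f} GPa (negative = compressive)")
```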
Results and discussion
The morphology of the investigated coatings is characterized by a cellular surface structure with a roughness (Ra) of ~0.1 μm and a relatively small amount of droplet phase. The grain structure of the coatings is columnar (figure 1). The composition of the coatings is shown in table 1. X-ray phase analysis (figure 2) showed the presence of the TiN (B1) phase in the coatings. With the addition of aluminum and its increase, the diffractograms show a shift of the fcc-TiN diffraction lines towards larger angles. Taking into account previously published datasets [7], it can be concluded that during the deposition of Ti-Al-N system coatings, a metastable fcc-TixAl1-xN phase is formed with lattice parameters close to those of titanium nitride. Table 2 shows the substructure characteristics of the coatings and the values of thermal and concentration macrostresses.

Coating | Lattice parameter (Å) | Subgrain size (nm) | Microstrain (%) | Macrostress (GPa)
TiN | 4.2478 ± 0.0013 | 48.1 ± 4.3 | 0.15 ± 0.02 | -2.2
(Ti0.97,Al0.03)N | 4.2386 ± 0.0003 | 16.0 ± 2.0 | 0.60 ± 0.10 | -3.7
(Ti0.75,Al0.25)N | 4.1646 ± 0.0005 | 14.0 ± 2.0 | 0.70 ± 0.10 | -8.9

The presented results indicate that with the addition and increase of Al in the coatings' composition [8], the lattice parameter of the coating phase decreases, the coating subgrain structure is refined, and microstrains grow. The coatings are characterized by compressive macrostresses, whose value increases with the addition of aluminum into their composition and an increase in its content. At the same time, there is an increase in the hardness (H), elastic modulus (E), and coating material parameters H³/E² and H/E (table 3). In this case, there is a decrease in the relative work of plastic deformation of the samples (Wp), which is a characteristic of their viscosity [9]. This property, along with hardness, can play an important role in determining the effectiveness of hardening coatings operating under impact loads. For the (Ti0.97,Al0.03)N coating, the operating time to critical wear during milling increases by 10 % compared to the (Ti0.75,Al0.25)N coating.
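The parameters H³/E² and H/E referenced above are simple ratios of indentation hardness and elastic modulus. A small sketch of their computation, using illustrative hardness/modulus pairs rather than the measured values of table 3:

```python
# Illustrative hardness/modulus pairs in GPa (not the measured data of
# table 3), showing how the plasticity indices are computed.
coatings = {
    "TiN": (28.0, 450.0),
    "(Ti0.97,Al0.03)N": (31.0, 460.0),
    "(Ti0.75,Al0.25)N": (35.0, 480.0),
}

for name, (H, E) in coatings.items():
    # H/E tracks elastic strain to failure; H^3/E^2 tracks resistance to
    # plastic deformation. Both grow with Al content in the coatings studied.
    print(f"{name:18s} H/E = {H / E:.3f}  H^3/E^2 = {H ** 3 / E ** 2:.3f} GPa")
```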
Figure 3. Comparative test histograms for the resistance of WC-Co carbide tool inserts with coatings during turning (a) and milling (b) of the EI 698-VD alloy.
Conclusions
The obtained results give grounds to recommend the (Ti,Al0.03)N coating as a hardening coating for cutting tools operating under both constant and impact loads. | 2021-08-04T00:04:18.373Z | 2020-12-01T00:00:00.000 | {
"year": 2020,
"sha1": "8bf7ea7a33f3cef27e75100615f2e45d478fe0d9",
"oa_license": "CCBY",
"oa_url": "https://iopscience.iop.org/article/10.1088/1742-6596/1713/1/012011/pdf",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "eebee78ef13ffd9559c1501cc26954a340b10248",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
262200824 | pes2o/s2orc | v3-fos-license | Effect of Train Vibrations on the Dynamic Response of a Multi-Span Double-Curved Brick Arch Thin-Shell Factory of Changleyuan
The dynamic characteristics of a multi-span double-curved brick arch thin-shell factory of Changleyuan in Baoji City and the dynamic response to train vibration load were studied using field dynamic tests and finite-element numerical simulations, and a vibration evaluation of the thin-shell factory was carried out. The results showed that the first-order frequency of the thin-shell factory was 6.24 Hz in the horizontal direction (east–west) and 9.31 Hz in the vertical direction. Moreover, it was established that the horizontal vibration is the overall vibration of the factory, while the vertical vibration is the individual vibration of the double-curved brick arch. In addition, the self-oscillation frequency obtained from the numerical simulation results was greater compared with the field measurements, with a maximum error rate of 7.14%. Both in acceleration and velocity, the vertical vibration for each measurement point was larger than the horizontal vibration, and the farther away from the railroad, the smaller the vibration. The vibration of the velocity at the bottom of the arch was almost the same as that at the top of the arch, while the acceleration vibration at the bottom of the arch was significantly larger than that at the top of the arch, with an average amplitude of 40.64%. For every 20 km/h increase in train running speed, the average increase in vertical acceleration amplitude, vertical velocity amplitude, horizontal acceleration amplitude, and horizontal velocity amplitude for each measurement point of the thin-shell factory was 35.4%, 29.8%, 23.7%, and 12.5%, respectively. When v = 150 km/h, the maximum velocity amplitude for each measurement point of the thin-shell factory was 1.163 mm/s, which is less than the security specification limit of 2.5 mm/s, such that the security of the thin-shell factory meets the requirement, and the maximum horizontal velocity amplitude was 0.272 mm/s, which is close to the integrity specification limit of 0.27 mm/s, such that the integrity of the thin-shell factory just exceeds the requirement; so it is suggested that train running speeds should not exceed 150 km/h and that the thin-shell factory needs to strengthen the monitoring and protection of its integrity.
Introduction
With the rapid development of China's economy and urbanization in recent years, a transportation system compatible with modern cities has also developed in an all-round, multi-level, and three-dimensional manner; in particular, railroads have expanded, being favored by customers because they offer large capacity, high speed, punctuality, convenience, and comfort [1]. Transportation systems have played an important role in promoting the economic development of urban areas and facilitating urban travel; however, the consequent traffic vibration problem has become a growing concern.
Xia et al. [2] studied the vibration response of the Beijing Liao dynasty Liangxiang Tower in the horizontal direction under different train excitations through field tests and numerical simulations and found that the vibration velocity response of the Liangxiang Tower under the action of freight trains was greater than that of passenger trains. Qiao et al. [3] carried out field tests and a theoretical analysis of the vibration elevation response mechanism of the ancient city wall of Jingzibao based on various working conditions and showed that the vibration velocity at the top of the wall was larger than that at the bottom. Further, the vibration amplification effect was mainly influenced by vehicle load and driving speed. Zhang et al. [4] studied the dynamic characteristics of the Xi'an Drum Tower and its dynamic response under different operation modes of Metro Line 6, assessing its vibration performance according to the relevant national codes. Tian et al. [5] found that the vibration acceleration amplitude of high-speed railroad tunnel lining arches increased with an increase in vehicle speed, and the lateral and vertical vibrations of the arches showed different transmission laws. Mykola Karpenko et al. [6] conducted theoretical and experimental research on a composite pneumatic tire used in transport engineering, and the obtained results indicate that the offered methodology can be used in numerical simulations for composite tire investigations and considering material viscoelastic properties. Masoud et al. [7] comparatively studied the vibration characteristics of ground and subway trains by examining the vibrations of buildings at six locations in the Boston area. Hinzen [8] analyzed the dynamic response of Cologne Cathedral under vibratory loads from subway trains through field measurements, evaluated the safety of the structure, and proposed vibration-damping measures. Javad et al. [9] established a finite-element model of the track and surrounding buildings to predict the safe distance between the subway line and the buildings and verified the model's validity and practicality. Agostinacchio et al. [10] studied the vibration response induced by heavy vehicles under different road surfaces and found that vehicle dead weight, traffic volume, and road surface unevenness are the factors that influence vibration transmission. Ju et al. [11] studied and compared the dynamic response of finite-element analysis and field measurements and established a finite-element model that can be used to simulate soil vibrations caused by high-speed trains traveling over bridges.
The vibrations generated by train operations can cause a dynamic response in surrounding buildings and have a serious impact on the safety of buildings [12]. Old industrial buildings are the birthplace of modern industry and have high historical, scientific, and social value. Therefore, it is important to conduct research on the influence of train vibrations on old industrial buildings. This study considered the thin-shell factory of Changleyuan in Baoji City as the research object and analyzed its dynamic characteristics via a field dynamic test and a finite-element numerical simulation; further, the dynamic response of the factory under the action of train vibration load was studied, and a vibration evaluation of the factory was carried out.
Project Overview
The thin-shell factory is the most well-preserved anti-war industrial site in China. It is famous at home and abroad and known as "one of the greatest miracles of China during the war" [13]. The thin-shell factory was the first industrial production line supporting the war in the northwest, and it is also a testament to the patriotism of industry entrepreneurs. It has been a witness to the "industrial cooperation" movement and the birthplace of modern industry in Baoji. Its protection and utilization provide a good example of the preservation of modern industrial heritage and support the economic development of the surrounding area.
The thin-shell factory is located in the flat area at the foot of Changleyuan; the structure is situated just over 30 m away from the Longhai Railway (Figure 1). The three groups of buildings constitute six factories in total, each with a depth of 15 m, a width of 20 m, and a surface area of 300 m² (Figure 2). The thin-shell factory as a whole faces south; each factory has four doors, and every two factories are connected by a door in the middle. The factories have similar structural features, such as the brick masonry walls; the height of the east and west side walls (6 m), each of which has four brick columns; and the highest point of the north and south walls (8.5 m), each of which has two brick columns. In addition, the top of the factory exhibits a two-way arch structure; the span of the main arch is 14.5 m, with a vector height of 2.6 m, and the span of the secondary arch is 2 m, with a vector height of 0.55 m. Each factory has 10 groups of these arches, which are connected in the direction of the secondary arch to form the overall large-span roof. The arch structure is made of square bricks, forming a unique thin-shell structure. This approach enabled the construction of large-span structures at a time of extreme material shortages, something that could not have been achieved using wood. This type of large-span factory provided excellent conditions for industrial development. A two-way arch thin-shell structure of this sort is relatively rare in China, and it has an important role in the study of the development of the construction industry at that time. An image of the arch structure is shown in Figure 3.
Field Dynamic Test
Donghua's 32-channel DH5983 portable dynamic signal test and analysis system was used for on-site dynamic testing. The data acquisition system was connected to a computer for real-time signal acquisition, storage, display, and analysis. The acquisition equipment is shown in Figure 4. A total of six measurement points were selected, including three at the bottom of the arch (P1-P3) and three at the top of the arch (P4-P6). First, the pickup shaker was fixed to the plasticine, and then the plasticine was fixed at each measurement point. Two pickup shakers were placed at each measurement point, one for the horizontal (east-west) direction and the other for the vertical direction. The arrangement of the measurement points is shown in Figure 5. The first structural dynamic test, that is, the measurement of the dynamic characteristics of the thin-shell factory, was performed under the action of ground pulsations. The velocity and acceleration at each measurement point were acquired separately, and the acquisition time for each measurement was 10 min. The signals were pre-collected for at least 3 min before each acquisition to ensure the reliable connection of the test equipment. Velocity and acceleration were measured in the three groups, and no trains or pedestrians passed next to the test area during the test.
Subsequently, a train dynamic test, measuring the dynamic response of the thin-shell factory under the vibration load of a train, was conducted, and the velocity and acceleration for each measurement point were acquired.
Results Pre-Processing
In the field of dynamic measurement, human influence as well as external factors can introduce bias, often resulting in interference components being mixed into the collected vibration signal. Therefore, before analyzing the acquired data, it was necessary to pre-process them. Using DAS 2.0 software, the acquired data were pre-processed in four steps: removing abnormal signals, removing the DC offset (de-DC), digital filtering, and eliminating trend terms.
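These four steps correspond closely to standard signal-processing routines. A minimal sketch of an equivalent pipeline is shown below; the sampling rate, filter order, and pass band are assumptions, since the DAS 2.0 settings are not reported here, and the outlier-removal step is omitted.

```python
import numpy as np
from scipy import signal

fs = 1000.0  # assumed sampling rate, Hz

def preprocess(v):
    """De-DC, detrend, and band-pass filter one velocity record."""
    v = np.asarray(v, dtype=float)
    v = v - v.mean()                      # remove the DC component
    v = signal.detrend(v, type="linear")  # eliminate the linear trend term
    # Zero-phase band-pass; 0.5-100 Hz comfortably brackets the modal range.
    sos = signal.butter(4, [0.5, 100.0], btype="bandpass", fs=fs, output="sos")
    return signal.sosfiltfilt(sos, v)

# Example: a 6.24 Hz mode buried in drift and noise.
t = np.arange(0.0, 10.0, 1.0 / fs)
raw = 0.02 * t + np.sin(2 * np.pi * 6.24 * t) + 0.1 * np.random.randn(t.size)
clean = preprocess(raw)
```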
After the pre-processing was completed, the time and frequency domain waveforms of velocity and acceleration for each measurement point (P1-P6) were obtained. The waveforms of the velocity at P1 under the action of ground pulsations, for example, are shown in Figures 6 and 7.
Dynamic Characteristic Analysis of the Thin-Shell Factory under the Action of Ground Pulsation
The dynamic test treated the factory as a multi-mass structure, and the factory was made to vibrate under natural excitation. Since natural pulsation is random, the vibration of the factory must also be random. Based on random vibration theory, the ground pulsation can be regarded as a smooth random process, whereas the thin-shell factory can be regarded as a linear system, where both the damping and the self-oscillation frequency do not change with time; thus, the pulsation response of the factory is a smooth random process. For this process, the structure has the following relationship between the excitation and response [14]:

G_yy(ω) = |H(ω)|² · G_ff(ω)

In the formula: H(ω) is the transfer function of the structure, ω is the circular frequency of the vibration, G_yy(ω) is the self-power spectrum of the structural reaction, and G_ff(ω) is the self-power spectrum of the ground pulsation.
There can be several sources of noise present in the input signal while measuring dynamic characteristics, such as environmental vibrations, wind pulsations, and other vibration signals, which can obscure the acquisition of accurate input excitation signals. Thus, the input excitation signal can be approximated as finite-bandwidth white noise, meaning that G_ff(ω) is assumed to be a constant, so the behavior of the structural reaction power spectrum is reflected by the transfer function. If the excitation frequency is close to the structure's self-oscillation frequency, the transfer function will have a peak, and, according to this principle, the inherent frequency of the structure can be determined.
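Under this white-noise assumption, the procedure reduces to estimating the self-power spectrum of the response record and picking its dominant peaks. A minimal sketch is shown below; the Welch-estimator settings, record length, and peak-selection parameters are illustrative assumptions.

```python
import numpy as np
from scipy import signal

def natural_frequencies(v, fs, n_peaks=2):
    """Pick resonance peaks from the self-power spectrum of a response record."""
    f, pxx = signal.welch(v, fs=fs, nperseg=4096)  # estimate of G_yy(omega)
    # With G_ff approximately constant, peaks of G_yy mark |H|^2 resonances.
    peaks, props = signal.find_peaks(pxx, prominence=pxx.max() * 0.05)
    top = np.argsort(props["prominences"])[::-1][:n_peaks]
    return np.sort(f[peaks[top]])

# Synthetic check: two modes near the measured 6.24 Hz and 18.50 Hz values.
fs = 1000.0
t = np.arange(0.0, 60.0, 1.0 / fs)
v = (np.sin(2 * np.pi * 6.24 * t) + 0.4 * np.sin(2 * np.pi * 18.5 * t)
     + 0.2 * np.random.randn(t.size))
print(natural_frequencies(v, fs))  # approximately [6.2, 18.5]
```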
The above methods and principles were followed, and the frequencies corresponding to the velocity and acceleration were averaged to obtain the first-order and second-order inherent frequencies of the thin-shell factory [15], as shown in Table 1.

Dynamic Response Analysis of the Thin-Shell Factory under Train Vibration Load

Figures 8 and 9 show the velocity and acceleration amplitudes at each measurement point of the thin-shell factory under train excitation, and it can be seen that:
1. With respect to velocity and acceleration, for all of the measurement points (P1-P6), the vertical amplitudes were larger than the horizontal amplitudes.
2. With respect to velocity and acceleration, for the three measurement points at the bottom of the arch, whether horizontal or vertical, the amplitude showed a decreasing trend (P1 > P2 > P3). Similarly, for the three points at the top of the arch, the amplitude also showed a decreasing trend (P4 > P5 > P6), indicating that the farther away from the railroad, the smaller the vibration.
3. When comparing the amplitude of velocity between the measurement points located at the same distance from the railroad, whether horizontal or vertical, the amplitude of the velocity was not significantly different between P1 and P4, P2 and P5, and P3 and P6, indicating that the velocity of the vibration at the bottom of the arch was basically the same as that at the top of the arch. When comparing the acceleration amplitudes of the measurement points located at the same distance from the railroad, whether horizontal or vertical, the acceleration amplitudes were as follows: P1 > P4, P2 > P5, and P3 > P6, indicating that the acceleration of the vibration at the bottom of the arch was greater than that at the top of the arch, with an average amplitude of 40.64%.

The ABAQUS 6.14 finite-element software was used to generate a three-dimensional finite-element model of the thin-shell factory; shell cells were used based on the actual size and the characteristics of the structure, and the whole model was divided into 63,567 cells. The model is illustrated in Figure 10. The factory consists mainly of a concrete floor, brick walls, concrete beams, and brick arches. The physical parameters of each material, according to the current survey documents of the thin-shell factory and a review of the relevant codes [16], are listed in Table 2.
Model Checking
The ABAQUS 6.14 software was used to solve the thin-shell factory structure, and the self-vibration frequency and vibration pattern of the structure were calculated by implementing the Lanczos algorithm. It was found that the vibration in the horizontal direction was the overall vibration of the factory, whereas the vibration in the vertical direction was the individual vibration of the double-curved brick arch. The simulation results were compared with the field dynamic characteristic test results, and the comparison and the first two orders of vibration patterns are shown in Table 3 and Figure 11, respectively. The results indicate that the self-oscillation frequencies obtained from the numerical simulation were greater compared with the field measurements. This might have been because the numerical simulation was carried out based on the assumption of complete elasticity. In contrast, the thin-shell factory, which is relatively old in construction, may have suffered from structural damage because of degradation of building-material properties, structural cracking, and other factors. Consequently, this could have led to degradation of structural stiffness, and the simulation results were greater than the results obtained from the measurements [17]. The maximum error rate was 7.14%, indicating that the errors of both measurements were within the acceptable range, and the reliability of the finite-element model was initially verified.
Establishment of the Finite-Element Model for the Soil Structure
The soil body was modeled using the solid cell approach and meshed using 17,395 C3D8 linear hexahedral cells. Chain-link constraints were used around and at the bottom of the soil body [18].
The contact relationship between the soil body and the factory structure can be defined as a tie constraint, so that the deformation between the two is common [19]. The overall model is shown in Figure 12. Based on the literature [20] and the current survey documents of the thin-shell factory, the following mechanical parameters for the soil were used: elasticity modulus, E = 29,400 kPa; density, ρ = 1900 kg/m³; and Poisson's ratio, µ = 0.25.
Simulation of Train Vibration Load
Train load is transferred through the rail to the rail sleeper and then to the substructure; therefore, for a fixed-wheelbase train, its effect on the substructure can be represented as the superposition of a series of excitations.
The magnitude of the train vibration load is mainly determined by the vehicle's weight and wheels and by the track structure. In this study, the excitation force function method [23][24][25][26] was used to calculate the train vibration load. In this method, the train vibration load is composed of the static load of the wheels, the vibration load caused by the wheels and other factors, the periodic vibration load generated by wheel-wear eccentricity on the track structure, and the forced vibration load of the train caused by untimely construction and track maintenance. The calculation formula for the train vibration load, F(t), is as follows:

F(t) = P0 + P1·sin(ω1·t) + P2·sin(ω2·t) + P3·sin(ω3·t)

In the formula: P0 is the static wheel load; P1, P2, and P3 are the train vibration load amplitudes under three control conditions, namely, smoothness of traffic, additional dynamic load applied on the line, and wear of the track waveform, respectively (Table 4); and ωi is the circular frequency of the uneven wavelength corresponding to the train running speed, v, and can be expressed by the following formula:

ωi = 2πv / Li

where Li is a typical wavelength corresponding to the three control conditions mentioned in Table 4 and t is the time. If the under-spring mass of the train is M0, then its corresponding vibration load amplitude, Pi, is:

Pi = M0 · αi · ωi²

where αi is the typical vector height corresponding to the three control conditions listed in Table 4.
Based on domestic and foreign railroad standards and field observations, the following values were assumed: the axle weight of the vehicle was set as 24 t, the under-spring mass as M0 = 1990 kg, the unilateral static wheel weight of the train as P0 = 120 kN, and the speed as 60 km/h. Then, based on the geometric unevenness management values presented in Table 4, the L and α values were set as follows: L1 = 10 m, α1 = 3.5 mm; L2 = 2 m, α2 = 0.4 mm; and L3 = 0.5 m, α3 = 0.08 mm. The load time curve of the train was plotted in Origin 2021 software, and the load was shown to vibrate in the range of 110-130 kN, as shown in Figure 13.
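With these values, the excitation force function can be evaluated directly. The sketch below reproduces the calculation, assuming the form of the excitation force function reconstructed above; the resulting range closely matches the 110-130 kN band reported for Figure 13.

```python
import numpy as np

P0 = 120e3         # unilateral static wheel load, N
M0 = 1990.0        # under-spring mass, kg
v = 60.0 / 3.6     # train running speed, m/s
L = np.array([10.0, 2.0, 0.5])               # typical wavelengths L1-L3, m
alpha = np.array([3.5e-3, 0.4e-3, 0.08e-3])  # vector heights alpha1-alpha3, m

omega = 2.0 * np.pi * v / L    # circular frequencies omega_i = 2*pi*v/L_i
P = M0 * alpha * omega ** 2    # amplitudes P_i = M0 * alpha_i * omega_i^2

def train_load(t):
    """Excitation force function F(t) = P0 + sum_i P_i*sin(omega_i*t)."""
    return P0 + np.sum(P[:, None] * np.sin(omega[:, None] * t), axis=0)

t = np.linspace(0.0, 2.0, 4001)
F = train_load(t)
print(f"F(t) range: {F.min() / 1e3:.1f}-{F.max() / 1e3:.1f} kN")  # ~110-130 kN
```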
Dynamic Response Checking
The train vibration load simulated by the excitation force function method was entered at the corresponding position in the model. Then, the acceleration response curves for each measurement point were extracted, and the amplitude values were obtained and compared with the corresponding field-measured data [27]; the two matched well, as shown in Figure 14 (owing to space limitations, only the acceleration curve of P2 is listed) and Figures 15 and 16. In summary, the calculation results of the model can quantitatively reflect the actual vibration response for each measurement point when a train passes the thin-shell factory, which further verifies the reliability of the finite-element model.
Analysis of the Dynamic Response of the Thin-Shell Factory under Different Train Vibration Loads
On the basis of the above calculations, the running speed of the train was varied to study the effect of the vibration load on the dynamic response of the thin-shell factory at different train running speeds. Taking train running speeds of 80 km/h, 100 km/h, 120 km/h, 150 km/h, and 250 km/h (high-speed rail) as five cases, the variation laws of the acceleration and velocity amplitudes at each measurement point of the thin-shell factory were determined, as shown in Figure 17 (where the horizontal direction indicates the east-west direction).
From Figure 17, it can be seen that the vertical acceleration amplitude, vertical velocity amplitude, horizontal acceleration amplitude, and horizontal velocity amplitude were all positively correlated with train running speed. For every 20 km/h increase in train running speed, the average increase in the vertical acceleration amplitude, vertical velocity amplitude, horizontal acceleration amplitude, and horizontal velocity amplitude for each measurement point of the thin-shell factory was 35.4%, 29.8%, 23.7%, and 12.5%, respectively. It can be seen that the increase in train running speed has the greatest effect on the amplification of vertical acceleration amplitude and the least effect on the amplification of horizontal velocity amplitude.
Vibration Evaluation of the Thin-Shell Factory
Both domestic and foreign allowable vibration standards for building structures use the peak particle velocity (PPV) as the vibration limit value for the control of ancient buildings. The statistics for some of the vibration standards relevant to this paper are shown in Table 5. Integrity in Table 5 relates to whether a building is damaged; building damage usually refers to cumulative fatigue damage of the non-load-bearing elements of ancient buildings under the action of micro-vibrations, resulting in surface cracking, spalling, and other phenomena. Security relates to whether the structure is damaged, which usually refers to damage of the load-bearing elements of ancient buildings under strong vibration that endangers the safety of the structure.
As can be seen from Table 5, most of the vibration standards for building structures at home and abroad are proposed for the security of the structure, and the lower limit of PPV is mainly between 1.8 and 12.7 mm/s, which means that when the vibration is below this lower limit, it usually does not cause structural damage. The "Anti-industrial Vibration of Ancient Buildings Technical Specification" (GB/T 50452-2008) was proposed for the integrity of buildings. It was developed on the basis of the long-lasting effects of vibration, taking into account both the security and the integrity of buildings, with a fatigue limit proposed as the basis of the vibration standard; its requirements regarding micro-vibrations are therefore currently the most stringent among similar standards internationally. The specification specifies allowable vibration values in accordance with the type of ancient building structure, the materials used, the level of protection, and the propagation speed of elastic waves in the ancient building structure. For nationally protected heritage units of ancient masonry structure, the vibration limit is between 0.15 and 0.2 mm/s. The security and integrity of old industrial buildings are much better than those of ancient buildings, so their limits can be increased appropriately. Thus, for this factory, the security limit was assumed to be 2.5 mm/s and the integrity limit 0.27 mm/s.
The velocity amplitude for each measurement point of the thin-shell factory under different train running speeds was extracted, as shown in Tables 6 and 7, and compared with the limit values in Table 5. It is clear that when v = 150 km/h, the maximum velocity amplitude is 1.163 mm/s, which is less than the vibration control lower limit of 2.5 mm/s for the security of the thin-shell factory, such that the security of the thin-shell factory meets the requirement, and the maximum horizontal velocity amplitude is 0.272 mm/s, which just exceeds the lower limit of 0.27 mm/s for the integrity of the thin-shell factory, such that the integrity of the thin-shell factory just exceeds the requirement. When v = 250 km/h, the maximum horizontal velocity amplitude is 0.394 mm/s, which exceeds the corresponding normative limit, indicating that the integrity of the thin-shell factory does not meet the requirement completely. Therefore, it is recommended to the transportation department that the train should not run at a speed of more than 150 km/h and that the thin-shell factory strengthen the monitoring and protection of its integrity.
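The evaluation itself is a simple comparison of amplitudes against the two adopted limits. A small sketch of that check, using only the maxima quoted above (the overall maximum at 250 km/h is not quoted in the text and is therefore omitted):

```python
SECURITY_LIMIT = 2.5    # mm/s, limit adopted for structural security
INTEGRITY_LIMIT = 0.27  # mm/s, limit adopted for integrity

# Per the paper's evaluation, the overall maximum velocity amplitude is
# checked against the security limit and the horizontal maximum against
# the integrity limit.
cases = {150: {"overall_max": 1.163, "horizontal_max": 0.272},
         250: {"overall_max": None, "horizontal_max": 0.394}}

for speed, amp in cases.items():
    if amp["overall_max"] is not None:
        sec = "meets" if amp["overall_max"] <= SECURITY_LIMIT else "exceeds"
        print(f"v = {speed} km/h: security {sec} ({amp['overall_max']} mm/s)")
    integ = "meets" if amp["horizontal_max"] <= INTEGRITY_LIMIT else "exceeds"
    print(f"v = {speed} km/h: integrity {integ} ({amp['horizontal_max']} mm/s)")
```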
Conclusions
Through field dynamic tests and finite-element numerical simulations, the dynamic characteristics of a thin-shell factory with a multi-span double-curved brick arch in Changleyuan in Baoji City were analyzed, the influence of train vibration load on the dynamic response of the thin-shell factory was investigated, and a vibration evaluation of the thin-shell factory was carried out. The following main conclusions were derived:
(1) The field dynamic test results of the thin-shell factory show that the first- and second-order frequencies in the horizontal direction of the structure were 6.24 Hz and 18.50 Hz, respectively, and that the first- and second-order frequencies in the vertical direction were 9.31 Hz and 19.28 Hz, respectively.
(2) The vibration in the horizontal direction is the overall vibration of the factory, whereas the vibration in the vertical direction is the individual vibration of the double-curved brick arch.
(3) The self-oscillation frequency obtained from the numerical simulation results (which were based on the assumption of complete elasticity) was greater compared with the field measurements because of the reduced structural stiffness. Additionally, the maximum error rate was only 7.14%.
(4) Both in acceleration and velocity, the vertical vibration for each measurement point was larger than the horizontal vibration, and the farther away from the railroad, the smaller the vibration. The velocity of the vibration at the bottom of the arch was almost the same as that at the top of the arch, while the acceleration of the vibration at the bottom of the arch was significantly larger than that at the top of the arch, with an average amplitude of 40.64%.
(5) For every 20 km/h increase in train running speed, the average increase in the vertical acceleration amplitude, vertical velocity amplitude, horizontal acceleration amplitude, and horizontal velocity amplitude for each measurement point of the thin-shell factory was 35.4%, 29.8%, 23.7%, and 12.5%, respectively.
(6) When v = 150 km/h, the maximum velocity amplitude for each measurement point of the thin-shell factory was 1.163 mm/s, which is less than the security specification limit of 2.5 mm/s, such that the security of the thin-shell factory meets the requirement, and the maximum horizontal velocity amplitude was 0.272 mm/s, which is close to the integrity specification limit of 0.27 mm/s, such that the integrity of the thin-shell factory just exceeds the requirement; so it is suggested that train running speeds should not exceed 150 km/h and that the thin-shell factory needs to strengthen the monitoring and protection of its integrity.
Figure 1. A map showing the location of the thin-shell factory of Changleyuan, Baoji City.
Figure 2. An image showing the outer appearance of the thin-shell factory.
Figure 3. An image showing the arch structure in detail.
Figure 4. Acquisition equipment used for dynamic testing.
Figure 5. A graphical illustration showing the arrangement of the measurement points.
Figure 6. Time domain waveform of velocity at P1.
Figure 7. Frequency domain waveform of velocity at P1.
Figure 8. Velocity amplitude curve for each measurement point under train vibration load.
Figure 9. Acceleration amplitude curve for each measurement point under train vibration load.
Figure 10. Finite-element model of the thin-shell factory.
Figure 11. Simulation results of the first and second patterns of the thin-shell factory.
Figure 12. Overall finite-element model of the soil structure.
Figure 13. Time history curve of the train vibration load.
Figure 14. Comparison of simulated and measured acceleration time domain waveforms at P2.
Figure 15. Comparison of simulated and measured velocity amplitude curves for each measurement point.
Figure 16. Comparison of simulated and measured acceleration amplitude curves for each measurement point.
Figure 17. Vibration amplitude curves for each measurement point under different train running speeds.
Table 2. Physical parameters of the materials used in the construction of the thin-shell factory.
Table 3. Comparison of measured and simulated frequencies of the thin-shell factory (Hz).
Table 4. Geometric parameters for UK railway engineering.
Table 5. Summary of domestic and foreign ancient building control standards.
Table 6. Vertical velocity amplitude for each measuring point under different train running speeds (mm/s).
Table 7. Horizontal velocity amplitude for each measuring point under different train running speeds (mm/s). | 2023-09-24T15:20:42.301Z | 2023-09-21T00:00:00.000 | {
"year": 2023,
"sha1": "8bc8a055de86be086156c71bc4f95cf7b8674f32",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2075-5309/13/9/2400/pdf?version=1695300817",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "b76a5f2996dde8b4d232470493eb79a7fbee4e1d",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": []
} |
244799370 | pes2o/s2orc | v3-fos-license | Event Neural Networks
Video data is often repetitive; for example, the contents of adjacent frames are usually strongly correlated. Such redundancy occurs at multiple levels of complexity, from low-level pixel values to textures and high-level semantics. We propose Event Neural Networks (EvNets), which leverage this redundancy to achieve considerable computation savings during video inference. A defining characteristic of EvNets is that each neuron has state variables that provide it with long-term memory, which allows low-cost, high-accuracy inference even in the presence of significant camera motion. We show that it is possible to transform a wide range of neural networks into EvNets without re-training. We demonstrate our method on state-of-the-art architectures for both high- and low-level visual processing, including pose recognition, object detection, optical flow, and image enhancement. We observe roughly an order-of-magnitude reduction in computational costs compared to conventional networks, with minimal reductions in model accuracy.
Introduction
Real-world visual data is repetitive; that is, it has the property of persistence. For example, observe the two frames in Fig. 1 (a). Despite being separated by one second, they appear quite similar. Human vision relies on the persistent nature of visual data to allocate limited perceptual resources. Instead of ingesting the entire scene at high resolution, the human eye points the fovea (a small region of dense receptor cells) at areas containing motion or detail [51]. This allocation of attention reduces visual processing and eye-to-brain communication.
Processing individual frames using artificial neural networks has proven to be a competitive solution for video inference [55,60]. This paradigm leverages advances in image recognition (e.g., pose estimation or object detection) and processes each frame independently without considering temporal continuity, implicitly assuming that adjacent frames are statistically independent. This assumption leads to inefficient use of resources due to the repeated processing of image regions containing little or no new information.
There has been recent interest in leveraging temporal redundancy for efficient video inference. One simple solution is to skip processing image regions [45] containing few changes in pixel values. However, such methods cannot recognize persistence in textures, patterns, or high-level semantics when it does not coincide with persistent pixel values. See Fig. 1 (a). Because neural networks extract a hierarchy of features from their inputs, they contain a built-in lens for detecting repetition across many levels of visual complexity. Shallow layers detect low-level patterns, and deep layers detect high-level semantics. Temporal repetition at a given level of complexity translates to persistent values at the corresponding depth in the neural hierarchy [14]. Based on this observation, we propose Event Neural Networks (EvNets), a family of neural networks in which neurons transmit (thereby triggering downstream computation) only when there is a significant change in their activation. By applying this strategy over all neurons and layers, we detect and exploit temporal persistence across many levels of complexity.

Fig. 1. (a) Over this time, some areas of the image maintain consistent pixel values (sky region). However, these areas only represent a small fraction of the frame. In other regions, the pixel values change but the textures (vertical lines) or semantics (tree branches) remain the same. Each type of persistence corresponds to a different depth in the neural hierarchy. EvNets leverage temporal persistence in video streams across multiple levels of complexity. (b) EvNets yield significant computation savings while maintaining high accuracy. (c) Event neurons have state variables that encode long-term memory, allowing EvNets to perform robust inference even over long video sequences with significant camera motion. A network without long-term memory (left) fails to correctly track the object due to gradual error accumulation.
One of the defining features of EvNets is that each neuron has state variables that provide it with long-term memory. Instead of re-computing from scratch for every new input, an EvNet neuron accumulates information over time. Long-term memory allows EvNets to perform robust inference over long video sequences containing significant camera motion. See Fig. 1 (c).
We design various structural components for EvNets, both at the individual neuron level (memory state variables) and the network level (layers and transmission policies). We recognize that transmission policies, in particular, are critical for achieving a good accuracy/computation tradeoff, and we describe the policy design space in detail. We show that, with these components, it is possible to transform a broad class of conventional networks into EvNets without re-training. We demonstrate our methods on state-of-the-art models for several high- and low-level tasks: pose recognition, object detection, optical flow, and image enhancement. We observe approximately an order-of-magnitude reduction in arithmetic operations with minimal effects on model accuracy.
Scope and Limitations. In this paper, we focus on the theoretical and conceptual properties of EvNets. Although we show results on several video inference tasks, our goal is not to compete with the latest methods for these tasks in terms of accuracy. Instead, we show that, across a range of models and tasks, EvNets can significantly reduce computational costs without decreasing accuracy.
In most of our analyses we do not assume a specific hardware platform or computation model. We mainly report arithmetic and memory operations (a platform-invariant measure of computational cost) instead of wall-clock time (which depends on many situational variables). An important next step is to consider questions relating to the design of hardware-software stacks for EvNets, in order to minimize latency and power consumption.
Related Work
Efficient Neural Networks. There are numerous methods for reducing the computational cost of neural networks. Many architectures have been designed to require fewer parameters and arithmetic operations [18,22,29,30,41,59]. Another line of work uses low-precision arithmetic to achieve computation savings [8,21,40,49]. Our approach is complementary to both architecture-and precision-based efficiency methods. These methods reduce the cost of inference on a single time step, whereas EvNets eliminate repetitive computation between multiple time steps. Pruning algorithms [15,16,25,26] remove redundant neurons or synapses during training to improve efficiency. Instead of pruning universally redundant neurons, an EvNet adaptively ignores temporally redundant neurons.
Adaptive Networks. Adaptive models modify their computation based on the input to suit the difficulty of each inference. Prior approaches consider an ensemble of sub-networks [20,50], equip a network with multiple exits [19,48], select the input resolution at inference time [7,32,57], or dynamically choose the feature resolution [53]. These methods are designed for image recognition tasks and do not explore temporal redundancy. Further, many require custom tailoring or re-training for each task and architecture. In contrast, EvNets can be readily integrated into many existing architectures and do not require re-training.
Temporal Redundancy. Several recent approaches consider temporal redundancy for efficient video inference. Many take a keyframe-oriented approach, computing expensive features on keyframes, then transforming those features for the other frames [6,23,27,44,60,61]. Other methods include using visual trackers [56], skipping redundant frames [13,54], reusing previous frame features [33], distilling results from previous time steps [36], two-stream computation [11], and leveraging video compression [52]. In general, these methods require extensive modifications to the network architecture.
Skip-convolution networks (Skip-Conv) [14] are closely related to EvNets. Skip-Conv reuses activation values that have not changed significantly between frames. However, the algorithm only tracks changes between consecutive frames and thus requires frequent re-initialization to maintain accuracy. Re-initialization leads to reduced efficiency, especially in the presence of camera motion. In contrast, the long-term memory in an EvNet maintains accuracy and efficiency over hundreds of frames, even when there is strong camera motion. See Fig. 1 (c).
Sigma-Delta networks [37] exploit temporal redundancy by quantizing the changes in neuron activations. Sigma-Delta networks have been limited so far to simple tasks like digit classification. Unlike Sigma-Delta networks, EvNets do not require quantization (although they do allow it). Compared to Sigma-Delta networks, EvNets achieve superior accuracy/computation tradeoffs (Fig. 5) and generalize better to challenging, real-world tasks (Fig. 7).
DeltaCNN [39] is concurrent work with similar goals to this paper. Like EvNets, DeltaCNN models have mechanisms for integrating long-term changes. They focus on translating theoretical speedups into GPU wall-time savings by enforcing structured sparsity (all channels at a given location transmit together). Despite its practical benefits, this design is inefficient when there is camera motion. In contrast, we emphasize broad conceptual frameworks (e.g., arbitrary sparsity structure) with an eye toward future hardware architectures (Sec. 6). Event Sensor Inference. Event sensors [28] generate sparse frames by computing a quantized temporal gradient at each pixel. Many networks designed for inference on event-sensor data have efficient, sparse dynamics [4,34]. However, they make strong assumptions about the mathematical properties of the network (e.g., that it is piecewise linear [34]). EvNets place far fewer constraints on the model and are compatible with a broad range of existing architectures. Recurrent Neural Networks (RNNs). EvNets use long-term memory to track changes, and are thus loosely connected to RNNs. Long-term memory has been widely adopted in RNNs [17]. Several recent works also propose adaptive inference for RNNs by learning to skip state updates [3] or updating state variables only when a significant change occurs [35,38]. Unlike EvNets, these approaches are tailored for RNNs and generally require re-training.
Event Neurons
Consider a neuron in a conventional neural network. Let x = [x1, x2, ..., xn] be the vector of input values and y be the output. Suppose the neuron composes a linear function g (e.g., a convolution) with a nonlinear activation f. That is,

y = f(g(x)) = f(w1·x1 + w2·x2 + ... + wn·xn),    (1)

where w = [w1, w2, ..., wn] contains the weights of the function g. In a conventional network, every neuron recomputes f and g for every input frame (Fig. 2 (a)), resulting in large computational costs over a video sequence.
Inspired by prior methods that exploit persistence in activations [3,14,37], we describe a class of event neurons with sparse, delta-based transmission.
Sparse, Delta-Based Transmission. An event neuron transmits its output to subsequent layers only when there is a sufficient change between its current activation and the previous transmission. This gating behavior makes the layer output sparse. However, a value transmission may still trigger many downstream computations (neurons receiving updated input values must recompute their activations from scratch). See Fig. 2 (b). Therefore, instead of transmitting an activation value, an event neuron transmits a delta (differential).
Suppose a neuron receives a vector of incoming differentials ∆in (one element per incoming synapse). ∆in is sparse, i.e., it only contains nonzero values for upstream neurons that have transmitted. Since g is linear, the updated g is given by

g(x + ∆in) = g(x) + g(∆in).    (2)

Instead of computing g(x + ∆in) from scratch, an event neuron stores the value of g(x) in a state variable a. When it receives a new input ∆in, the neuron retrieves the old value of g(x) from a, computes g(∆in), and saves the value g(x) + g(∆in) in a. This process only requires computing products wi·xi for nonzero elements of ∆in, i.e., computation scales linearly with the number of transmissions.
The activation function f is nonlinear, so we cannot update it incrementally like g. Whenever a changes, we recompute f(a), then store the updated value in another state variable. f is usually a lightweight function (e.g., ReLU), so the cost of recomputing f is far smaller than computing the products wi·xi.
Building Event Neurons
An event neuron consists of three state variables, as shown in Fig. 3. The accumulator (a) stores the current value of g(x). The best estimate (b) stores the current value of f(a). The difference (d) stores the difference between b and the most recent output. When a neuron receives a differential update ∆in^(t) at time t from one or more of its inputs, it updates these state variables as follows:

a^(t) = a^(t-1) + g(∆in^(t)),
d^(t) = d^(t-1) + f(a^(t)) - b^(t-1),    (3)
b^(t) = f(a^(t)).

A neuron transmits an output ∆out when some condition on d is satisfied. This condition is defined by the transmission policy. The transmission policy also gives the relationship between d and ∆out. The policies in this paper simply set ∆out = d. However, other relationships are possible, and the properties described in Sec. 3.2 hold for other relationships. After a neuron transmits, it sets d to d - ∆out. See Sec. 4.3 for more details on transmission policies.
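A compact NumPy sketch of one layer of event neurons implementing these update rules, together with the simple threshold policy of Sec. 4.3, is given below. The class layout and names are ours, not the authors' implementation, and the matrix product is written densely for clarity; a real implementation would only touch the weight columns corresponding to nonzero input deltas.

```python
import numpy as np

class EventNeuronLayer:
    """One linear layer of event neurons with state variables a, b, and d."""

    def __init__(self, W, f=lambda z: np.maximum(z, 0.0), h=0.05):
        self.W, self.f, self.h = W, f, h   # weights, activation, threshold
        self.a = np.zeros(W.shape[0])      # accumulator: running value of g(x)
        self.b = self.f(self.a)            # best estimate: f(a)
        self.d = np.zeros(W.shape[0])      # untransmitted change (long-term memory)

    def step(self, delta_in):
        """Consume input deltas, update a/b/d, and emit sparse output deltas."""
        self.a += self.W @ delta_in              # a <- a + g(delta_in)
        new_b = self.f(self.a)                   # recompute the cheap nonlinearity
        self.d += new_b - self.b                 # d <- d + (f(a) - b)
        self.b = new_b                           # b <- f(a)
        fire = np.abs(self.d) > self.h           # threshold transmission policy
        delta_out = np.where(fire, self.d, 0.0)  # transmit the accumulated delta
        self.d -= delta_out                      # d <- d - delta_out
        return delta_out
```

Because step subtracts only the transmitted deltas from d, any change suppressed by the threshold is retained rather than discarded, which is the error-retention property derived in Sec. 3.2.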
Properties of Event Neurons
Long- and Short-Term Memory. The state variable d accumulates all not-yet-transmitted corrections to the neuron output. It represents the neuron's long-term memory, whereas b represents its short-term memory. Including a long-term memory keeps the neuron from discarding information when it does not transmit. This error-retention property grants certain guarantees on the neuron's behavior, as we demonstrate next. Error Retention. Consider an event neuron receiving a series of inputs over T time steps, ∆in^(1), ..., ∆in^(T). Assume that the state variables a, b, and d have initial values a^(0), f(a^(0)), and zero, respectively. Let the transmitted output values at each time step be ∆out^(1), ..., ∆out^(T) (some of these may be zero). By repeatedly applying the neuron update rules, we arrive at the state

a^(T) = a^(0) + Σt g(∆in^(t)),
b^(T) = f(a^(T)),
d^(T) = f(a^(T)) - f(a^(0)) - Σt ∆out^(t).

See the supplementary material for a detailed derivation. Observe that d is equal to the difference between the actual and transmitted changes in the activation. This is true regardless of the order or temporal distribution of the ∆in and ∆out. Because the neuron stores d, it always has enough information to bring the transmitted activation into exact agreement with the true activation b. We can use this fact to bound the error within an EvNet. For example, we can constrain each neuron's error to the range [-h, +h] by transmitting when |d| > h.
The Importance of Long-Term Memory. For comparison, consider a model in which neurons compute the difference between b on adjacent time steps, then either transmit or discard this difference without storing the remainder. This is the model used in Skip-Conv [14]. Under this model, the final state of a neuron depends strongly on the order and temporal distribution of inputs.
For example, suppose a neuron transmits if the frame-to-frame difference exceeds a threshold δ. Consider a scenario where the neuron's activation gradually increases from 0 to 2δ in steps 0.1δ, 0.2δ, . . . , 2δ. Gradual changes like this are common in practice (e.g., when panning over a surface with an intensity gradient). Because 0.1δ < δ, the neuron never transmits and ends in a state with error −2δ. The neuron carries this error into all of its future computations. Furthermore, because the neuron discards non-transmitted activations, it has no way to know that this −2δ error exists.
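This failure mode is easy to reproduce numerically. The sketch below runs a gradual ramp through both transmission schemes; it is our illustration of the argument above, not an experiment from the paper, and it uses steps of 0.125δ rather than 0.1δ so that the floating-point sums stay exact.

```python
import numpy as np

delta = 1.0                            # transmission threshold
# Activation ramps 0 -> 2*delta; 0.125*delta steps keep float sums exact.
ramp = 0.125 * delta * np.arange(17)

# Frame-difference scheme: transmit only when the change since the previous
# frame exceeds the threshold; untransmitted change is discarded.
sent_fd, last = 0.0, ramp[0]
for b in ramp[1:]:
    if abs(b - last) > delta:
        sent_fd += b - last
    last = b

# Event neuron: d retains every untransmitted change (long-term memory).
sent_ev, d, prev = 0.0, 0.0, ramp[0]
for b in ramp[1:]:
    d += b - prev
    prev = b
    if abs(d) > delta:
        sent_ev += d
        d = 0.0

print("true change:", ramp[-1] - ramp[0])   # 2.0
print("frame-diff sent:", sent_fd)          # 0.0 -> a silent error of -2*delta
print("event sent:", sent_ev, "in d:", d)   # 1.125 sent; 0.875 <= delta retained
```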
Building Event Networks
So far, we have considered the design and characteristics of individual event neurons. In this section, we broaden our view and consider layers and networks. A "layer" is an atomic tensor operation (e.g., a convolution). By this definition, g and f as defined in Sec. 3.1 correspond to two different layers.
We define three new layer types. An accumulator layer consists of a state vector a that contains the a variables for a collection of neurons. A gate layer contains state vectors b and d and the transmission policy. A buffer layer stores its inputs in a state vector x for future use by the next layer; this is required before non-pointwise, nonlinear layers like max pooling. The state vectors a, b and d are updated using vectorized versions of the rules in Eq. 3. An accumulator layer converts its input from delta-based to value-based, whereas a gate converts from value-based to delta-based.
To create an EvNet, we insert gates and accumulators into a pretrained network such that linear layers receive delta inputs and nonlinear layers receive value inputs (Fig. 4). Note that residual connections do not require any special treatment -in an EvNet, residual connections simply carry deltas instead of values. These deltas are added or concatenated to downstream deltas when the residual branch re-joins the main branch (like in a conventional network).
We place a gate at the beginning of the network and an accumulator at the end. At the input gate, we use pixel values instead of f(a) and update b and d at every timestep. At the output accumulator, we update a sparsely but read all its elements at every frame. Throughout the model, the functions computed by the preexisting layers (the f and g) remain the same.

Fig. 4. Building Event Networks. We insert accumulators and gates to make the input to linear layers (e.g., convolutions, fully-connected layers) delta-based and the input to nonlinear layers (e.g., ReLU activations) value-based.
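To make the construction concrete, the sketch below rebuilds a toy two-layer fully-connected network with gates and accumulators placed as in Fig. 4. The Gate and Accumulator classes are hypothetical stand-ins for the layer types described above; with ReLU activations, the all-zero initial state happens to be internally consistent, but in general the first frame should be flushed through the network, as described in the next subsection.

```python
import numpy as np

class Gate:
    """Value -> delta conversion with a threshold policy and memory d."""
    def __init__(self, n, h=0.05):
        self.b, self.d, self.h = np.zeros(n), np.zeros(n), h
    def step(self, values):
        self.d += values - self.b
        self.b = np.array(values, dtype=float)   # copy the new best estimate
        out = np.where(np.abs(self.d) > self.h, self.d, 0.0)
        self.d -= out
        return out

class Accumulator:
    """Delta -> value conversion: a running sum of incoming deltas."""
    def __init__(self, n):
        self.a = np.zeros(n)
    def step(self, delta):
        self.a += delta
        return self.a

# A pretrained two-layer network y = W2 relu(W1 x), rebuilt as an EvNet.
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(16, 8)), rng.normal(size=(4, 16))
relu = lambda z: np.maximum(z, 0.0)

gate_in, acc1, gate1, acc_out = Gate(8), Accumulator(16), Gate(16), Accumulator(4)

def evnet_step(x):
    d0 = gate_in.step(x)           # input gate: pixel values in, deltas out
    a1 = acc1.step(W1 @ d0)        # linear layer consumes deltas
    d1 = gate1.step(relu(a1))      # gate re-sparsifies after the nonlinearity
    return acc_out.step(W2 @ d1)   # output accumulator is read every frame
```

Each call to evnet_step approximates the dense prediction W2·relu(W1·x) within a threshold-bounded error, while only the transmitted deltas trigger multiplications.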
Network Initialization
The equations in Sec. 3.1 define how to update the neuron state variables, but they do not specify those variables' initial values. Consider a simple initialization strategy where a = 0 and d = 0 for all neurons. Since the activation function f is nonlinear, the value of the state variable b = f (a) may be nonzero. This nonzero b usually translates to a nonzero value of a in the next layer. However, we initialized a = 0 for all neurons. We have an inconsistency.
To address this problem, we define the notion of internal consistency. Consider a neuron with state variables a, d, and b. Let b_in and d_in be vectors containing the states of the neurons in the previous layer. We say that a network is in an internally consistent state if, for all neurons, the accumulated state a agrees with the values transmitted by the previous layer and the stored activation satisfies b = f(a). The simplest way to satisfy these criteria is to flush some canonical input through the network. Starting with neurons in the first layer and progressively moving through all subsequent layers, we set a = g(b_in), b = f(a), and d = 0. In our experiments, we use the first input frame as the canonical input.

Fig. 5. Results are for a 3-layer fully-connected network on the Temporal MNIST dataset [37].
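A minimal sketch of the flush, assuming gate modules like the one sketched earlier whose cleared state forces a full (value) transmission on the next forward pass; the function and attribute names are hypothetical:

```python
import torch

def flush(layers, gates, canonical_input):
    for g in gates:
        g.b, g.d = None, None       # cleared gates re-transmit full values
    with torch.no_grad():
        x = canonical_input
        for layer in layers:        # one pass sets a = g(b_in), b = f(a), d = 0
            x = layer(x)
    return x
```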
Transmission Policies
In addition to its locality, each policy has a granularity. The granularity defines how m-values are shared between neurons. A chunked policy ties neurons together into local groups, producing one value of m for each group. Neurons in the same group fire in unison. This can be desirable in practice because it permits easy parallelization on hardware. In contrast, a singular policy assigns every neuron a separate value of m, so each neuron fires independently.

A Linear-Cost Policy. In this work, we use an isolated, singular policy based on a simple threshold. Specifically,

m_i = H(|d_i| − h_i),

where H is the Heaviside step function and h_i is the threshold for neuron i.
A key advantage of this policy is its low overhead. On receiving an incoming transmission, a neuron evaluates |d| > h (one subtraction) in addition to the usual updates to a, d, and b. Neurons not receiving any updates (e.g., those in a static image region) do not incur any overhead for policy computations. In other words, the policy's cost is linear in the number of updated neurons. Combined with the linear cost of computing the neuron updates, this results in EvNets whose overall cost scales linearly with the amount of change in the input, not with the quantity of input data received. This linear cost has important implications for networks processing data from high-speed sensors (e.g., event sensors [28] or single-photon sensors [10]). Here, the differences between adjacent inputs are often minuscule, and the cost of a policy with fixed per-frame overhead (e.g., a Gumbel gate [14]) could come to dominate the runtime. EvNets with a linear-overhead policy are a natural solution for processing this type of high-speed data.

Policy Design and Quantization. When a policy is both isolated and singular, we can characterize the functions M(d) and P(d) by scalar functions M(d_i) and P(d_i). Taking the product M(d_i) · P(d_i) gives a response function R(d_i) that describes the overall behavior of the neuron. Some response functions employ quantization to reduce the cost of computing dot product terms w_i x_i (Eq. 1). Sigma-Delta networks [37] use a rounding policy to quantize neuron outputs; a neuron transmits if this rounded value is nonzero. This rounding policy has significantly worse accuracy-computation tradeoffs (Fig. 5(b)) compared to our proposed policy. This might be caused by coupling the firing threshold with the quantization scale. To increase its output precision, a Sigma-Delta network must reduce its firing threshold, possibly resulting in unnecessary transmissions.
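The contrast between the two response functions can be sketched in a few lines; the threshold h and quantization step q are illustrative values:

```python
import numpy as np

def response_threshold(d, h=0.25):
    return np.where(np.abs(d) > h, d, 0.0)   # silent in [-h, h], exact outside

def response_rounding(d, q=0.25):
    return np.round(d / q) * q               # fires whenever |d| >= q/2,
                                             # but only in steps of q

d = np.linspace(-1.0, 1.0, 9)
print(response_threshold(d))
print(response_rounding(d))
# With rounding, higher output precision (smaller q) forces a lower effective
# firing threshold (q/2), coupling the two and causing extra transmissions.
```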
Experiments and Results
EvNets are widely applicable across architectures and video inference tasks. Any network satisfying a few basic requirements (i.e., frame-based and composing linear functions with nonlinear activations) can be converted to an EvNet without re-training. To demonstrate this, we select widespread, representative models for our main experiments: YOLOv3 [41] for video object detection and OpenPose [5] for video pose estimation. Additionally, we conduct ablation experiments and report results on low-level tasks (optical flow and image enhancement).
In the supplement, we include additional results on HRNet [47] for pose estimation. We also include ablations for the effect of layer depth on computation cost, variations in computation over time, the effect of granularity on savings, improved temporal smoothness, and a comparison to simple interpolation.
Video Pose Estimation
Dataset and Experiment Setup. We conduct experiments on the JHMDB dataset [24] using the widely adopted OpenPose model [5]. We use weights pretrained on the MPII dataset [2] from [5] and evaluate the models on a subset of JHMDB with 319 videos and over 11k frames, following [14]. We report results on the combination of the three JHMDB test splits. We use the PCK metric [58] with a detection threshold of α = 0.2, consistent with prior works [14].

Implementation Details. We resize all videos to 320 × 240, padding as needed to preserve the aspect ratio of the content. The joint definitions in MPII (the training dataset for OpenPose) differ slightly from those in JHMDB. During evaluation, we match the JHMDB "neck," "belly," and "face" joints to the MPII "upper neck," "pelvis," and "head top" joints, respectively.

Baselines. We consider the following baselines, all using the OpenPose model.
-Conventional: This is the vanilla OpenPose model without modifications.
-Skip-Conv: This is a variant of the Skip-Conv method with norm gates and without periodic state resets.
-Skip-Conv-8: This adds state resets to Skip-Conv by re-flushing every 8 frames to reduce the effect of long-term activation drift.

We recognize that Skip-Conv networks can also incorporate a learnable gating function (the Gumbel gate) that uses information from a local window around each neuron. This could also be used for our EvNets (it is local and chunked rather than isolated and singular), but it requires re-training of the network and can incur a higher computational overhead. To keep the analysis fair, we only compare to the Skip-Conv norm gate.
Results. Fig. 6 (a) presents our results. We vary the policy threshold h to characterize the accuracy/computation Pareto frontier. For both Skip-Conv and EvNets, increasing the threshold reduces the computational cost but increases the error rate. EvNets consistently outperform their direct competitors (Skip-Conv and Skip-Conv reset) on the Pareto frontier, achieving significantly higher accuracy when using a similar amount of computation. Surprisingly, compared to the conventional OpenPose model, EvNets sometimes have slightly better accuracy, even with a large reduction in computation. We hypothesize that this is caused by a weak inter-frame ensembling effect. Table 1 summarizes the accuracy and computation at the best operating point on the Pareto curve. For each model, we choose the highest threshold that reduces PCK by less than 0.5 %. To better understand the accuracy-computation tradeoff, we further report the compute and memory overhead of our EvNets (at the best operating point) in Table 2. We report overhead operations both as a number of operations and as a percentage. This percentage gives the ratio between "extra operations expended" and "number of arithmetic operations saved." For example, an arithmetic overhead of 0.12 % indicates that the neuron updates and transmission policy require 0.12 extra arithmetic operations for every 100 operations saved. Overall, EvNets add minimal operation overhead and manageable additional memory.
Video Object Detection
Dataset, Experiment Setup, and Baselines. We evaluate on the ILSVRC 2015 VID dataset [42] using the popular YOLOv3 model [41] with pre-trained weights from [43]. We report all results on the validation set with 555 videos and over 172k frames, using mean Average Precision (mAP) with an IoU threshold of 0.5 (following previous works [6,43,61]). We evaluate the same model variants as in Sec. 5.1 (conventional, EvNet, Skip-Conv, and Skip-Conv reset). Implementation Details. We resize all videos to 224 × 384, padding as needed to preserve the aspect ratio. Unlike OpenPose, YOLOv3 includes batch normalization (BN) layers. BN gives us a convenient way to estimate the distribution of activations at each neuron. We use this information to adjust the threshold values. Specifically, we scale the threshold at each neuron by 1/γ (where γ is the learned BN re-scaling parameter). This scaling makes the policy more sensitive for neurons with a narrower activation distribution, where we would expect equal-sized changes to be more meaningful.
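One possible implementation of this scaling in PyTorch is sketched below, assuming one threshold per channel (broadcast over spatial positions); the helper name, channel count, and base threshold are illustrative:

```python
import torch
import torch.nn as nn

def per_channel_thresholds(bn: nn.BatchNorm2d, h: float) -> torch.Tensor:
    gamma = bn.weight.detach().abs()     # learned BN re-scaling parameter
    return h / gamma.clamp(min=1e-6)     # h_i = h / gamma_i, shape [C]

bn = nn.BatchNorm2d(16)
h_i = per_channel_thresholds(bn, h=0.06) # one threshold per BN channel
```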
Results. Fig. 6 presents our results with varying thresholds. Again, we observe that our EvNets outperform Skip-Conv variants, and sometimes have slightly higher accuracy than the conventional model with greatly reduced compute cost. Table 1 presents the accuracy and computation at the best operating points.
Low-Level Vision Tasks
We have so far considered only high-level inference tasks. However, EvNets are also an effective strategy for low-level vision. We consider PWC-Net [46] for optical flow computation and HDRNet [12] for video frame enhancement. For brevity, we only show sample results in Fig. 7 and refer the reader to the supplementary material for more details. As with the high-level models, we observe minimal degradation in accuracy and significant computation savings.

Fig. 7. Versatility of EvNets. We demonstrate that EvNets are an effective strategy for many high- and low-level vision tasks. Across tasks, we see significant computation savings while maintaining high-quality output. This frame shows a person mid-jump. The EvNet tracks the subject correctly under rapid motion.
Ablation and Analysis
Rounding Policy Comparison. Fig. 5(b) compares our transmission policy and the rounding policy used in a Sigma-Delta network [37]. We obtain these results by evaluating the fully-connected model from the Sigma-Delta paper (with the authors' original weights) on the Temporal MNIST dataset [37]. We evaluate EvNets with thresholds of the form 10^p, where p ∈ {−1.5, −1.4, . . . , −0.3, −0.2}. We obtain results for the Sigma-Delta network using the original authors' code, which involves training the quantization threshold (the Pareto frontier is a consequence of varying a training penalty scale λ).
Ablation of Long-Term Memory. Fig. 8 shows the effect of ablating the long-term memory d (resetting it to zero after each input). We evaluate the OpenPose model on the JHMDB dataset. Other than resetting d, the two models shown are identical. Both models use a threshold of 0.05. We see that long-term memory is critical for maintaining stable accuracy.
Camera Motion. Global camera or scene motion (e.g., camera shake or scene translation) reduces the amount of visual persistence in a video. We would therefore expect camera motion to reduce the savings in an EvNet. To confirm this, we evaluate the OpenPose and YOLO models on a custom-labeled video dataset. We label the camera motion in each video as "none" (perfectly stationary camera), "minor" (slight camera shake), or "major." See the supplement for details. We test OpenPose with a threshold of 0.05 and YOLO with a threshold of 0.06. Because this dataset does not have frame-level labels for pose or object detection, we do not explicitly evaluate task accuracy. However, the thresholds we use here give good accuracy on JHMDB and VID. For OpenPose, the computation savings for "none," "minor," and "major" camera motion are 17.3×, 11.3×, and 8.40×, respectively. For YOLO, the savings are 6.64×, 3.95×, and 2.65×. As expected, we see a reduction in savings when there is strong camera motion, although we still achieve large reductions relative to the conventional model.

We implement the model in PyTorch. For the EvNet, we replace the standard convolution with a custom sparse C++ convolution. Our convolution uses an input-stationary design (i.e., an outer loop over input pixels) to skip zero deltas efficiently. In the conventional model, we use a custom C++ convolution with a standard output-stationary design (i.e., an outer loop over output pixels). We use a custom operator in the conventional model to ensure a fair comparison, given the substantial engineering effort invested in the default MKL-DNN library. We implement both operators with standard best practices (e.g., maximizing data-access locality). We compile with GCC 9.4 with the -Ofast flag.
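The input-stationary idea can be illustrated with a toy single-channel NumPy version; the function name and the 3 × 3, same-padding setup are our simplifications of the C++ operator described above:

```python
# The outer loop runs over *input* pixels, so all-zero deltas are skipped
# with a single comparison; a real kernel would additionally block for cache
# locality and vectorize the inner scatter.
import numpy as np

def sparse_conv3x3(delta, kernel):
    H, W = delta.shape
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            v = delta[y, x]
            if v == 0.0:          # the key saving: skip unchanged inputs
                continue
            for ky in range(3):   # scatter this input's contribution
                for kx in range(3):
                    oy, ox = y + ky - 1, x + kx - 1
                    if 0 <= oy < H and 0 <= ox < W:
                        out[oy, ox] += v * kernel[ky, kx]
    return out

delta = np.zeros((8, 8))
delta[3, 4] = 1.0                 # a single changed pixel
k = np.arange(9, dtype=float).reshape(3, 3)
assert np.allclose(sparse_conv3x3(delta, k)[2:5, 3:6], k)
```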
Discussion
Hardware Platforms. Mainstream GPU hardware is designed for parallel, block-wise computation with coarse control flow. EvNets with neuron-level transmission are inefficient under this computation model. In the long term, we expect to achieve the best performance on specialized hardware designed for extreme parallelism and distributed control. It is important to emphasize that event neurons do not need to operate by a shared clock. Each neuron operates independently, consuming new input as it arrives and transmitting output once it is computed. This independence permits an asynchronous, networked execution model in contrast to the ordered, frame-based model in conventional machine learning. Spiking neural networks (SNNs) [31] share this asynchronous computation model and have motivated the development of neuromorphic hardware platforms [1,9] that could be re-purposed for efficient implementation of EvNets.

Acknowledgments. This research was supported by NSF CAREER Award 1943149.

This is the supplement to our main paper. Here we present further results on low-level vision tasks (Sec. A), show additional analysis experiments (Sec. B), show preliminary results on HRNet [12] (Sec. C), provide several example model outputs (Sec. D), expand on the details of our main experiments (Sec. E), provide a derivation for Eq. 4 (Sec. F), and discuss the theoretical properties of event networks (Sec. G). For sections, figures, tables, and equations, we use numbers (e.g., Fig. 1) to refer to the main paper and capital letters (e.g., Fig. A) to refer to this supplement.
A Results on Low-Level Tasks
In this section, we describe our experiments for low-level vision tasks. We consider HDRNet for image enhancement [5] and PWC-Net for optical flow [11].
Note that these models include some specialized operations (i.e., the bilateral transform for HDRNet [5] and flow warping in PWC-Net [11]). These operations represent a small portion of the overall computational cost of the models. For simplicity, we exclude them when counting multiply-accumulate operations.
Image Enhancement. HDRNet [5] can be trained to reproduce several image enhancement effects. We use the Local Laplacian [8] version of the model. HDRNet has two subnetworks: a deep, low-resolution feature network and a shallow, high-resolution guidemap network. The guidemap network represents only about 10 % of the overall operations, and converting it to an EvNet has a noticeable effect on the visual quality of the output. Therefore, we only convert the feature network to an EvNet. We report operation savings for both the overall model (both subnetworks) and the feature network (the EvNet portion). We refer to these operation counts as "HDRNet-a" and "HDRNet-f," respectively. We use a threshold of h = 0.1 and evaluate using the PSNR metric. We resize all images to 540 × 960 before applying the model.
We use the original authors' pretrained weights. However, these weights were trained on a non-public dataset. Therefore, instead of evaluating the model against ground truth labels, we compute the agreement between the outputs of the event model and conventional model. We evaluate on a subset of the MPII video dataset [1] (see Sec. E for details on the dataset). See Table A and Table B for results. We also show example model outputs (Sec. D).

Optical Flow. We also consider the PWC-Net model [11] for optical flow computation. Unlike the other models (OpenPose, YOLO, HDRNet), which take a single frame as input, this model takes a pair of frames. We use a threshold of h = 0.01 and evaluate using the EPE metric [2]. We resize all images to 288 × 512 before applying the model. We use the original authors' weights trained on Sintel [3]. As with HDRNet, we evaluate the agreement between the event and conventional outputs. Results are shown in Table A and Table B. We also evaluate on the ground-truth labels in the Sintel training dataset. On this data, the conventional model achieves EPE 2.86 and the event model achieves EPE 3.33.
B Additional Analysis Experiments
Layer Trends. Fig. A shows the computational cost of the OpenPose model as a function of the layer depth. We show results both on the JHMDB dataset and on our custom-labeled MPII dataset (to allow analysis of the effect of camera motion). Overall, we see a reduction in the relative cost as we go deeper in the network. This highlights the importance of leveraging repetition in the deep layers of the network, not just near the input. We also observe that the early layers transmit more frequently when there is large camera motion. This corresponds to an increased number of changes in low-level features and pixel values.
Temporal Variation. Fig. B shows the per-frame computational cost of the OpenPose EvNet over the course of a video. The video in question has a static background and a moving foreground object (person). Recognizable events in the video (e.g., walking, jumping) correspond to temporary increases in the number of operations. In this way, we see EvNets living up to their promise of "only computing when something interesting is happening."

Varying Granularity. Table C shows the effect of increasing the granularity of the policy. We evaluate the OpenPose model [4] on the JHMDB dataset [7]. We test both a spatial chunking policy and a policy that chunks along the channel dimension (e.g., [6]). Because each neighborhood computes a mean of several |d|, the thresholds must be reduced to keep the accuracy from dropping. The threshold-setting strategy 0.05/√n is a heuristic that we found to give relatively stable accuracy with varying n. The results show that increasing the chunk size reduces the operation savings. However, chunking may, in practice, allow more efficient execution on certain hardware.
Comparison Against Output Interpolation. One alternate strategy for efficient video inference is to run a model once every n frames and interpolate its predictions for the remaining n − 1 frames. We apply this strategy to OpenPose on JHMDB and compare it to the EvNet approach. We use n = 16 and linearly interpolate the joint positions between model predictions (the value 16 was chosen to give a computational cost close to the EvNet in Table 1). The interpolated model expends 6.764 × 10^9 ops per frame on average and achieves a PCK of 68.52 % (a reduction of 7.88 % from the conventional model). Compare this to the EvNet in Table 1, which expends 6.780 × 10^9 ops on average while achieving a PCK score of 76.37 % (a reduction of 0.03 % from the conventional model). Compared to output interpolation, the EvNet gives much higher accuracy at a similar computation cost. Note that we trim the inputs for the interpolation model to have a length of kn + 1 frames, where k is a positive integer. This ensures that the video can be divided into uniform blocks of n frames (with one extra frame at the end). If we trim to the same length for the EvNet, it achieves a PCK of 76.82 % with an average cost of 7.265 × 10^9 ops. The conventional model achieves PCK 76.67 % at 7.055 × 10^10 ops on the trimmed video.
Temporal Smoothness. We have anecdotally observed improved temporal smoothness in the outputs of EvNets. We hypothesize that this is one of the reasons for the slightly increased accuracy for some models (e.g., Table 1) over the conventional baselines. We quantitatively measure temporal smoothness for OpenPose on JHMDB by measuring the mean L2 joint motion between frames. The average joint motion for the conventional model is 10.3 pixels. For the
C HRNet Experiments
We test HRNet [12], a state-of-the-art model for various location-based tasks (e.g., object detection), on the JHMDB pose recognition dataset [7]. We use the HRNet-W32 version of the model.
Training Procedure. We initialize with pretrained MPII weights from [12]. We fine-tune the model on JHMDB for 20 epochs using the Adam optimizer and a learning rate of 1 × 10^−5. We set aside 20 % of the training data for validation and save the model at the epoch with the lowest validation loss. JHMDB defines three train/test splits; we train and evaluate a model on each training split and average the results (accuracy and computation costs) over the three splits.
Where not otherwise specified, we adopt the training and data augmentation parameters of [6]. All of our training code will be publicly released and included with the supplementary material.
Evaluation. We evaluate three model variants: the conventional model, an EvNet, and Skip-Conv (without periodic resets). See Table D for results. We report the PCK metric (as in our experiments on OpenPose). The accuracy and savings we observe are in line with our other experiments.
D Example Outputs
Example Outputs. Fig. C, Fig. D, and Fig. E show example model outputs.
E Experiment Details
Custom MPII Dataset. Here we describe the dataset that we use in our camera motion experiments (Table 3 and Table B) and for evaluating HDRNet and PWC-Net (Table A). We take a subset of the MPII video dataset [1]: specifically, the first 246 videos that have exactly 41 frames (most, but not all, videos in MPII have 41 frames). We then label each video in this dataset as having "no camera motion" (perfectly stationary camera), "minor camera motion" (slight camera shake), or "major camera motion". These splits contain 59, 46, and 141 videos, respectively.
Overhead Counting. We count overhead operations as follows. An update to an accumulator requires one load (a), one addition (a + g(∆ in )), and one store (a). An update to a gate requires two loads (b and d), three additions (d+f (a)−b and |d| − h), and two stores (b and d). A transmission requires one load (d), one subtraction (d − ∆ out ), and one store (d).
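These per-update costs translate into overhead totals as in the following sketch; the update counts and the number of saved operations are made-up illustrative values:

```python
# Accumulator update = 1 add + 2 loads/stores; gate update = 3 adds +
# 4 loads/stores; transmission = 1 subtraction + 2 loads/stores.
def overhead_ops(n_acc_updates, n_gate_updates, n_transmissions):
    math_ops = n_acc_updates + 3 * n_gate_updates + n_transmissions
    mem_ops = 2 * n_acc_updates + 4 * n_gate_updates + 2 * n_transmissions
    return math_ops, mem_ops

math_ops, mem_ops = overhead_ops(1_000_000, 1_000_000, 50_000)
saved = 3_500_000_000    # hypothetical count of arithmetic operations saved
print(f"arithmetic overhead: {100 * math_ops / saved:.3f} %")   # ~0.116 %
print(f"memory overhead:     {100 * mem_ops / saved:.3f} %")    # ~0.174 %
```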
Tables of Results. Table E shows the complete results for OpenPose on JHMDB. These values correspond to the points in Fig. 6(a). Table F shows the complete results for YOLO [9] on VID [10], corresponding to Fig. 6(b). Table G shows the overhead operation percentages for all thresholds tested for Fig. 6.
We also show results for larger input images (352 × 480 for OpenPose and 320 × 544 for YOLO). Results for pose recognition, object detection, and overhead are given in Table H, Table I, and Table J, respectively.
F Derivation of Equation 4
The equation for a (T ) is a consequence of the update to a (t) defined in Eq. 3, combined with the linearity of g (g of a sum is equal to the sum of the g). The equation for b (T ) is a direct consequence of the update rule in Eq. 3.
The equation for d (T ) in Eq. 4 comes from combining Eq. 3 and the posttransmission subtraction of ∆ out . Let d (0) = 0 as stated in Sec. 4.2. With Eq. 3, and noting that b (t) = f (a (t) ), Adding in the post-transmission subtraction of ∆ (t) out , we have
G Thoughts on Theoretical Guarantees
For certain special cases of transmission policies (e.g., a threshold policy with h = 0), we can guarantee that the output of an EvNet will be equal to that of the equivalent conventional network. As we make the policy more selective (e.g., by increasing h), the efficiency of the EvNet improves, but its output increasingly deviates from that of the conventional network. While we currently describe this behavior qualitatively, developing the rigorous theoretical tools necessary for a quantitative description is an important next step.

We can describe a neural network as a composition of functions,

y = q_n(q_{n−1}(· · · q_1(x))).

We can think of an event network as perturbing the output of each q_i by some ϵ_i. That is,

y′ = (q_n + ϵ_n)((q_{n−1} + ϵ_{n−1})(· · · (q_1 + ϵ_1)(x))).

If we assume a threshold policy with threshold h, then ∥ϵ_i∥_∞ < h. Given these facts and some knowledge of the properties of the q_i (e.g., the distribution of their weights), can we bound the norm of y − y′? This question has important implications for applications that require accuracy guarantees and should be studied in future work.

Table G. Operation Overhead. The amount of overhead operations required for computing EvNet updates. "Math" refers to arithmetic operations (additions and subtractions) and "load/store" refers to memory access operations. Percentages are the ratio of additional operations expended for each arithmetic operation saved. For example, a memory overhead of 1 % indicates that one extra load/store is expended for each 100 arithmetic operations saved. See Table 2.
"year": 2021,
"sha1": "e8c90f58d76aabede69af77fa704a1e74e697fdd",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "e8c90f58d76aabede69af77fa704a1e74e697fdd",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Assessment of Photon Recycling in Perovskite Solar Cells by Fully Coupled Optoelectronic Simulation
An optical dyadic Green’s function framework to describe the transverse electromagnetic fields in a planar perovskite solar-cell stack is coupled to an electronic drift-diffusion model for rigorous treatment of photon recycling in the wave-optics regime for a realistic photovoltaic device. The optical model provides the local reabsorption rate as well as a detailed-balance compatible radiative prefactor, which are used in the electronic model to achieve a self-consistent solution that yields the full optoelectronic device characteristics. The presented approach provides detailed insights into the impact of photon recycling on device performance under different regimes of charge transport and recombination and can help identify the various electronic and optical losses for nonideal, realistic devices. The global efficiency of photon recycling is quantified by defining quantum efficiencies of reabsorbed radiation, while the local efficiency can furthermore be quantified by defining an effective local radiative prefactor. The model introduced here can be used to guide the design of future devices that exploit the full potential of photon recycling.
I. INTRODUCTION
In the past few years, hybrid metal-halide perovskite materials have become more and more popular for use in single-junction and multijunction (tandem) solar cells, especially in combination with crystalline silicon, where for both device architectures new record efficiencies could be reached recently (>25% for single-junction [1] and >29% for Si tandem cells [2]). Similarly, perovskite semiconductors have been gaining attention also for use in light-emitting devices. One aspect making this material ideally suited for photovoltaic energy conversion is the strong optical absorption and remarkably low nonradiative recombination of high-quality perovskites, which allows such devices to operate close to the radiative limit [3-5]. Furthermore, an exceptionally large joint density of states (JDOS) close to the band edge leads to a very sharp absorption edge in these materials [6], which in turn results in a low Stokes shift and hence a pronounced overlap of absorption and emission spectra. The combination of these properties leads to sizable photon-recycling (PR) effects due to reabsorption of internally emitted photons [7,8], with a positive effect on device performance, such as an increase in open-circuit voltage (V_OC) [9] for photovoltaic devices and an improved external quantum efficiency due to the reabsorption of guided modes and re-emission into leaky modes for light-emitting devices [10]. Modeling and simulation can play a decisive role in understanding these processes, as internal emission and reabsorption are difficult to assess experimentally.
Modeling of PR coupled to electronic transport has already been employed for some time, most prominently in GaAs (gallium arsenide) photovoltaic devices, however largely based on ray-optical approaches [11-13]. Treatments of PR beyond the ray-optical approximation in thin-film perovskite devices have been limited to purely optical estimates of the open-circuit voltage enhancement using either a detailed-balance approach [9,14,15] or a rigorous solution of Maxwell's equations in the dipole picture [16,17] for external and internal emission. The former usually relies on the generalized Kirchhoff law [18] for external emission, which correctly takes the optical environment of the full device into account, and on the van Roosbroeck-Shockley (VRS) relation [19] for internal emission, which assumes emission into an optically homogeneous medium (and which is therefore inconsistent with the aforementioned generalized Kirchhoff law). The latter calculates the internal emission taking the wave optics of the device into account. Yet, these dipole models suffer, on the one hand, from divergencies in emitted power due to nonradiative coupling to evanescent modes in the absorbing surrounding medium [20-22], which has to be circumvented by introducing nonphysical transparent regions around the dipole [16,23], and, on the other hand, from the lack of a detailed-balance compatible connection to the electronic excitation giving rise to the emission in the first place. In general, the full coupling of a wave-optical treatment of emission and reabsorption to the electronic transport problem beyond the radiative limit in a detailed-balance compatible way has not been achieved so far, as would be needed for a correct understanding of this phenomenon in thin-film perovskite solar cells. Such understanding would enable the further optimization of solar-cell designs to increase PR, the study of which has seen increased interest recently (see, e.g., Raja et al. [24] and Cheng and O'Carroll [25] for recent reviews).
In this paper we present such a rigorous and fully coupled approach and apply it to a model system of a high-V_OC device from the literature, in order to analyze the mechanisms and benefits of PR under realistic optical and electronic conditions (multilayer thin-film device, parasitic absorption, nonradiative recombination, nonideal band alignment, . . . ). The first part of the paper gives a short overview of the employed optical and electronic model and the coupling thereof. Then the description of the photovoltaic devices studied is given, followed by a quantification of the benefits of PR, an analysis of the electrical losses, and finally an assessment of the optical losses, both types of losses resulting in a deviation from the ideally achievable V_OC enhancement due to PR. The paper is wrapped up with a summary and conclusion of the findings.
II. COMPUTATIONAL MODEL
In order to describe the physical processes in the solar cell, a mathematical model consisting of two parts is employed. On the one hand, the optical characteristics of the layered system are described using a dyadic Green's function approach, which depends only on the stack geometry and the optical constants (complex refractive indices) of the materials. On the other hand, electronic transport processes are described using an established drift-diffusion model capable of simulating complex multilayer device architectures and processes including nonradiative recombination.
For the sake of space and clarity, only the crucial parts of the optical model are described here, with the fully detailed description given in Ref. [26]. For a quasi-one-dimensional (1D) system where the in-plane dimensions are assumed to be infinitely extended, the problem assumes an in-plane rotational symmetry such that it is beneficial to use cylindrical coordinates (ρ, z) with the in-plane vector ρ = r − r′. Furthermore, the real-space dyadic Green's tensor ↔G(ρ, z, z′, E_γ) is preferably expressed in terms of its partial Fourier transform ↔G(q, z, z′, E_γ) [Eq. (1)], where E_γ is the photon energy and q the in-plane photon momentum. For sources z′ lying inside or in proximity to absorbing layers, the integral in Eq. (1) diverges due to the nonradiative coupling to longitudinal (evanescent) photonic modes [20-22]. This can be circumvented by realizing that for inorganic, nonexcitonic semiconductors radiative recombination primarily couples to transverse photons, which are related to the transverse part of the Green's tensor ↔G_T(r, r′, E_γ) [21,27-29], which is obtained here by subtracting the purely longitudinal part from the total Green's tensor. For simplicity, the subscript T is omitted from now on, and ↔G directly refers to the transverse Green's tensor.
Once the transverse Green's tensor is computed for the given optical layer stack, it can be used to calculate the necessary input quantities for the electronic transport simulation. First, a detailed-balance compatible radiative recombination rate, Eq. (2), is obtained based on a general nonequilibrium quantum-kinetic framework (NEGF) [30] (and consistent with Würfel's general formulations [18]), where α, D_γ, n_r, κ, and μ_cv are the absorption coefficient, the local photonic density of states (LDOS), the real and imaginary parts of the complex refractive index, and the quasi-Fermi-level splitting, respectively. In the second expression we calculate the net emission rate by expressing the LDOS in terms of the imaginary part of the transverse Green's tensor and employing the Boltzmann approximation to express the quasi-Fermi-level splitting in terms of the electron and hole densities (n and p, respectively) and the squared intrinsic carrier density n_i². Furthermore, it is possible to derive an expression, Eq. (3), describing the local reabsorption rate caused by radiative emission at all other points z′ in the layer stack. It has been shown that such a local approach is fully compatible with global detailed-balance relations such as the generalized Kirchhoff relation between the global quasi-Fermi-level splitting and the emitted photoluminescence spectrum [18,26], which is frequently used for the characterization of solar cells [3,31,32].
Going beyond the analysis in Ref. [26], the optical model is now coupled to an electronic model to include realistic transport aspects. To this end, the two quantities from Eqs. (2) and (3) are included directly in the drift-diffusion formulation of the electronic transport problem, consisting of the Poisson equation and the electron and hole continuity equations, Eq. (4), as implemented in the optoelectronic device simulator SETFOS [33]. Here, ψ is the electrostatic potential (with the right-hand side of Eq. (4a) representing the sum of all charges present in the system), J_n and J_p are the electron and hole current densities, respectively, and R_SRH is given by the usual expression for trap-assisted Shockley-Read-Hall (SRH) recombination [34]. The radiative rate prefactor B_rad and the reabsorption rate G_reabs are computed from the optical model based on the Green's function. G_illum represents the charge generation caused by the external illumination and is calculated using the transfer-matrix method available in SETFOS [33]. To ensure consistency and detailed-balance compatibility (discussed in Ref. [26]), the same complex refractive-index data are used for the computation of all optoelectronic quantities (B_rad, G_reabs, and G_illum). The computation of G_reabs depends on the unknown electron and hole densities n and p through radiative recombination; hence, an iterative approach is required. In this approach, once the Green's functions for the given layer stack are calculated, the local generation rate due to PR is updated after each solution of n and p using the aforementioned Green's function model, which in turn gives rise to an updated n and p distribution (shown schematically in Fig. 1).
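For reference, in a standard steady-state formulation the coupled system with the source terms named above takes the following form; this is a textbook sketch, and the exact notation and sign conventions of SETFOS or of the paper's Eq. (4) may differ:

```latex
\nabla \cdot \left( \varepsilon \, \nabla \psi \right) = -q \left( p - n + N_D^+ - N_A^- \right) \quad \text{(4a)}
\frac{1}{q} \, \nabla \cdot J_n = R_\mathrm{SRH} + B_\mathrm{rad} \left( np - n_i^2 \right) - G_\mathrm{illum} - G_\mathrm{reabs} \quad \text{(4b)}
-\frac{1}{q} \, \nabla \cdot J_p = R_\mathrm{SRH} + B_\mathrm{rad} \left( np - n_i^2 \right) - G_\mathrm{illum} - G_\mathrm{reabs} \quad \text{(4c)}
```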
III. RESULTS AND DISCUSSION
The optoelectronic model introduced above can now be applied to a realistic solar-cell device structure. As model systems, the two high-V_OC device architectures presented by Liu et al. [32] are implemented, which both consist of an indium tin oxide (ITO) transparent electrode (150 nm), polytriarylamine (PTAA) as hole transport layer (HTL, 30/12 nm), a methylammonium lead iodide (MAPI) perovskite absorber layer (280/510 nm), and phenyl-C61-butyric acid methyl ester (PCBM) as electron transport layer (ETL, 45 nm). The bottom electrode consists of a thin bathocuproine (BCP) layer (8 nm) and a silver (Ag) back reflector (80 nm). Both layer stacks are displayed schematically in Figs. 2(a) and 2(b). A fit of the measured J-V characteristics is then performed using the SETFOS device simulation software [33] by varying the SRH (trap) parameters as well as the electron and hole mobilities in the MAPI, HTL, and ETL. The fitted J-V curves along with the measured characteristics are shown in Fig. 2(c) for both architectures.
The fit is performed with the additional generation due to PR being taken into account, as PR is assumed to be present also in the actual measurement. It is assumed that radiative generation and recombination are restricted to the MAPI absorber. On the other hand, trapping with SRH recombination is present throughout the entire electrically active stack, where additional thin (0.1 nm) interface layers are included at the PTAA-MAPI and MAPI-PCBM interfaces with the same electrical characteristics as MAPI but with increased SRH recombination rates in order to model strong interface recombination. The values of the parameters chosen for the drift-diffusion simulation are given within the Supplemental Material [35] and compared to the literature [32,33,36-44], alongside the complex refractive-index data used in the optical models [32,45-49].
A. Electrical-loss analysis
It is well known that PR in GaAs and perovskite solar cells (among others) leads to an increased V_OC due to an enhanced quasi-Fermi-level splitting in the active region. Often this is modeled using an effective (reduced) prefactor for the radiative recombination rate [14,50], which however lacks any spatial information. Furthermore, the effectiveness of PR strongly depends on the nonradiative recombination channels, which can quickly quench any benefit from PR if they are dominant [51]. By employing a local approach as presented here, such issues can be taken into account directly.
A comparison of the J-V characteristics of the two devices in the presence or absence of PR is shown in Figs. 3(a) and 3(b), respectively. The open-circuit voltage and power-conversion-efficiency (PCE) enhancement is shown in Fig. 3(c) for the case where nonradiative SRH recombination is taken into account, in the radiative limit (i.e., zero nonradiative recombination), as well as in a purely optical limit (neglecting any electronic transport losses and assuming a spatially constant quasi-Fermi-level splitting corresponding to the applied voltage). The enhancement in V_OC and PCE is obviously smaller in the realistic case than in the radiative limit, as nonradiative recombination quenches the internal emission and hence reduces the extent of possible PR (see also Fig. S4 within the Supplemental Material [35]). The optical limit is computed using the current-voltage characteristic of an ideal diode,

J(V) = J_SC − J_0,rad [exp(qV/k_B T) − 1],   (5)

with J_SC the optical short-circuit current and J_0,rad the radiative (ideal) dark saturation current. While J_SC is nearly equal in all cases for a given device, the dark saturation current has to be calculated from the radiative recombination current in the active absorber, with the dagger representing the case without reabsorption taken into account. J†_0,rad in Eq. (6) represents the energy-integrated local radiative emission, while the second term in Eq. (7) corrects this value by the amount that is reabsorbed in the MAPI layer. Employing such an ideal diode model, Eq. (5), implicitly assumes that the applied voltage directly governs the quasi-Fermi-level splitting in the active absorber, and therefore any voltage loss in the charge-transport layers (due to, e.g., band misalignment or internal resistance due to finite mobilities) is neglected. The values calculated by this method hence represent absolute upper limits for a given optical device structure. While device B still shows a considerable discrepancy between the radiative and the optical limit and hence room for improvement in ΔV_OC, device A already comes very close to this absolute performance limit. The fill factor for both devices under consideration of transport is, however, still far from ideal, which can be explained by reduced extraction efficiency and finite mobilities when taking the full device into account.
For a more detailed analysis of the device behavior we focus on device B, as both architectures exhibit similar behavior in this regard and a conclusion based on the characteristics of one also holds for the other device. The plots showing the data for device A can be found within the Supplemental Material [35]. Only radiative recombination in the bulk MAPI and SRH recombination in the interface layers are considered, as other contributions are negligible in comparison.
The radiative recombination rate in the MAPI is governed by the radiative rate prefactor B_rad, which is calculated from Eq. (2). By taking the photonic environment (LDOS) into account, considerable deviations from the VRS approximation arise. The VRS expression, Eq. (8), is based on detailed-balance considerations, however assuming emission into an optically homogeneous medium [19]. In Fig. 4 the radiative prefactor in the MAPI layer of device B is plotted, calculated using the VRS expression from Eq. (8) (solid line) and using our Green-function-based expression from Eq. (2). As the complex refractive index is constant in the MAPI layer, the radiative prefactor in the VRS approximation, B^VRS_rad, is also constant over the layer thickness, whereas the LDOS introduces significant spatial variation of B_rad as calculated from the Green's function.
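For reference, a textbook form of the VRS coefficient under the Boltzmann approximation is sketched below; the prefactor conventions of the paper's Eq. (8) may differ:

```latex
B^\mathrm{VRS}_\mathrm{rad} = \frac{1}{n_i^2} \, \frac{8 \pi}{h^3 c^2} \int_0^\infty n_r^2(E_\gamma) \, \alpha(E_\gamma) \, E_\gamma^2 \, e^{-E_\gamma / k_B T} \, \mathrm{d}E_\gamma
```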
As shown in Fig. 5(a), the spatially integrated recombination rates due to radiative transitions and trap-assisted SRH processes are of similar magnitude, with some variation depending on the external voltage. As mentioned, the radiative recombination occurs mainly in the bulk MAPI absorber, while the nonradiative recombination is split between the two interfaces with the CTLs, with the interface to the HTL tending to be dominant depending on the external voltage. This picture might be misleading, however, as it shows the gross radiative recombination rate as calculated from Eq. (2). In order to quantify the actual losses in the system, it is more appropriate to use an effective radiative recombination rate defined as

R^eff_rad(z) = R_rad(z) − G_reabs(z),   (9)

which describes the local net loss due to radiative recombination. If this effective radiative recombination rate is considered, the relative loss balance shifts more towards nonradiative losses, as shown in Fig. 5(b). Even though at first glance this seems like a change for the worse, a strong relative decrease in radiative recombination is a direct effect of efficient PR. Close to short-circuit conditions, R^eff_rad ≈ R_rad, as PR is negligible there. The profiles of the absolute values of R_rad(z) and G_reabs(z) for both devices are plotted in Fig. S7 within the Supplemental Material [35].
While the enhancement of the open-circuit voltage provides a direct measure of the absolute magnitude of PR in a given device, a useful quantity for the assessment and optimization of the PR efficiency can be defined in the shape of a reabsorption internal/external quantum efficiency (IQE/EQE): the ratio of the PR-induced gain in terminal current, J†_inj − J_inj, to the reabsorbed (IQE) or total (EQE) internally emitted current, where J_inj and J†_inj represent the injected current densities at the terminals with and without PR, respectively. The quantity in Eq. (10) describes the collection efficiency of the generated charges due to internal reabsorption, while the quantity in Eq. (11) additionally takes into account incomplete reabsorption of the internal emission, similar to the usual internal and external quantum efficiencies defined for incident illumination.
The two efficiencies are plotted in Figs. 6(a) and 6(b), respectively, for both devices around V_OC in the case of SRH recombination and in the radiative limit. For V ≪ V_OC the IQE goes to 1 in every case, i.e., perfect collection of internally generated carriers is observed. Close to V_OC the efficiency drops sharply and finally vanishes for V ≫ V_OC, regardless of the recombination type considered. This means that for low voltages, diffusion of such carriers is still sufficient for them to be extracted and PR has a measurable impact on the net current at the contacts, while for large forward bias diffusion of internally generated carriers is fully suppressed and they are confined inside the absorber until they eventually recombine, either nonradiatively or radiatively with photon emission into a lossy mode (outcoupled, parasitically absorbed, . . . ). At this point, there is no measurable impact of PR at the contacts anymore. This can be directly visualized by plotting the dark currents in the radiative limit with and without PR, as shown in Fig. 7. While at low bias the injection current with PR lies considerably below the injection current without PR, they become equal at higher voltages, even though the gross radiative recombination current with reabsorption, Eq. (12), stays much larger, with the difference in currents being dissipated from the system.

FIG. 7. Dark current densities of device B with and without reabsorption. In the case with reabsorption, the injected current density (J_inj) lies below the radiative recombination current density (J_rad), as only a fraction of the latter is actually lost from the system. In the case without reabsorption, the injected current density (J†_inj) equals the radiative recombination current density (J†_rad).

At the maximum power point,
however, the majority of charge carriers generated through reabsorption still contribute to the terminal current. It is interesting to note that these quantum efficiencies are not necessarily always higher for lower nonradiative recombination, as for both devices the quantum efficiencies in the radiative limit temporarily fall below those of the SRH case. In both devices this, however, happens for voltages larger than V_OC. On the other hand, the EQE_reabs shown in Fig. 6(b) follows the same functional dependence as the IQE_reabs, reduced by the fraction of internally emitted radiation which is not reabsorbed in the system. This lost fraction is slightly higher for device A due to the thinner absorber.
To increase the impact of PR, this drop in reabsorption quantum efficiency should be shifted to higher voltages if possible. This can be achieved by increasing the built-in field of the device to counteract the applied forward bias, for example, by doping the charge-transport layers. In Fig. 8 the IQE_reabs is shown for both devices for increasing doping density (n doping in PCBM and p doping in PTAA). In both devices the loss in IQE_reabs is shifted to higher applied voltages (by approximately 130 mV for device A and approximately 80 mV for device B), and therefore PR has a beneficial impact on the device current over a larger voltage range, where the impact is stronger for device A due to the thinner absorber (and hence larger built-in field in the absorber). At the same time, V_OC increases from 1.259 to 1.304 V for device A and from 1.263 to 1.288 V for device B (see also Fig. S9 within the Supplemental Material [35]).
B. Optical-loss analysis
Aggregate quantities such as the internal and external quantum efficiencies for internally emitted light can be useful to quickly assess the overall efficiency of PR in a device. However, emission and reabsorption possess a strong inherent dependency on the local photonic environment and hence the device geometry. The present modeling approach allows for a detailed assessment of PR efficiency with spatial resolution. It is possible to define an effective radiative rate prefactor in the LED sense, Eq. (13), whose second term describes the correction due to reabsorption of radiation emitted at the source point z; i.e., B^eff,LED_rad describes the emitted radiation that is irrecoverably lost from the system. It is now possible to calculate a spatial profile and to reveal possible areas of large optical losses (e.g., plasmon losses close to a metallic electrode), as shown in Fig. 9. Perfect PR would theoretically be achieved for B^eff,LED_rad(z)/B_rad(z) → 0 everywhere. This, however, would in turn mean that there is zero outcoupling, and due to the reciprocity principle also zero incoupling of external radiation, i.e., only guided modes would exist inside the device. Therefore, for a solar cell, there will always be some degree of internal emission losses.

FIG. 9. Plot of the ratio B^eff,LED_rad(z)/B_rad(z) inside the MAPI bulk layer for both devices and varying PCBM thickness. The larger the ratio, the more of the locally emitted radiation is not reabsorbed and is lost from the system.

For both devices A and B, the loss of internally emitted light increases sharply towards the ETL interface. In general, it is to be expected to have reduced PR efficiency for emission close to the absorber boundaries, as a larger portion of light escapes the active absorber. The asymmetry in the present case points, however, to the Ag back reflector as the cause of these losses, which is supported by the data obtained for varying ETL thicknesses. For ETL thicknesses below the original value of d_PCBM = 46 nm, the asymmetry increases and the losses in internally emitted radiation become large, as the active MAPI absorber is moved closer to the back electrode.
For the optical environment of the given devices, the fate of propagating photons emitted into the loss cone is described by the out-of-plane component of the Poynting vector, which is given by Eq. (14) (see Ref. [26]). The resulting photon-flux profile for device B is shown in Fig. 10(a). The strong asymmetry of the photon flux is caused by the Ag back reflector, which can be recognized by the dominant negative photon flux at the left-most and thus top-most interface. While most of the MAPI layer acts as a photon source (dS_z/dz > 0), the other layers act as photon sinks (dS_z/dz < 0). By relating the absorption of each layer in the device to the total internally emitted light, a detailed loss analysis of the latter is possible. In order to take into account light emitted into arbitrary modes instead of the loss cone only, the photon flux given by the Poynting vector as described in Eq. (14) is not sufficient, and the relative absorptances should be evaluated using the integrated reabsorption rate [shown in Fig. 10(b)], with l the layer index and Ω_l the spatial domain of layer l. The resulting absorptances are plotted in Fig. 11(a) for both devices, split into outcoupled and parasitically absorbed contributions, as well as the contribution of reabsorbed radiation in the MAPI layer. The plotted values are calculated at V_MPP = 1.07 V (same for both devices), as there is only a small variation of the relative contributions with applied voltage due to spatial shifts in radiative recombination (see Figs. S10 and S11 within the Supplemental Material [35] showing the full voltage dependence). The corresponding plots of Fig. 10 for device A are again shown within the Supplemental Material [35]. From Fig. 10(a) one might conclude that a large fraction of the radiatively emitted power is outcoupled through the front surface (while outcoupling through the back is negligible, as transmission through the Ag back reflector is small). While this is true for propagating photons in the loss cone, taking all modes into account the outcoupled fraction shrinks to 5.2% and 8.5% for devices A and B, respectively. The largest fraction for both devices is indeed reabsorbed in the MAPI absorber (71.4% and 74.8%) and contributes to PR, while parasitic absorption in the other layers amounts to roughly 23.5% and 16.7%. A more detailed look at the parasitic absorption shown in Fig. 11(b) reveals the dominant contributions of the Ag electrode and the PCBM ETL (for both devices approximately 56% and 31%, respectively), which are responsible for the reduced PR efficiency close to the ETL interface inside the MAPI. Yet, it is worthwhile to note that the Ag electrode plays a crucial role in the PR process: without it, large portions of internally emitted light would be lost through the back side of the device, and overall the benefits of a back reflector can outweigh the drawbacks when it comes to reabsorption, given that the plasmonic losses can be reduced (e.g., by using a semitransparent electrode in conjunction with a Bragg reflector).
To check the influence of the Ag back reflector, we conduct a purely optical analysis using the architecture of device A. The optical-loss analysis changes if the back electrode is made transparent, as depicted in Figs. 12 and 13. Keeping everything else the same, the optical properties (complex refractive index) of the Ag back electrode are replaced with the properties of ITO (as is the case in bifacial solar cells). The relative absorptances are plotted in Fig. 12, where the largest share of optical power is absorbed in the MAPI absorber (approximately 78%), with a slightly increased share of outcoupled light, from 5.2% to 8.5%. The photon-flux profile is shown in Fig. 13(a), which exhibits a much more pronounced symmetry in comparison with Fig. 10(a), with now increased outcoupling through the back surface.
In Fig. 13(b) the reabsorption efficiency is shown in terms of the ratio B^eff,LED_rad/B_rad for the transparent ITO and the reflecting Ag back electrode. Note that even though the losses through outcoupling increase according to the photon flux calculated using the Poynting vector, when taking all modes into account the efficiency of reabsorption is higher (lower value of B^eff,LED_rad/B_rad) in the case of the transparent ITO back electrode according to Fig. 13(b), as also confirmed by the increased relative absorptance of the MAPI layer (71.4% to 78%). The plasmonic losses introduced, e.g., by a metallic back reflector present a common problem for PR due to the internal emission into modes with large transverse photon momentum, in contrast to the propagating modes of the external illumination.
IV. CONCLUSION
A modeling approach is presented that combines an optical model based on a Green's function formalism with a charge-carrier drift-diffusion model for the full optoelectronic simulation of thin-film solar cells where reabsorption of internally emitted light (PR) plays a crucial role. Using the dyadic Green's functions, a detailed-balance compatible expression of the radiative recombination rate prefactor as well as of the internal reabsorption rate as a function of the electron and hole densities can be derived. These quantities are then directly considered in the electronic simulation to reach a self-consistent solution of the optoelectronic device equations. Two realistic MAPI single-junction device structures are used as model systems by fitting the measured J-V characteristics using the aforementioned model.
For both devices, the influence of PR could be quantified in different regimes of nonradiative recombination, where a potential enhancement in V_OC of up to approximately 50 mV can theoretically be achieved. When taking electronic losses into account, however, this enhancement reduces to approximately 40 mV (radiative limit) and approximately 15 mV (including nonradiative SRH recombination). Particularly the SRH recombination at the HTL and ETL interfaces quenches the radiative recombination, which is dominant in the bulk MAPI. It is, however, useful to distinguish between the gross and effective radiative rates, as only the effective rate describes the actual radiative loss at a given point in the device.
Further, the definition of internal and external quantum efficiencies for reabsorbed radiation, similar to the efficiencies commonly defined for external illumination, allows a quantification of the extent to which such generated charge carriers contribute to the current at the terminals as a function of the external voltage. It is shown that for voltages close to V_OC the internally generated charges become trapped in the device and no longer contribute to the device current, as they are lost to nonradiative or lossy radiative modes before extraction.
It is also shown that through the definition of an effective radiative rate prefactor B^eff,LED_rad the efficiency of PR can be resolved spatially. While in the bulk the reabsorption efficiency is quite high, it decreases towards the interfaces, where the optical environment additionally plays a crucial role, such as metallic layers introducing plasmon and absorption losses. At the same time, metallic reflectors act as useful barriers to reduce losses incurred by light outcoupling through the rear electrode, as can be analyzed directly by considering the photon fluxes calculated, again, using the dyadic Green's functions.
In conclusion, PR has a sizable effect on device performance in metal halide perovskite solar cells and should be taken into account in the design process. However, the processes involved are complex and strongly couple optical and electronic properties. Using the device simulation approach presented here, these aspects can be considered in a comprehensive and quantitative manner.
"year": 2022,
"sha1": "b6c6e6fcb2bcdf13f489e1f203e1fe4de8d8ce94",
"oa_license": "CCBY",
"oa_url": "http://link.aps.org/pdf/10.1103/PhysRevApplied.17.014023",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "a3a3ba193f9d058f54349a40dabcfff4841e05e0",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": []
} |
Effects of Inorganic Seeds on Secondary Organic Aerosol (SOA) Formation
The promotion effect of (NH4)2SO4 aerosols was dominant at low initial hydrocarbon concentration in this competition, while the reverse was true at high initial hydrocarbon concentration. This illustrates that the interplay of the different components of real atmospheric aerosols can lead to complex synergistic effects on SOA formation.
Introduction
Atmospheric aerosol has significant influences on human health (Kaiser, 2005), visibility degradation (Cheng et al., 2011), and climate change (Satheesh and Moorthy, 2005). It has been found that organic aerosols (OA) are the most abundant component of atmospheric aerosol (He et al., 2001) and that more than 50% of the total OA are secondary organic aerosols (SOA) (Duan et al., 2005). SOA are produced from the oxidation of volatile organic compounds (VOCs) followed by gas-particle partitioning of the semivolatile organic products. Among the various VOCs, aromatic hydrocarbons are the type of SOA precursor that has drawn the most attention, owing to their abundance in the air and their high contribution to SOA in urban atmospheres (Lewandowski et al., 2008). Toluene and m-xylene are two of the most abundant aromatic hydrocarbon species.
The detailed mechanisms and controlling factors of SOA formation are not yet fully understood, which is one reason why air quality models predict lower SOA levels than ambient measurements show (Volkamer et al., 2006). Using a smog chamber, the SOA formation process can be investigated under controlled experimental conditions. Series of smog chamber experiments have been conducted by different research groups to investigate the effects of background seed aerosols on SOA formation (Cao and Jang, 2007; Czoschke et al., 2003; Gao et al., 2004; Jang et al., 2002; Liggio and Li, 2008). Increased SOA formation and SOA yields were observed in the presence of acid seed aerosols. The effects of acidic seeds suggest that aerosol-phase reactions may play an important role in SOA formation (Jang et al., 2002). Interactions between the organic and inorganic components of aerosols are therefore important for further understanding the SOA formation process. Most studies conclude that acid-catalyzed aerosol-phase reactions generate additional aerosol mass through the production of oligomeric products with large molecular weight and extremely low volatility (Cao and Jang, 2007; Czoschke et al., 2003; Gao et al., 2004) and, therefore, enhance SOA formation. Uptake of semivolatile organic products onto acidic sulfate aerosols was also found to contribute to enhanced SOA formation (Liggio and Li, 2008). In these studies, (NH4)2SO4 or H2SO4 seed aerosols were widely used to study the effect of particle acidity on SOA formation from both biogenic and aromatic hydrocarbons.
Atmospheric aerosols always have a very complex composition, and studying the effects of (NH4)2SO4 or H2SO4 seed aerosols alone does not give the whole picture of the role that inorganic seed aerosols play in SOA formation. Metal-containing aerosols are important components of the atmosphere: calcium and iron are the most abundant metal species in atmospheric aerosols, and their average concentrations in PM2.5 in Beijing can be as high as about 1.2 μg m⁻³ and 1.1 μg m⁻³, respectively (He et al., 2001). In this study, we tested the effect of different inorganic seeds on SOA formation using a smog chamber. Two aromatic hydrocarbon precursors, toluene and m-xylene, were used. The effects of various inorganic seeds, including the neutral inorganic seed CaSO4, the acidic seed (NH4)2SO4, the transition-metal-containing inorganic seeds FeSO4 and Fe2(SO4)3, and a mixture of (NH4)2SO4 and FeSO4, were examined during m-xylene or toluene photooxidation in the presence of nitrogen oxides (NOx).
Experimental section
The experiments were carried out in a smog chamber described in detail in Wu et al. (2007). The 2 m³ cuboid reactor, with a surface-to-volume ratio of 5 m⁻¹, was constructed from 50 μm-thick FEP-Teflon film (Toray Industries, Inc., Japan). The reactor was located in a temperature-controlled room (Espec SEWT-Z-120) that maintains a constant temperature between 10 and 60 °C (±0.5 °C). The reactor was irradiated by 40 black lights (GE F40T12/BLB, peak intensity at 365 nm). Based on the equilibrium concentrations of NO, NO2 and O3 in a photo-irradiation experiment of an NO2/air mixture, the NO2 photolysis rate was calculated to be approximately 0.21 min⁻¹, using the method described by Takekawa et al. (2000, 2003). Prior to each experiment, the chamber was flushed for about 40 h with purified air at a flow rate of 15 L/min. In the first 20 hours, the chamber was exposed to UV light at 34 °C. In the last several hours of the flush, humid air was introduced to obtain the target relative humidity (RH). Seed aerosols were generated by atomizing salt solutions with a constant output atomizer (TSI Model 3076). To avoid hydrolysis and precipitation in the Fe2(SO4)3 salt solution, as little sulfuric acid as possible was added to it. Furthermore, to generate internally mixed seed aerosols, a mixed solution of FeSO4 and (NH4)2SO4 with an FeSO4-to-(NH4)2SO4 concentration ratio of 1:5 was used. The generated aerosols were passed through a diffusion dryer (TSI Model 3062) to remove water and a neutralizer (TSI Model 3077) to bring the aerosols to an equilibrium charge distribution. The hydrocarbon, NO and NO2 were carried by purified dry air into the chamber. The concentrations were continuously monitored at a measurement point in the reactor until they were stable, ensuring that the components in the reactor were well mixed. The experiment was then conducted for 6 hours with the black lights on.
A gas chromatograph (GC, Beifen SP-3420) equipped with a DB-5 column (30 m × 0.53 mm × 1.5 μm, Dikma) and a flame ionization detector (FID) measured the concentration of the hydrocarbon every 15 min. NOx and O3 were monitored at 1-min intervals by a NOx analyzer (Thermo Environmental Instruments, Model 42C) and an O3 analyzer (Thermo Environmental Instruments, Model 49C), respectively. The size distribution of particulate matter (PM) was measured by a scanning mobility particle sizer (SMPS, TSI 3936) in the range of 17-1000 nm with a 6-min cycle. The volume concentration of aerosols was estimated from the measured size distribution by assuming the particles were geometrically spherical and nonporous.
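For reference, the conversion from a measured number-size distribution to an aerosol volume concentration under the spherical, nonporous particle assumption mentioned above can be sketched as follows (Python); the bin diameters, counts and density are illustrative values, not SMPS data from this study.

```python
import numpy as np

# Aerosol volume concentration from an SMPS number-size distribution,
# assuming spherical, nonporous particles (as stated in the text).
d_bin = np.array([30.0, 60.0, 120.0, 240.0, 480.0])     # bin midpoint diameters [nm], illustrative
n_bin = np.array([800.0, 1500.0, 1200.0, 400.0, 50.0])  # number concentrations [cm^-3], illustrative

d_um = d_bin * 1e-3                              # nm -> um
v_particle = (np.pi / 6.0) * d_um**3             # volume per particle [um^3]
volume_conc = float(np.sum(n_bin * v_particle))  # [um^3 cm^-3]
print(f"volume concentration ~ {volume_conc:.1f} um^3 cm^-3")

# Multiplying by an assumed particle density rho [g cm^-3] gives the mass concentration,
# since 1 um^3 cm^-3 x 1 g cm^-3 = 1 ug m^-3.
rho = 1.4                                        # illustrative density [g cm^-3]
print(f"mass concentration ~ {volume_conc * rho:.1f} ug m^-3")
```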
Estimating the generated SOA mass (Mo)
Due to deposition of particles on the Teflon film, the measured aerosol concentration had to be corrected. Takekawa et al. (2003) developed a particle-size-dependent correction method in which the aerosol deposition rate constant (k(dp), h⁻¹) is a four-parameter function of particle diameter (dp, nm), as shown in equation (1). The k(dp) values for different dp (40-700 nm) were determined by monitoring the particle number decay under dark conditions at low initial concentrations (<1000 particles cm⁻³) to avoid serious coagulation. Based on more than 500 sets of k(dp) values (dp ranging from 40 to 700 nm), the optimized values of the parameters a, b, c, and d were calculated to be 6.46×10⁻⁷, 1.78, 13.2, and −0.957, respectively. It should be noted that estimating the deposited aerosol concentration with this method might introduce some error (Takekawa et al., 2003), because some scatter was apparent when fitting the k(dp) values to equation (1). To reduce the error due to wall deposition, SOA yields were calculated at the time when the measured particle concentration reached its maximum, because deposited aerosols made up a growing proportion of the aerosol concentration change in the reactor after that time.
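The functional form of equation (1) is not reproduced in the text above, so the sketch below (Python) assumes the two-term power-law form used in Takekawa-type parameterizations, k(dp) = a·dp^b + c·dp^d, together with the parameter values quoted above; the size bins, time step and mass time series are illustrative.

```python
import numpy as np

# Assumed form of equation (1) (two-term power law; the exact form is not shown above):
# k(dp) = a * dp**b + c * dp**d, with dp in nm and k in h^-1.
a, b, c, d = 6.46e-7, 1.78, 13.2, -0.957

def k_dep(dp_nm):
    """Size-dependent first-order wall deposition rate constant [h^-1] (assumed form)."""
    return a * dp_nm**b + c * dp_nm**d

# Illustrative correction of a size-resolved mass time series M[t, i] (ug m^-3):
# the mass lost to the walls in each time step is accumulated and added back.
dp = np.array([50.0, 100.0, 200.0, 400.0])   # bin diameters [nm], illustrative
dt_h = 0.1                                   # time step [h] (6-min SMPS cycle)
M = np.array([[1.0, 2.0, 1.5, 0.5],
              [1.2, 2.5, 2.0, 0.7],
              [1.1, 2.6, 2.3, 0.8]])         # measured suspended mass per bin, illustrative

deposited = np.zeros(M.shape[1])
for row in M:
    deposited += row * k_dep(dp) * dt_h      # first-order loss over this step
    corrected_total = row.sum() + deposited.sum()
    print(f"corrected PM ~ {corrected_total:.3f} ug m^-3")
```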
Calculation of SOA yields
The fractional SOA yield (Y), defined as the ratio of the generated organic aerosol concentration (Mo) to the reacted hydrocarbon concentration (ΔHC), was used to represent the aerosol formation potential of the hydrocarbon (Pandis et al., 1992). Odum et al. (1996) developed a gas/particle absorptive partitioning model to describe the observation that Y largely depends on the amount of organic aerosol mass present. Equation (2) expresses the relationship between the SOA yield and the organic aerosol mass concentration. In equation (2), i denotes the serial number of the hydrocarbon reaction products, and Ai, αi and Kom,i (m³ μg⁻¹) are the aerosol mass concentration, the mass-based stoichiometric coefficient and the normalized partitioning constant of product i, respectively. If we assume that all semivolatile products can be classified into one or two groups, equation (2) simplifies to a one-product model (i.e., i = 1) or a two-product model (i.e., i = 2). The parameters (α and Kom) can be obtained by fitting the experimental SOA yield data with a least-squares method. Since numerous compounds are actually produced by the reaction of a hydrocarbon, the parameters obtained from the simplified model only represent the overall properties of all products (Odum et al., 1996). A one-product model has been shown to be sufficiently accurate to describe the relationship between aerosol yield and mass (Henry et al., 2008; Takekawa et al., 2003; Verheggen et al., 2007). Therefore, we fitted a one-product model to our experimental SOA yield data to quantify the effects of inorganic seed aerosols on SOA formation.
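For illustration, the least-squares fit of the one-product model, Y = α·Kom·Mo/(1 + Kom·Mo), can be sketched as follows (Python with SciPy); the (Mo, Y) pairs are made-up values, not the chamber data of this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def one_product_yield(Mo, alpha, Kom):
    """One-product Odum model: Y = alpha * Kom * Mo / (1 + Kom * Mo)."""
    return alpha * Kom * Mo / (1.0 + Kom * Mo)

# Illustrative (Mo, Y) pairs [ug m^-3, dimensionless]; not the chamber data of this study.
Mo = np.array([10.0, 25.0, 60.0, 120.0, 250.0])
Y = np.array([0.035, 0.070, 0.120, 0.170, 0.220])

(alpha, Kom), _ = curve_fit(one_product_yield, Mo, Y, p0=(0.3, 0.01))
print(f"alpha = {alpha:.3f}, Kom = {Kom:.4f} m^3 ug^-1")
print("fitted yields:", np.round(one_product_yield(Mo, alpha, Kom), 3))
```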
A comparison of the temporal variation of NO and O3 during experiments with similar initial conditions (Figure 1) indicates that CaSO4 and (NH4)2SO4 seed aerosols have no significant effect on the gas-phase reactions. This result is consistent with the findings of Kroll et al. (2007) and Cao and Jang (2007) that (NH4)2SO4 and (NH4)2SO4/H2SO4 seed aerosols had a negligible effect on hydrocarbon oxidation.
Similarly, by comparing the temporal variation of particle concentrations (Figure 2) during experiments with identical initial conditions except for the seed aerosols, the effects of CaSO4 and (NH4)2SO4 seed aerosols on SOA formation were identified. In Figure 2, PMcorrected was calculated from the measured PM concentration plus the wall deposition loss, and PM0 is the seed aerosol concentration. The results indicate that the presence of the neutral aerosol CaSO4 (16-73 μg m⁻³) in the m-xylene/NOx photooxidation system has no significant effect on SOA formation. Experiments in the presence of the acidic aerosol (NH4)2SO4 show different particle profiles depending on the concentration of the introduced (NH4)2SO4 seed aerosol. In Figure 2, experiment Xyl-AS2 has a particle profile similar to that of the seed-free experiment Xyl-N5, indicating that (NH4)2SO4 seed aerosols have little effect on SOA formation when their initial concentration is low. However, when a high concentration of (NH4)2SO4 seed aerosol was introduced, SOA formation was enhanced (experiments Xyl-AS3 and Xyl-AS9) compared with the seed-free experiment Xyl-N5. Comparing experiments Xyl-AS3 and Xyl-AS9, a higher concentration of (NH4)2SO4 seed aerosol resulted in a higher SOA concentration. Therefore, the effect of (NH4)2SO4 seed aerosol on SOA formation depends on its concentration.
Table 1. Experimental conditions and results of m-xylene photooxidation: initial m-xylene concentration (HC0), initial seed aerosol mass concentration (PM0), initial seed aerosol surface concentration (PM0,S), initial NOx concentrations (NO0 and NOx,0−NO0), ratio of HC0/NOx,0, generated SOA mass (Mo), reacted hydrocarbon (ΔHC), and SOA yield (Y).

Further analysis found that the effect of (NH4)2SO4 seed aerosols on SOA yield was positively correlated with the surface concentration of the (NH4)2SO4 seed aerosols. To draw the SOA yield curves shown in Figure 3, the experiments were classified into groups by the surface concentration of (NH4)2SO4 seed aerosols (experiment Xyl-AS3 was not assigned to any group because its (NH4)2SO4 surface concentration differed from the others). The regression line for each group was produced by fitting the generated SOA mass (Mo) and SOA yield (Y) data to a one-product partition model (no regression line was drawn for experiments Xyl-CS1~2 and Xyl-AS1~3 since their SOA yields were similar to those of the seed-free experiments). As indicated in Figure 3, experiments with a higher surface concentration of (NH4)2SO4 seed aerosols had higher yield curves. As proposed by most research, acid-catalyzed aerosol-phase reactions (Cao and Jang, 2007; Czoschke et al., 2003; Gao et al., 2004) and uptake of semivolatile organic products onto acidic sulfate aerosols (Liggio and Li, 2008) enhance SOA formation. The observed SOA formation enhancement could therefore be related to the acid catalytic effect of (NH4)2SO4 seeds on particle-phase surface heterogeneous reactions and to the surface uptake of semivolatile organic products.
Effects of Fe2(SO4)3 and FeSO4 seed aerosols on SOA formation
A seed-free experiment and three experiments with Fe2(SO4)3 seed aerosols were carried out to investigate the effect of Fe2(SO4)3 seed aerosols on the photooxidation of toluene/NOx. The four experiments had identical initial conditions except for the concentration of the introduced Fe2(SO4)3 seed aerosol. Fe2(SO4)3 seed aerosols did not have an obvious effect on SOA formation, as shown by the temporal variation of the PMcorrected−PM0 concentrations in Figure 4, and they had no obvious effect on the gas-phase compounds in toluene/NOx photooxidation either. A minimal amount of acid was added to the solution used to generate the Fe2(SO4)3 seed aerosols; the introduced H+ concentration was in the range of 0.0002-0.002 μg m⁻³ in the Fe2(SO4)3 experiments, much lower than the H+ concentration in the "non-acid" experiment of Cao and Jang (2007). Therefore, we presume that the effect of the introduced sulfuric acid was negligible and that Fe2(SO4)3 seed aerosols did not have an obvious effect on SOA formation in the photooxidation of toluene/NOx. We also conducted 18 irradiated toluene/NOx experiments with and without FeSO4 seed aerosols; the conditions, generated SOA mass (Mo), and SOA yield (Y) are shown in Table 2. FeSO4 seed aerosols had no obvious effect on gas-phase compounds either, but significantly suppressed SOA formation. Figure 5 compares the temporal variation of particle concentrations during the 4.2 ppm toluene experiments (experiments Tol-N3, Tol-FS1, Tol-FS3, Tol-FS8 and Tol-FS12), conducted under identical initial conditions except for the seed aerosol concentrations. Experiments in the presence of FeSO4 seed aerosol generated less SOA than the seed-free experiment, and experiments with a higher FeSO4 seed aerosol concentration generated less SOA than those with a lower FeSO4 concentration; thus the inhibitory effect of FeSO4 aerosols on SOA yield became stronger at higher FeSO4 seed aerosol concentrations. At other toluene/NOx photooxidation concentrations, we found similar temporal variations of particle concentrations. However, as indicated in Table 2 and Figure 5, the SOA yields of experiments Tol-FS1 and Tol-FS3 are similar to that of the corresponding seed-free experiment Tol-N3. These two seed-introduced experiments (as well as Tol-FS2) were conducted at the lowest ratio of FeSO4 seed aerosol mass concentration to initial toluene mass concentration (FeSO4/toluene) and did not show an obvious effect on SOA formation compared with their corresponding seed-free experiments. In these three experiments, the mass ratios of FeSO4/toluene (assuming a particle density of 1.898 g cm⁻³, the density of FeSO4·7H2O, because of the lack of information on the amount of hydrate water) were calculated to be lower than 4.2×10⁻⁴. It is possible that most of the ferrous iron was oxidized before significant SOA mass was generated, since few FeSO4 seed aerosols were introduced and high concentrations of oxidizing substances were generated during the toluene/NOx photooxidation. Apart from these three experiments with the lowest FeSO4/toluene mass ratios, FeSO4 seed aerosols suppressed SOA formation relative to the corresponding seed-free experiments, and in our experiments the suppression ratio could be as high as 60%. We classified the experiments with FeSO4 seed aerosols into three groups by FeSO4/toluene mass ratio to plot the SOA yield as a function of the generated SOA mass (Figure 6). Experiments with different FeSO4/toluene mass ratios seemed to fall onto different yield curves.
When the FeSO4/toluene mass ratio was lower than 4.2×10⁻⁴, FeSO4 seed aerosols had a negligible effect, and the SOA yields of these experiments coincide with the yield curve of the seed-free experiments. When the FeSO4/toluene mass ratio was higher than 5.1×10⁻⁴, the SOA yield curve indicates that experiments with FeSO4 seed aerosols had lower yields than the seed-free experiments. Lower yield curves were observed for experiments with higher FeSO4/toluene mass ratios, indicating that a higher Fe/C ratio had a greater suppression effect on SOA formation from toluene/NOx photooxidation.
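For reference, the FeSO4/toluene mass ratio used to classify the experiments can be estimated from the seed volume concentration and the gas-phase mixing ratio as sketched below (Python). The conversion assumes the FeSO4·7H2O density quoted above and 25 °C, 1 atm for the ppm-to-mass conversion; the seed volume concentration and the temperature are illustrative assumptions.

```python
# Estimate the FeSO4/toluene mass ratio used to classify the experiments.
MW_TOLUENE = 92.14   # g mol^-1
MOLAR_VOL = 24.45    # L mol^-1 at 25 degC and 1 atm (assumed conditions)
RHO_SEED = 1.898     # g cm^-3, density of FeSO4.7H2O as assumed in the text

def toluene_mass_conc(ppm):
    """Toluene mass concentration [ug m^-3] from a mixing ratio in ppm."""
    return ppm * 1000.0 * MW_TOLUENE / MOLAR_VOL

def seed_mass_conc(volume_conc_um3_cm3):
    """Seed mass concentration [ug m^-3] from an SMPS volume concentration."""
    return volume_conc_um3_cm3 * RHO_SEED   # 1 um^3 cm^-3 x 1 g cm^-3 = 1 ug m^-3

toluene = toluene_mass_conc(4.2)   # the 4.2 ppm toluene experiments
seed = seed_mass_conc(3.0)         # illustrative FeSO4 seed volume concentration
print(f"toluene ~ {toluene:.0f} ug m^-3, seed ~ {seed:.1f} ug m^-3")
print(f"FeSO4/toluene mass ratio ~ {seed / toluene:.1e}")
```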
Effects of mixed (NH4)2SO4 and FeSO4 aerosols on SOA formation
Atmospheric aerosol is often a mixture of different components. We tested the effect of internally mixed (NH4)2SO4 and FeSO4 seed aerosols on SOA formation in m-xylene/NOx photooxidation. The experimental conditions, generated SOA mass (Mo), and SOA yield (Y) are shown in Table 3. To generate internally mixed (NH4)2SO4 and FeSO4 aerosols, a mixed solution of (NH4)2SO4 and FeSO4 with an (NH4)2SO4-to-FeSO4 mass concentration ratio of 5:1 was used in the atomizer. Thus the approximately 60 μm³ cm⁻³ of seed aerosols in the three experiments with mixed (NH4)2SO4 and FeSO4 seeds (Xyl-FA1~3) contained about 10 μm³ cm⁻³ of FeSO4 and 50 μm³ cm⁻³ of (NH4)2SO4.
As mentioned above, neither (NH4)2SO4 nor FeSO4 seed aerosols had an obvious effect on the gas-phase compounds, and in the experiments in this section we found that the mixed (NH4)2SO4 and FeSO4 seed aerosols had no obvious effect on the gas-phase compounds either.
In Figure 7, after wall deposition correction and subtraction of the seed aerosols, the temporal variations of particle concentrations in experiments conducted under identical initial conditions except for the seed aerosol concentrations (initial m-xylene concentrations of 1.1 ppm, 2.1 ppm and 3.2 ppm in panels (a), (b) and (c), respectively) were compared.

Table 3. Experimental conditions and results in toluene photooxidation: initial toluene concentration (HC0), initial FeSO4 seed aerosol concentration (PM0), initial NOx concentrations (NO0 and NOx,0−NO0), ratio of HC0/NOx,0, generated SOA mass (Mo), reacted hydrocarbon (ΔHC) and SOA yield (Y).
As indicated in Figure 7(a), compared with the seed-free experiment Xyl-N7, both experiment Xyl-AS10 and experiment Xyl-FA1 had higher particle concentrations, while experiment Xyl-FS1 had lower particle concentrations. Thus, in the 1.1 ppm m-xylene photooxidation, the presence of (NH4)2SO4 aerosols and of the mixed (NH4)2SO4 and FeSO4 aerosols both increased SOA formation, while the presence of FeSO4 suppressed it. In Figure 7(b) and Figure 7(c), the effects of (NH4)2SO4 seed aerosols alone (promotion) and of FeSO4 seed aerosols alone (suppression) on SOA formation were consistent with Figure 7(a). However, the mixed aerosols seemed to have different effects on SOA formation in photooxidation systems with different initial m-xylene concentrations. In Figure 7(b), experiment Xyl-FA2 had a temporal variation of particle concentration similar to that of its corresponding seed-free experiment Xyl-N8, and in Figure 7(c), experiment Xyl-FA3 had a lower particle concentration than its corresponding seed-free experiment Xyl-N9. It must be noted that the seed aerosols in experiments Xyl-FA1~3 had similar concentrations and compositions. Thus, aerosols at the same mixing ratio of (NH4)2SO4 and FeSO4 could either enhance or suppress SOA formation depending on the experimental conditions. It seems that the promotion effect of (NH4)2SO4 aerosols and the suppression effect of FeSO4 aerosols compete when both are present: the promotion effect of (NH4)2SO4 aerosols was dominant at low initial hydrocarbon concentration, while the reverse was true at high initial hydrocarbon concentration. This illustrates that the interplay of the different components of real atmospheric aerosols can lead to complex synergistic effects on SOA formation. According to the composition of the seed aerosols, the experiments with inorganic seed aerosols were classified into three groups. In Figure 8, the SOA yield (Y) is plotted as a function of the generated SOA mass (Mo) from m-xylene/NOx photooxidation, and the regression line for each group was produced by fitting the generated SOA mass (Mo) and SOA yield (Y) data to a one-product partition model. As indicated in Figure 8, experiments in the presence of (NH4)2SO4 had a higher SOA yield curve than the seed-free experiments, while experiments in the presence of FeSO4 seed aerosols had a lower one, indicating that the presence of (NH4)2SO4 and FeSO4 seed aerosols increased and decreased the SOA yield, respectively. For the experiments with mixed seed aerosols, the SOA yield curve was similar to or slightly higher than that of the seed-free experiments when the SOA mass load was low, but lower than that of the seed-free experiments when the SOA mass load was high.
Hypothesis for inorganic seed aerosols' effects
In our experiments, we observed that FeSO4 seed aerosols suppressed SOA formation while Fe2(SO4)3 seed aerosols had no effect on it. The inhibiting effect of Fe(II) appears to involve its strong reducing properties. Hydrocarbon precursors are oxidized by OH·, NO3·, etc. During the gas-phase reaction, the oxidized products usually have a lower saturation vapor pressure and, as a result, condense to the aerosol phase. When these oxidized condensable compounds (CCs) containing carbonyl, hydroxyl, and carboxyl groups (Gao et al., 2004; Hamilton et al., 2005) contact ferrous iron in the aerosol phase, they may react to produce ferric iron and less condensable compounds (LCCs) or incondensable compounds (ICs). The ferrous iron may thus stop some CCs from being further oxidized and forming low-volatility products (Hallquist et al., 2009), including oligomers (Gao et al., 2004). The experimental results also showed that the presence of neutral CaSO4 seed aerosols has no significant effect on the photooxidation of aromatic hydrocarbons, while the presence of acidic (NH4)2SO4 seed aerosols can significantly enhance SOA generation and SOA yield. A possible mechanism is shown in Figure 9. Oligomerization is an important step in SOA formation (Nguyen et al., 2011). As proposed by Kroll et al. (2007), the effect of (NH4)2SO4 seed aerosols may be attributed to acid-catalyzed particle-phase reactions that form high-molecular-weight, low-volatility products (e.g., oligomers). These processes may deplete the semivolatile CCs in the particle phase and enhance SOA formation by shifting the gas-particle equilibrium, as shown in Figure 9, forcing more CCs to condense to the aerosol phase. Since (NH4)2SO4 and FeSO4 seed aerosols may both act on the semivolatile CCs, there is a competition for CCs between forming higher-volatility products (LCCs or ICs) and forming low-volatility products (e.g., oligomers).
Figure 9. Hypothesized mechanism for the effects of inorganic seed aerosols on SOA formation: ferrous iron Fe(II) reduces or decomposes some condensable compounds (CCs), which are oligomer precursors, interrupting oligomerization and generating higher-volatility products (LCCs or ICs); acidic seed aerosols, by contrast, catalyze aerosol-phase reactions, generating oligomeric products.
Conclusion
The effects of various inorganic seeds, including the neutral inorganic seed CaSO4, the acidic seed (NH4)2SO4, the transition-metal-containing inorganic seeds FeSO4 and Fe2(SO4)3, and a mixture of (NH4)2SO4 and FeSO4, were examined during m-xylene or toluene photooxidation. Our results indicate that the presence of CaSO4 and Fe2(SO4)3 seed aerosols has no effect on the photooxidation of aromatic hydrocarbons, while the presence of (NH4)2SO4 and FeSO4 seed aerosols has no effect on the gas-phase reactions but can significantly influence SOA generation and SOA yields. (NH4)2SO4 seed aerosols enhance SOA formation and increase the SOA yield owing to the acid catalytic effect of (NH4)2SO4 seeds on particle-phase surface heterogeneous reactions, whereas FeSO4 seed aerosols suppress SOA formation and decrease the SOA yield, possibly due to the reduction of some oligomer-precursor CCs. These results reveal that many inorganic seeds are not inert during the photooxidation process and can significantly influence SOA formation. The observed effects can be incorporated into air quality models to improve their accuracy in predicting SOA and fine particle concentrations.
"year": 2012,
"sha1": "b8d070bd425a2cd5bef1ec97eb3ec24b151dd00d",
"oa_license": "CCBY",
"oa_url": "https://www.intechopen.com/citation-pdf-url/38758",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "edeb6e8b4cb4b64c26fa332d69a748dc5fa21595",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
Research on the Influence of Economic Policy Uncertainty on Stock Market Heterogeneity Based on Investor Perspective
Investors in the Chinese stock market are mainly divided into institutional investors and individual investors. It is well known that individual investors account for the vast majority of market participants and shareholding, so changes in their investment behavior and sentiment inevitably have a profound impact on the stock market. Although the proportion and size of institutional investors are smaller than those of individual investors, the concentration of their funds and of their investment behavior means they also play a non-negligible role in the stock market. This paper analyzes, from the perspective of stock market investors, whether the behavior of different investor groups affects stock market volatility to different degrees and in different directions. The analysis finds that when the shareholding proportion of institutional investors increases, the volatility of corporate stocks is effectively reduced, whereas volatility in corporate stocks rises when investor sentiment is high. When both the institutional shareholding ratio and investor sentiment are considered, economic policy uncertainty has a greater impact on corporate stock volatility.
Theoretical Basis and Literature Review
With the establishment of the Shanghai Stock Exchange and the Shenzhen Stock Exchange in 1990 and 1991, China's securities capital market was formally formed. In the early years of the capital market, blind speculation by individual investors prevailed, causing frequent abnormal fluctuations in stock prices; individual stock prices lacked the necessary link to companies' actual value, leading to instability in the stock market. Subsequently, a group of securities and fund companies were established one after another, forming the first group of institutional investors in the Chinese capital market. Institutional investors quickly entered the capital market with their more specialized theoretical knowledge and scale advantages. In mature Western capital markets, institutional investors have information and capital advantages that individual investors cannot match, which can largely avoid the harm caused by noise trading, so they are consistently regarded as rational entities that can stabilize stock market volatility. However, in the Chinese stock market, even with the continuous rise of various types of institutional investors and the continuous growth of their overall strength, the volatility of the stock market has not decreased significantly, and sudden surges and crashes have occurred, so the discussion of whether institutional investors can play a stabilizing role in the market has never stopped. The discussion focuses on whether institutional investors' investment behavior is fully rational, and whether individual investors can avoid speculative behaviors such as blind self-confidence and herding.
Scholars' research on the relationship between institutional investors and stock volatility focuses on the following three views. The first is that institutional investors aggravate stock market volatility. This conclusion is reached mainly because institutional investors have much greater financial strength than individual investors and trade much larger volumes of stocks at once, so the impact of their large transactions on the stock market is far greater than that of ordinary individual investors. Zhou (2019) studied the relationship between institutional investors and the stock market bubble during a bubble period based on a set of non-public stock account statistics, and found that institutional investors prefer stocks with larger bubbles and engage in continuous net buying. This investing behavior meant that institutional investors did not play the expected role of eliminating stock price deviations, but instead contributed to the expansion of the bubble and the increase in volatility.
The second view is that institutional investors help stabilize the stock market. Scholars who hold this view believe that the biggest difference between institutional and individual investors is that institutional investors are professional money managers: whether in analytical ability or in information collection, they have advantages that individual investors cannot match. Because institutional investors can analyze stock market information professionally and follow the "prudent person" principle, they tend toward passive trading and are rarely affected by market sentiment and noise; their investment process is programmed, which helps them seize investment opportunities, control investment risks, and suppress market volatility. Wang and Xue (2018) conducted an empirical test based on detailed position data of institutional investors from 2005 to 2017. The study finds that competition among institutional investors based on firms' private information leads to a herd effect, but this herding is not deliberately imitative pseudo-herding (Wang & Xue, 2018); institutional investors' homogeneous reaction to firm information is conducive to perfecting the information transmission mechanism, which can suppress stock price synchronicity.
The third view is that the presence of institutional investors is unrelated to stock market volatility. The theoretical basis of this view is the efficient market hypothesis: under the premise of an efficient market, stock prices reflect all useful information, and institutional investors have no informational advantage over individual investors; therefore, the investment behavior of institutional investors does not cause fluctuations in stock prices. Liu (2015) proposed that institutional investors with rational sentiment can make relatively rational judgments and choices about stock market returns, but when stock fluctuations are abnormal, irrational sentiment prevails, making institutional investors as prone to irrational judgments as individual investors, thus exacerbating stock market plunges (Liu, 2015).
Compared with institutional investors, individual investors are the largest and most influential group of investors in China's stock market. Research from this angle is mainly based on the theory of bounded rationality and the theory of cognitive bias. Traditional financial theory cannot explain the growing number of financial market anomalies, and behavioral finance theory, built on psychology and finance, has rapidly risen to become an important theoretical basis for research on the investment behavior of financial market investors. Research on investor sentiment mainly focuses on three points. The first is to verify the objectivity and evidence of investor sentiment based on psychology and experimental economics. The second is to study, based on investors' psychological biases, the role of sentiment in asset pricing and to explain some of the anomalies in financial markets. The third is the construction of investor sentiment indicators: quantitative indicators are built through various channels and methods and then used to study the impact of investor sentiment on the stock market. Similar to the conclusions on the impact of institutional investors, there is still disagreement about the impact of investor sentiment on the stock market. De Long, Shleifer, and Summers (1990) first proposed the noise trading model; when investigating the systematic risk sources of asset prices, they incorporated investor sentiment into risk considerations and argued that changes in investor sentiment lead to asset price volatility through channels that affect systemic risk (De Long et al., 1990). Baker and Bloom (2013) analyzed changes in investor sentiment and stock returns and found that when sentiment is at a low stage, investors are more sensitive to fluctuations in asset prices; that is, the magnitude of the impact of sentiment on price fluctuations differs between low- and high-sentiment periods (Baker & Bloom, 2013). Zhang and Wang (2013) used multiple regression analysis and impulse response tests and found that investor sentiment has a significant positive effect on stock market volatility. However, some scholars believe that there is a negative correlation between investor sentiment and the stock market (Zhang & Wang, 2013). Baker, Bloom, and Davis (2016) constructed investor sentiment indices for six developed countries together with a global sentiment index, studied the influence of the global and domestic indices on the stock market, and found that both kinds of sentiment index have a negative impact on stock market returns, with the reaction being more obvious for low-value stocks with low profitability and small total issuance (Baker et al., 2016). Yin and Wu (2019) used data mining methods to construct high-frequency indicators of investor sentiment to study its impact on the stock market, and found that investor sentiment helps predict changes in stock returns, but that this predictive ability differs significantly with the time of day and with the overall state of the stock market (Yin & Wu, 2019). According to the existing literature, research conclusions on the effect of investor sentiment on stock market returns and volatility have not been unified, mainly because the influence of sentiment changes is bound to vary under different external environments. In 2003, there were 33 Chinese securities fund companies with total assets of less than 8 billion yuan.
However, by the end of 2017, there were more than 100 securities fund companies with total assets of 165.993 billion yuan. It can be seen that, after entering a period of extraordinary development, the number and scale of Chinese fund companies increased rapidly, achieving a substantial leap, and most fund companies are listed on the Shanghai Stock Exchange or the Shenzhen Stock Exchange.
In addition to discussing the relationship between investor sentiment and the stock market under different market states, scholars' research also focuses on the construction of sentiment indicators. With the continuous development of information technology, the application of various text data mining methods has positively promoted the scientific rigor of index construction. This paper uses the classic CICSI index from the Guotaian database as a proxy for investor sentiment and uses the ISI index to verify the robustness of the research conclusions in the robustness test. Subject to data availability, the investor sentiment indicators used in this paper are monthly data from February 2003 to December 2018. To better display the trends of the CICSI index and the ISI index over nearly 20 years, they are shown in a trend chart. As can be seen from Figure 1, the CICSI index uses the left axis as its main axis and takes larger values, while the ISI index uses the right axis and takes smaller values. Because the calculation bases and methods of the two differ, their absolute values are not comparable; only the relative trends are. As can be seen from the chart, around the 2007 global financial crisis and the 2015 stock market crash, both indexes showed abnormal fluctuations.
1. Basic Model Settings
To better explore the heterogeneous effects of investor behavior on the stock market under economic policy uncertainty, this paper takes the A-share listed companies on the Shenzhen Stock Exchange and the Shanghai Stock Exchange as the research subjects and applies a panel data model for the analysis. Following the panel model setting in the fourth chapter of this paper, the institutional investor shareholding ratio, individual investor sentiment, and the interactions of each with the economic policy uncertainty index are included in the model.
Variable Selection
This paper selects the monthly stock price volatility of China's A-share listed companies on the Shanghai and Shenzhen Stock Exchanges as the explained variable. Based on the above research and the availability of data, the period from February 2003 to December 2018 was selected as the sample interval. ST, financial, and delisted companies, as well as some companies with missing data, were excluded. For stock price volatility, the standard deviation of each company's daily closing prices within a month is used as the monthly historical volatility indicator. The data come from the Wind database and the Guotaian database. The key explanatory variable, the EPU index, uses as proxies the monthly SCMPEPU index developed by Baker et al. (2016) from the South China Morning Post and the monthly MLEPU index developed from the two mainland newspapers, Guangming Daily and People's Daily. In this paper, the MLEPU index is the main research variable, and the SCMPEPU index is used in the robustness test. The institutional investors' shareholding ratio is monthly data for each listed company. The investor sentiment indicator uses the CICSI index as the main measure and the ISI index as the substitute indicator for the robustness test. These data come from the RESSET database.
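A minimal sketch of constructing the monthly volatility measure described above, i.e., the standard deviation of daily closing prices within each firm-month, is given below (Python with pandas); the firm codes and price series are synthetic illustrations, not the Wind or Guotaian data used in the study.

```python
import numpy as np
import pandas as pd

# Synthetic daily closing prices for two firms (illustrative; not the study's data).
rng = np.random.default_rng(0)
dates = pd.bdate_range("2018-01-01", "2018-03-31")
df = pd.DataFrame({
    "date": np.tile(dates, 2),
    "firm": np.repeat(["000001", "600000"], len(dates)),
    "close": np.concatenate([
        10 + rng.normal(0, 0.2, len(dates)).cumsum(),
        25 + rng.normal(0, 0.5, len(dates)).cumsum(),
    ]),
})

# Monthly volatility: standard deviation of daily closing prices within each firm-month.
df["month"] = df["date"].dt.to_period("M")
monthly_vol = (df.groupby(["firm", "month"])["close"]
                 .std()
                 .rename("volatility")
                 .reset_index())
print(monthly_vol)
```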
Hausman Test
The research uses panel data from February 2003 to December 2018 that include individual stock volatility, macroeconomic variables, and firm-level data. The models commonly used to analyze panel data are the random effects model and the fixed effects model, and the choice between them is determined by the Hausman test. As the results of the Hausman test in Table 1 show, the p-value is zero, which significantly rejects the null hypothesis that the random effects model is more efficient, so the fixed effects model is more suitable for the research content of this paper. To ensure the rigor of the research process, this paper controls for time effects and individual effects in the empirical analysis, i.e., a two-way fixed effects model, and, considering the differences between industries, also controls for industry effects.
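Given the fixed effects specification selected by the Hausman test, a minimal sketch of the estimation is shown below (Python). It applies the within (firm-demeaning) transformation and includes the EPU interactions with institutional ownership and with investor sentiment; the variable names and synthetic panel are illustrative assumptions, and the time and industry effects, the Hausman test itself, and clustered standard errors used in the full analysis are omitted for brevity.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Synthetic firm-month panel; variable names are illustrative assumptions.
n_firms, n_months = 50, 24
df = pd.DataFrame({
    "firm": np.repeat(np.arange(n_firms), n_months),
    "month": np.tile(np.arange(n_months), n_firms),
})
df["ln_epu"] = np.tile(rng.normal(4.5, 0.3, n_months), n_firms)      # macro series, common to firms
df["inst_share"] = rng.uniform(0.0, 0.6, len(df))                    # institutional ownership
df["sentiment"] = np.tile(rng.normal(50.0, 10.0, n_months), n_firms)
df["volatility"] = (0.10 * df["ln_epu"] - 0.05 * df["inst_share"]
                    + 0.01 * df["sentiment"] + rng.normal(0.0, 0.1, len(df)))

# Interaction terms of EPU with institutional ownership and with investor sentiment.
df["epu_x_inst"] = df["ln_epu"] * df["inst_share"]
df["epu_x_sent"] = df["ln_epu"] * df["sentiment"]

y_col = "volatility"
x_cols = ["ln_epu", "inst_share", "sentiment", "epu_x_inst", "epu_x_sent"]

# Firm fixed effects via the within (entity-demeaning) transformation.
# The study additionally controls time and industry effects and justifies fixed
# effects with the Hausman test; those steps are omitted in this sketch.
cols = [y_col] + x_cols
demeaned = df[cols] - df.groupby("firm")[cols].transform("mean")

X = demeaned[x_cols].to_numpy()
y = demeaned[y_col].to_numpy()
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
for name, b in zip(x_cols, beta):
    print(f"{name:>12s}: {b:+.4f}")
```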
Sample Empirical Analysis
In the specific analysis, the SCMPEPU index is selected as the proxy for economic policy uncertainty and CICSI as the proxy for investor sentiment. At the macro level, we control for factors such as the GDP growth rate, the macroeconomic sentiment index, and overall market volatility; at the micro level, we control for factors such as the return on net assets, firm size, and equity concentration. The specific results are shown in Table 2. To reduce heteroscedasticity, smooth the volatility trend, and better capture the relationships between variables, the economic policy uncertainty index and firm size are log-transformed. The variables in the table correspond to their respective coefficients. Column Ⅰ reports the regression results without considering the institutional investors' shareholding ratio and investor sentiment; the EPU coefficient is 0.102 and significantly positive at the 1% level, so an increase in economic policy uncertainty leads to an increase in individual stock volatility. Column Ⅱ reports the regression results considering the proportion of institutional investors in corporate equity; the EPU coefficient is unchanged, and the coefficient of the institutional investors' shareholding ratio is −0.056, significantly negative at the 5% level, meaning that when the proportion of institutional investors in corporate equity increases, the volatility of individual stocks declines and institutional holdings help stabilize stock returns. Column Ⅲ of Table 2 reports the results considering investor sentiment; the coefficient of investor sentiment is 0.009, significantly positive at the 1% level, which means that when investor sentiment is high, stock volatility increases. Column Ⅳ of Table 2 reports the results considering both the institutional investor ratio and overall investor sentiment; the EPU coefficient does not change markedly and remains significantly positive at the 1% level, the coefficient of the institutional investors' shareholding ratio remains significantly negative at the 5% level, and the investor sentiment coefficient remains significantly positive at the 1% level. In general, the economic policy uncertainty index and investor sentiment have positive effects on individual stock volatility, while the institutional investors' shareholding ratio helps ease stock volatility.
Considering that economic policy uncertainty may be related to institutional investors' investment decisions and to investor sentiment, these factors may affect each other. This paper therefore focuses on economic policy uncertainty and its interactions with the two, and on their impact on individual stock volatility. In columns Ⅰ, Ⅱ and Ⅲ of Table 3, the SCMPEPU index is used as the indicator of economic policy uncertainty, while in columns Ⅳ, Ⅴ and Ⅵ, to ensure the robustness of the results, the MLEPU index is used as an alternative to the SCMPEPU index. The coefficient of the interaction between the EPU index and the institutional investors' shareholding ratio is 0.009 in column Ⅱ and 0.053 in column Ⅴ; both are positive, suggesting that an increase in economic policy uncertainty raises the marginal effect of institutional ownership on individual stock volatility, but neither coefficient is significant, indicating that the institutional investors' shareholding ratio has a similarly moderating effect on individual stock volatility once economic policy uncertainty is considered. The coefficient of the interaction between the EPU index and investor sentiment is 0.005 in column Ⅲ and 0.004 in column Ⅵ, and both are significantly positive at the 1% level. This shows that an increase in the level of economic policy uncertainty increases the marginal effect of investor sentiment on individual stock volatility; that is, when economic policy uncertainty is high, rising investor sentiment leads to stronger volatility of individual stocks.
Robustness Test
The focus of this paper is, from the perspective of investors and considering economic policy uncertainty, the influence of corporate institutional investors' shareholding and investor sentiment on individual stock volatility. In the robustness tests, the indicators of the economic policy uncertainty index and of investor sentiment are replaced to verify the robustness of the conclusions. The specific results are shown in Table 4.
Column Ⅰ of Table 4 reports the regression results for the original data and is mainly used for comparison with the robustness tests. In column Ⅱ, ISI is the surrogate index for investor sentiment; the ISI coefficient is 0.002 and significantly positive at the 1% level, matching the coefficient value and significance of the CICSI index. In column Ⅲ, in addition to using ISI as the surrogate indicator for investor sentiment, the MLEPU index is used as the surrogate indicator for economic policy uncertainty; the coefficients of the two are consistent in sign and significance with those of the main analysis, confirming the robustness of the research conclusions. In columns Ⅳ and Ⅴ, the SCMPEPU index and the MLEPU index lagged two periods are used as substitute indicators for economic policy uncertainty, and the regression results remain stable.
Summary
This paper discusses the impact of the institutional investor structure and overall investor sentiment on individual stock volatility against the background of economic policy uncertainty, both with and without the uncertainty interactions considered. The full-sample analysis shows that an increase in the shareholding ratio of institutional investors helps reduce the volatility of individual stocks and plays a moderating role, while rising investor sentiment has a positive effect on individual stock volatility. This paper focuses on the two interaction terms of economic policy uncertainty with the institutional investors' shareholding ratio and with investor sentiment. The regression analysis finds that the moderating effect of the institutional investors' shareholding ratio on individual stock volatility is not significantly different once economic policy uncertainty is considered, whereas an increase in the level of economic policy uncertainty raises the marginal effect of investor sentiment on individual stock volatility: when economic policy uncertainty is high, rising investor sentiment has a stronger driving effect on individual stock volatility. To verify the correctness of the model and the robustness of the results, a series of alternative indicators for the key explanatory variables were compared, and the regression results were found to be robust regardless of which substitute variable was chosen.
"year": 2019,
"sha1": "8ac7bbd25632c9ace714d14e99cc02edf37720f0",
"oa_license": "CCBY",
"oa_url": "http://scipg.com/index.php/102/article/download/199/305",
"oa_status": "GREEN",
"pdf_src": "Anansi",
"pdf_hash": "56dd85dba8ae99111185c48d2e7c358887a4c3ad",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Economics"
]
} |
Improving Patient Satisfaction at Paediatric Outpatient Clinic Services, Hospital Tuanku Fauziah
This article aims to give an overview of the importance of customer satisfaction. In the healthcare setting, patient satisfaction remains a useful indicator of customer satisfaction and hence reflects the quality of healthcare services. Customer expectation and experience play an important role in determining customer satisfaction: customer satisfaction is the degree to which the products or services provided by an organisation meet expectations, together with how customers experience those products or services. This article emphasises the relationship of customer expectations and experience with customer satisfaction at outpatient clinic services, based on previous literature. The reliability and validity of the research instruments used to measure customer satisfaction in this study are discussed to ensure that the instrument has high reliability and validity, thereby showing that measuring patient expectation and patient experience is important in determining customer satisfaction at outpatient clinic services. The findings of this study reveal that all variables of the study have high reliability and validity.
Introduction
Patient satisfaction is a subjective and complicated phenomenon, yet it directly reflects the competency of a hospital or other healthcare facility and has been used globally. Satisfaction is used as an indicator of service delivery quality and helps healthcare providers and organisations build a better understanding of patients' needs and improve the services provided (Hussain et al., 2019). Many healthcare organisations around the world are moving towards patient-centred care, defined as "providing care that is respectful of and responsive to individual patient preferences, needs, and values and ensuring that patient values guide all clinical decisions" (Kuipers et al., 2019). Patient satisfaction surveys are increasingly recognised as a benchmark for measuring the success of the service delivery system in hospitals. Hence, patient satisfaction is considered a valid indicator of the quality of care.
Higher patient satisfaction benefits a healthcare organisation in terms of customer retention and customer loyalty, and it also provides consistent profitability, especially in the private healthcare sector. Furthermore, patient satisfaction increases staff morale and personal and professional satisfaction and reduces staff turnover, which leads to increased work productivity (Winland, 2000). The purpose of this article is to provide evidence that patient expectations and patient experience are determinants of patient satisfaction, and to demonstrate that the instrument used is highly reliable.
Problem Statement
Patient satisfaction is a subjective and complicated phenomenon, yet it directly reflects the competency of a hospital or other healthcare facility and has been used globally (Al-Neyadi et al., 2018; Yunus et al., 2013). Many factors affect patient satisfaction, including waiting time, consultation time, the attitudes of doctors and other health staff, service costs, hospital infrastructure, physical comforts such as the cleanliness of the facility, emotional support, respect for patient preferences, and the availability of medicine (Sodani et al., 2010).
The increasing number of paediatric patients at the Paediatric Outpatient Clinic gives rise to various factors that could lower patient satisfaction, such as longer waiting times, shorter consultation times, and increased staff workload leading to increased stress and thus reduced staff performance or poorer service attitudes. The children visiting the Paediatric Outpatient Clinic are also at distinctive physical, emotional and mental developmental stages, making it difficult to determine the quality of healthcare outcomes (Sengupta et al., 2019). Most paediatric patients depend on their parents or guardians, so any issues or complaints will be expressed by the parents, although some children are independent enough to voice their own opinions. Therefore, satisfaction with paediatric healthcare is the result of the complex and remarkable relationship shared by the physician, the parent/caregiver and the patient (Sengupta et al., 2019).
Problem Diagnosis
The Fishbone diagram in Figure 1.1 identifies the root causes and issues that potentially contribute to lower patient satisfaction at the Paediatric Outpatient Clinic services, HTF.
Figure 1.1: Fishbone Diagram of factor affecting patient satisfaction at Paediatric Outpatient Clinic Services, HTF
The complicated work processes, most of which were performed manually, as well as the non-systematic appointment system, were identified as the main causes of ineffective services. Ineffective healthcare provider-patient communication is also a known contributing factor to customer dissatisfaction. Other contributing factors are inadequate human resources, inadequate basic facilities, and an unappealing working environment. Currently, there are no proper measurement tools to measure customer satisfaction with the clinic's services.
Significance of the Research
The importance of this study lies in understanding issues related to healthcare service quality as well as contributing to the development of knowledge in this field of study. Over the past 20 years, the patient satisfaction survey has become an important tool for identifying gaps and developing effective action plans for improving the quality of healthcare services (Al-Abri & Al-Balushi, 2014). As Malaysian healthcare services move towards patient-centred care, patient satisfaction studies are very important because they reflect patients' involvement in decision-making to improve healthcare services. The findings of this study will give the organisation the opportunity to improve its performance.
Furthermore, the instrument used in this study could become a standard tool for future customer satisfaction surveys of outpatient clinic services, as well as a reference for other researchers who wish to study a similar research field in the future, so that further improvements can be implemented.
Literature Review
The main objective of any healthcare organisation is to deliver the best possible health care to patients. The study of patient satisfaction with the healthcare services received is of utmost importance in the context of providing quality patient care services (Rajkumari & Nula, 2017). Satisfaction is used as an indicator of service delivery quality and helps healthcare providers and organisations build a better understanding of patients' needs and improve the services provided (Hussain et al., 2019). Patient satisfaction surveys are increasingly recognised as a benchmark for measuring the success of the service delivery system in hospitals. There are many definitions of customers in healthcare organisations. McKillip (2017) distinguished external and internal healthcare customers: external customers include patients, patients' families and relatives, referring physicians, doctors' offices, blood donors, insurance companies and other agencies, while internal customers include all the staff in the healthcare organisation, such as physicians, nurses, other professionals and committees. Meanwhile, Zhang et al. (2017) described patients as customers in their study of customer identification in healthcare. In this study, patients were classified as customers of the specialist outpatient clinic, Hospital Tuanku Fauziah.
Customer Satisfaction
Customer satisfaction is defined as the perception developed when consuming a product or service as to whether it meets the consumer's expectations (Chalikias et al., 2016). In the healthcare industry, patients are, in general, the consumers of healthcare services. Previous studies have highlighted various factors affecting customer satisfaction in healthcare organisations, including hospital and clinic services. Service quality has been identified as the most important factor determining customer satisfaction in healthcare (Allahham, 2013; Batbaatar et al., 2017). There are many approaches to evaluating service quality in the healthcare setting. The most widely used is the SERVQUAL model, which consists of five dimensions: tangibles, involving external factors such as the physical facility, equipment, and staff appearance; reliability, the ability to provide services to customers (patients) accurately, as promised; responsiveness, the attitude of healthcare providers who provide prompt service to the patient; assurance, the knowledge and courtesy of healthcare providers and their ability to convey trust and confidence in their qualifications and competence; and finally empathy, the individual attention and caring attitude towards each patient (Hafiz et al., 2011; Lee & Kim, 2017). These dimensions have therefore been used as a guide for measuring customer satisfaction in various healthcare organisations.
Customer Expectation and Customer Satisfaction
Customer expectation is the customer's perception of whether the product or service provided meets their needs, exceeds them, or falls below their assumptions. Customer expectation is subjective, differs between customers, and is closely related to customer satisfaction. Previous studies have accentuated the significant effect of customer expectation on customer satisfaction (Almsalam, 2014; Chan et al., 2003). Customers may have low satisfaction if the services received did not meet their expectations; in contrast, customers are satisfied if the services met or exceeded their expectations. The healthcare industry is highly competitive, and with improving socio-economic conditions, education, and advancing technology, customers' demands and expectations are also rising. Therefore, customer satisfaction is increasingly recognized as an established indicator for measuring the success of healthcare services (Mohd & Chakravarty, 2014).
Customer Experience and Customer Satisfaction
A positive customer experience is crucial and a critical component in ensuring customer loyalty and retention and in encouraging brand reputation. In the healthcare setting, the patient experience is the patient's or customer's interaction with the healthcare system, including care from doctors, nurses, and other staff, facilities, and other services. Bowling et al. (2012) described patients' experiences as "their direct, personal observations of their healthcare" and measured patient experience in terms of whether patients' expectations were met. Enhancing the patient's experience is associated with positive quality outcomes, including patient safety and clinical effectiveness (Ahmed et al., 2014). Expectations of experience include cleanliness of the facilities; clear information and signage; convenient and punctual appointments; helpful reception staff; knowledgeable and reassuring doctors who treat patients with respect and empathy; and patient-centred care, whereby patients receive thorough information on their health condition and on the benefits and side effects of treatment and are allowed to be involved in treatment decisions. Hence, customer satisfaction is an outcome measure of a patient's experience of care, in addition to clinical outcomes and confidence in the health system (Larson et al., 2019). Bleich et al. (2009) reported that patient experience was significantly associated with satisfaction with the healthcare system and explained 10.4% of the variation in the concept of satisfaction. Furthermore, a good understanding of the patient experience provides clear guidance for further research, as well as guiding healthcare policymakers towards consistent and sustainable improvements in the quality of medical care (Oben, 2020).
Methods
This study uses a quantitative method. It involved 43 patients attending the Medical Outpatient Clinic, which has a setting similar to the Paediatric Outpatient Clinic. The researcher obtained permission to conduct the study from the Head of the Medical Department before the study commenced. The questionnaire was distributed among patients attending the Medical Outpatient Clinic. To further analyse the reliability of the questionnaire, the researcher conducted a reliability analysis in SPSS 23, in which two variables, patient expectation and patient experience, were tested. Table 1 exhibits the variables of the instrument used in this study.
Results of Normality and Reliability Test
The results of data assessment for normality and reliability of the instrument are discussed below:
Normality Test
Assessment of data normality is essential to determine whether the data set is normally distributed and is an underlying assumption for using parametric statistical tests.
The normality test for the pilot study was statistically analysed using SPSS 23. The z values for the patient expectation variable in the pilot study ranged from -1.67 to +1.12, while the z values for the patient experience variable ranged from -2.02 to +1.34; z values between -4.0 and +4.0 are acceptable (Chua, 2011). Normality was also tested with the numerical statistical methods shown in Table 2, namely skewness and kurtosis. For a normal data distribution, the skewness and kurtosis values must lie between -2.0 and +2.0 (Chua, 2013). In this study, the skewness and kurtosis values of the patient expectation variable were -0.605 and -0.194, while those of the patient experience variable were -0.547 and -0.213. Therefore, both variables are normally distributed.
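For readers who do not use SPSS, the same screening can be reproduced with standard statistical libraries. The sketch below computes standardized (z) values, skewness, and excess kurtosis for a variable and checks them against the ±2.0 rule of thumb cited above; the variable names and sample values are hypothetical, not the study's data.

```python
import numpy as np
from scipy import stats

def normality_summary(scores, label):
    """Report the z-value range, skewness, and excess kurtosis for one variable."""
    scores = np.asarray(scores, dtype=float)
    z = (scores - scores.mean()) / scores.std(ddof=1)   # standardized values
    skew = stats.skew(scores, bias=False)
    kurt = stats.kurtosis(scores, bias=False)            # excess kurtosis (normal = 0)
    ok = abs(skew) <= 2.0 and abs(kurt) <= 2.0           # rule of thumb used in the study
    print(f"{label}: z range [{z.min():.2f}, {z.max():.2f}], "
          f"skewness {skew:.3f}, kurtosis {kurt:.3f}, "
          f"{'approximately normal' if ok else 'possibly non-normal'}")

# Hypothetical summed Likert scores per respondent
expectation = [52, 48, 55, 60, 45, 58, 50, 47, 53, 61, 44, 57]
experience = [49, 51, 46, 59, 43, 55, 48, 50, 54, 42, 56, 47]
normality_summary(expectation, "Patient expectation")
normality_summary(experience, "Patient experience")
```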
Reliability
Research reliability is the degree to which the research method produces stable and consistent results. If the same result can be consistently achieved by using the same methods under the same circumstances, the measurement is considered reliable. Reliability is concerned with the consistency of questionnaires (Saunders et al., 2009). In this study, internal consistency is used for the reliability measure for the questionnaire.
Cronbach's Alpha
Internal consistency is estimated via the Cronbach's alpha coefficient, the most commonly used index of the internal consistency of an instrument (Heale & Twycross, 2015). In general, a score of 0.7 or higher indicates acceptable reliability.
To further analyse the reliability of the questionnaire, a reliability analysis was conducted in SPSS 23, in which two variables, patient expectation and patient experience, were tested. The Cronbach's alpha value for the patient expectation variable in this study was 0.940 and the value for the patient experience variable was 0.898 (Table 3.10); therefore, both variables showed excellent reliability.
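Cronbach's alpha can also be reproduced directly from raw item responses with the standard formula, alpha = k/(k-1) x (1 - sum of item variances / variance of the total score). The minimal sketch below illustrates the computation with hypothetical Likert responses; it is not the study's dataset.

```python
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = questionnaire items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses (6 respondents x 4 items)
responses = [
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
    [3, 4, 3, 3],
]
print(f"Cronbach's alpha = {cronbach_alpha(responses):.3f}")  # 0.7 or higher is usually deemed acceptable
```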
Conclusion
This article provides an overview of the importance of positive customer expectations and the customer experience in determining customer satisfaction. Customer satisfaction in healthcare services is the patient's evaluative response to the service quality of the healthcare provider, based on their consumption experience. This indicates that customer satisfaction is an important element in measuring service quality in any organization or business, including healthcare services. Furthermore, the results of this study demonstrate that the instrument is reliable and can be utilized in studying the relationship between customer expectation and customer experience in determining customer satisfaction. Finally, the study findings also contribute to the knowledge in the existing literature in this field and will benefit other researchers as a future reference.
Acknowledgement
I wish to express my sincere gratitude to Universiti Teknologi Malaysia (UTM) and Azman Hashim International Business School for the opportunity to conduct this action research as part of the requirements of the Master of Business Administration program, and for providing guidance and assistance. I would like to thank the Ministry of Health Malaysia (MOH) and Hospital Tuanku Fauziah, especially the staff and patients of the Paediatric and Medical Outpatient Clinics, for their support and willingness to participate in this research. Their involvement and contribution meant a great deal during the data collection process, through which I managed to collect relevant, high-quality data and suggestions despite the COVID-19 pandemic. Lastly, I would like to express my gratitude to Dr Siti Akma bin Ishak, the Head of the Paediatric Department, and my supervisor, Dr Beni Widarman bin Yus Kelana, for their invaluable guidance and assistance throughout this research process. I am truly honoured to receive their support. | 2021-10-15T16:10:05.767Z | 2021-08-14T00:00:00.000 | {
"year": 2021,
"sha1": "c4985bbd0b855c62bb92b88e0c07c860627239af",
"oa_license": "CCBY",
"oa_url": "https://hrmars.com/papers_submitted/10826/improving-patient-satisfaction-at-paediatric-outpatient-clinic-services-hospital-tuanku-fauziah.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "b08000cfd075342e063e4eef07170dd5f407df1b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
250568570 | pes2o/s2orc | v3-fos-license | Effectiveness of Two Stress Reduction Interventions in Patients with Chronic Diabetic Foot Ulcers (PSY-DFU): Protocol for a Longitudinal RCT with a Nested Qualitative Study Involving Family Caregivers
Diabetic foot ulcer (DFU) is the leading cause of lower-limb amputations, with a significant impact on patients, families, and society. Since DFU medical treatments represent a major socioeconomic burden, cost-effective interventions are needed. This trial aims to assess the effectiveness of a muscle relaxation intervention compared to a hypnosis intervention versus active and passive control groups on DFU healing, physiological indicators of healing prognosis, and quality of life (QoL) in clinically distressed patients with a chronic DFU. A multicenter, randomized controlled trial with three assessment moments (baseline, two months post-intervention, and four months follow-up) will be conducted. Approximately 170 patients will be randomized and allocated to either treatment or control groups. Primary outcomes will be DFU healing, physiological indicators of healing prognosis, and QoL. Secondary outcomes will include perceived stress, psychological morbidity, and DFU representations. The efficacy of sessions on DFU healing will be qualitatively assessed in 12 patients allocated to the treatment and active control groups, as well as their family caregivers. This study will provide evidence regarding the effectiveness of two psychological interventions for the DFU healing process and the QoL of patients, with direct clinical relevance regarding DFU treatment and recurrence.
Background
Diabetes represents a growing public health concern, with a rising global incidence over the last three decades [1] due to lifestyle changes and aging. According to the International Diabetes Federation, in 2021 an estimated 6.7 million adults died as a result of diabetes or its complications. Over time, patients with diabetes have an increased risk of developing serious comorbidities, such as cardiovascular disease, blindness, kidney failure, and foot ulcerations [2].
Diabetic foot disease is one of the most serious complications of diabetes, impacting nearly 15% of all patients [3]. A diabetic foot ulcer (DFU) is a full-thickness wound below the ankle caused, in most cases, by poor glycemic control, calluses, ill-fitting footwear, underlying neuropathy, peripheral vascular disease, or improper foot care. Around 11-14% of patients diagnosed with diabetes worldwide will develop DFUs [4], which are the leading cause of lower-limb amputations in approximately 80% of these patients [5]. According to Zhang et al., the current DFU prevalence worldwide is 6.7%, while in Europe it is 5.1% [6].
In addition to the devastating consequences of DFU development for patients and their families, the socioeconomic cost involved has become a burden. From 2007 to 2021, the direct cost of DFU treatment worldwide increased dramatically, from USD 232 to 966 billion. In Europe alone, the total medical costs of managing DFUs in 2010 were estimated at USD 105.5 billion, and by 2030 they are expected to reach USD 124.6 billion [2]. Thus, planning cost-effective interventions focused on DFU recovery becomes essential. Such interventions also benefit the environment, through fewer trips to DFU clinics, fewer hospitalizations, and less clinical waste produced.
It is known that adherence to self-care behaviors and treatment is essential to wound healing. However, psychological factors, such as depression or anxiety, may also negatively influence DFU healing via psychoimmunological effects [7,8]. Furthermore, DFU is associated with distress [9], and negative emotions contribute to prolonged infections, delayed wound healing, and poor quality of life (QoL), which is associated with low treatment responses and low remission rates, a major health concern [7,10,11].
Previous literature has emphasized the impact of stress on wound healing, almost exclusively in acute wounds, e.g., [12]. In fact, stress raises the level of cortisol, which has a negative impact on the immune system, particularly on wound repair due to immunity suppression [13,14]. By increasing the release of proinflammatory cytokines during tissue repair and driving tissue oxygen levels lower, psychological stress delays wound healing [15,16].
Stress may also impact the expression and function of microRNAs (miRNAs), important regulatory molecules that can be used as biomarkers for diagnosis or progression of complications from diabetes due to their ability to fine-tune cellular responses [17]. Thus, by modulating miRNA biogenesis, expression, and complex activity, stress can cause important changes in metabolism that may hamper DFU healing [18].
Glycemic control is one of the main strategies for managing diabetes, reducing diabetes complications and the risk of hypoglycemia. Hemoglobin A1c (HbA1c) testing represents the best method to monitor glycemia in patients with diabetes, as it reflects blood glucose levels over several weeks. Although distress has been associated with poor glycemic outcomes [19], the role of HbA1c in wound healing is not consensually established: while some studies show direct associations between HbA1c levels and the wound-healing rate, e.g., [20], others have found no association between this biomarker and wound outcomes, e.g., [21].
Since there is evidence that psychological distress negatively affects wound healing [11], it is expected that stress-reducing interventions will have positive implications for DFU recovery. Adjuvant interventions such as relaxation training techniques have shown promising results in patients with diabetes [22,23] and patients with chronic DFUs [24,25]. Previous research has also indicated hypnosis as an effective adjunct treatment in the management of diabetes, contributing to reduced blood glucose levels, better metabolic control, and increased blood flow to the extremities, thereby decreasing the risk of diabetic foot problems [26][27][28][29].
Although there is evidence suggesting the benefits of psychological interventions to patients with DFUs, the effectiveness of stress reduction interventions in DFU recovery, physiological indicators of wound healing, and patients' QoL improvement has not been established. This is mainly because there are limited data evaluating the efficacy of psychological treatments in this area of research [30]. Thus, randomized controlled trials of low-cost psychological interventions to focus on the promotion of QoL and DFU healing are required. This paper describes the development of the PSY-DFU study protocol, which focuses on two interventions to reduce and manage stress and summarizes the advantages and limitations of this fully scripted treatment approach.
Objectives
The present study aims to:
1. Assess the effectiveness of a muscle relaxation intervention with guided imagery (TG1) compared to hypnosis with guided imagery (TG2) versus a neutral guided imagery placebo (ACG) and a group that does not receive any psychological intervention (PCG) regarding DFU healing, physiological indicators of healing prognosis, and QoL in patients with clinical distress and a chronic DFU.
2. Understand the perspectives of patients and family caregivers on the efficacy of TG1 and TG2 interventions versus ACG sessions for DFU healing.
Primary Specific Aims
The primary specific aims of this RCT are: (i) to compare the impact of both treatment groups (TG1 and TG2) in regard to DFU healing, physiological indicators of healing prognosis (biochemical parameters, inflammatory and angiogenic markers, miRNAs, and immune cells), and QoL, compared to control groups; (ii) to compare patients pre and post-intervention in the TG1 and TG2, controlling for patients' health literacy, clinical characteristics (e.g., duration of diabetes and DFU, type of diabetic foot), and sociodemographic variables (e.g., gender, age, education, socioeconomic level); and (iii) to compare the efficacy of TG1 and TG2 versus ACG to DFU healing, according to participants and their respective family caregiver's perceptions.
Study Design
The current study is designed as a longitudinal, participant-blinded, sham-controlled, cluster randomized controlled trial (RCT) with a nested qualitative evaluation. The RCT study includes three-assessment periods over the course of six months, while the nested qualitative study involves an additional semi-structured interview two weeks after completing the treatment or placebo sessions.
The protocol for this RCT is based on the Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) 2013 Checklist [31] (Table 1). The study design and flowchart of the protocol, following the CONSORT 2010 standards [32], are summarized in Figure 1. The study will be conducted in three major hospitals in the North of Portugal with multidisciplinary diabetic foot consultations, and was approved by the Ethics Committee of the three hospitals. The study is also registered on the ClinicalTrials.gov platform since the 7th of January 2021 (Registration number: NCT04698720).
Table 1 summarizes the assessment schedule: clinical data (PEDIS), DFU evolution/healing (a), impact of DFU on QoL, and mental and physical QoL are assessed at all three assessment moments. Abbreviations: QoL, quality of life; PMR + GI, progressive muscular relaxation with guided imagery; H + GI, hypnosis with guided imagery; ACG, active control group; PCG, passive control group; PEDIS, Perfusion, Extent, Depth, Infection, and Sensation; DFU, diabetic foot ulcer. (a) At T1 and T2, when the DFU is healed, DFU progression and DFU representations are not assessed. (b) Blood pressure and heart rate are assessed before and after each intervention (PMR + GI and H + GI) and placebo (ACG) session.
Participant Recruitment and Selection Criteria
Patients with a chronic DFU attending the first consultation of the multidisciplinary diabetic foot ulcer outpatient clinic of three major hospitals in Northern Portugal will be recruited. At the first consultation, researchers (Researchers 1, 2, and 3) will collect the clinical information of each patient to identify those who meet the eligibility criteria. Patients will be consecutively enrolled in the study.
The inclusion criteria are: (i) being 18 years old or more, (ii) having one or two diabetic chronic ulcers (a non-healing ulcer for six or more weeks and less than 12 weeks; in case of patients with two active chronic DFUs, the ulcer with the largest area will be selected as the index ulcer) at the time of baseline assessment, (iii) reporting clinical levels of psychological distress, and iv) providing written informed consent. Clinical distress is assessed according to the Hospital Anxiety and Depression Scale (HADS) [33] and the Perceived Stress Scale (PSS) [34], with patients scoring ≥11 on the HADS subscales or ≥13 (male patients) and ≥17 (female patients) on the PSS evaluated as being clinically distressed.
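The distress criterion above amounts to a simple decision rule. A hedged sketch of how it could be encoded during screening is shown below; the cut-offs are taken from the text, while the function and argument names are hypothetical.

```python
def is_clinically_distressed(hads_anxiety: int, hads_depression: int,
                             pss_total: int, sex: str) -> bool:
    """Return True if the trial's distress criterion is met:
    HADS anxiety or depression subscale >= 11, or PSS >= 13 (male) / >= 17 (female)."""
    pss_cutoff = 13 if sex.lower() == "male" else 17
    return hads_anxiety >= 11 or hads_depression >= 11 or pss_total >= pss_cutoff

# Example: a male patient with HADS-A 8, HADS-D 9, and PSS 14 qualifies on the PSS criterion
print(is_clinically_distressed(8, 9, 14, "male"))  # True
```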
Exclusion criteria include: (i) the DFU, at the time of baseline assessment, being a relapse; (ii) having three or more DFUs at the time of baseline assessment; (iii) having undergone a transplant; (iv) being on hemodialysis treatment; (v) having a cancer disease; (vi) having degenerative dementia or severe psychiatric illness (e.g., schizophrenia), (vii) receiving psychological counseling, or (viii) taking psychiatric medication during the study period.
For the qualitative study, participants who completed at least 75% of the intervention or placebo sessions (active control group) will be invited to participate, together with their family caregiver. Participants will be purposively selected according to the type of diabetic foot (neuropathic versus neuroischemic) and DFU progression (positive versus negative). The sample will also be selected according to participants' capacity to provide in-depth and richly textured information regarding the interventions' effectiveness, since qualitative purposive sampling is considered more efficient than random sampling [35]. Thus, the qualitative study sample will include four single-per-participant interviews per group, spanning the four conditions (4 conditions × 3 groups), for a total of 12 patient interviews and 12 caregiver interviews.
Randomization
All eligible patients will be randomly assigned to one of the four groups-TG1, TG2, ACG, or PCG-in varying size blocks so that, over time, all groups can have a similar number of participants [36,37]. An independent researcher (Researcher 4), unaware of the numeric coding for each group, will create the randomization sequence using an online random number generator (https://www.graphpad.com/quickcalcs/randomize1/, accessed on 15 December 2021) with a 1:1 allocation, using random block sizes of 12. Researchers 2, 3, and 4 are blind to the block size used in the randomization procedure.
A priori stratification was defined considering the three hospitals where data collection will take place, as well as two common comorbid conditions deemed as factors of poor prognostic outcomes in patients with DFUs, specifically chronic kidney disease (CKD) and peripheral arterial disease (PAD) [38,39], to ensure that distribution of patients between groups is balanced in terms of confounders. Thus, randomization is stratified according to (i) hospitals; (ii) CKD and its disease stages (without CKD/CKD stage 1 and 2/CKD stage 3 and 4); and (iii) PAD (without PAD/with PAD), in a total of 12 possible strata.
Participants' blinding will be maintained throughout the conduct of the trial, whereas Researchers 1, 2, and 3 will be aware of the allocation since they are responsible for and will conduct the TG1 and ACG sessions. Quantitative data analysts will be blinded to participant allocation. The randomization process will be conducted at the end of each week to reduce the time interval between the randomization and the beginning of interventions.
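Although the trial uses an online generator, the allocation logic (1:1:1:1 across the four arms, permuted blocks of 12, applied independently within each of the 12 strata) can be illustrated as follows. This is a minimal sketch under those assumptions, not the tool actually used, and it ignores the varying block sizes mentioned in the protocol.

```python
import random

ARMS = ["TG1", "TG2", "ACG", "PCG"]
BLOCK_SIZE = 12  # three slots per arm in each block

def allocation_sequence(n_blocks, seed=None):
    """Generate a permuted-block randomization list for one stratum."""
    rng = random.Random(seed)
    sequence = []
    for _ in range(n_blocks):
        block = ARMS * (BLOCK_SIZE // len(ARMS))  # balanced block of 12 assignments
        rng.shuffle(block)
        sequence.extend(block)
    return sequence

# One allocation list per stratum (hospital x CKD stage x PAD status), i.e., 12 strata
strata = [f"stratum_{i + 1}" for i in range(12)]
lists = {s: allocation_sequence(n_blocks=2, seed=i) for i, s in enumerate(strata)}
print(lists["stratum_1"][:12])  # first block of the first stratum
```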
Participant Timeline
Participants in the intervention and control groups will be assessed at the pre-intervention baseline (T0), at the end of the intervention/two months later at post-test (T1), and six months after T1 at the follow-up (T2). Data will be collected through face-to-face interviews conducted by health psychologists (Researchers 1, 2, and 3). Regarding the biochemical sample collection (biochemical parameter, inflammatory and angiogenic markers, miRNAs, and immune cells), a blood sample will be collected during the consultation at the T0, T1, and T2 assessment moments (Table 1).
Characteristics of Patients
Sociodemographic and clinical characteristics of all participants will be assessed.
Sociodemographic Characteristics
Sociodemographic data will be collected prior to intervention through the Sociodemographic Questionnaire, a questionnaire developed for the purposes of this study that includes variables such as gender, age, education, residence, marital and professional status, having (or not) an informal caregiver, socioeconomic level, smoking history, and alcohol intake, among others.
Clinical Information
Clinical data will be collected through the Clinical Questionnaire, developed specifically for this study, to be completed by the patients' physician or nurse in the three assessment moments. This questionnaire asks about the type and duration of diabetes, complications associated with diabetes, diabetic foot type, DFU location and duration, concomitant treatment, ulcer healing time, and new DFUs. The clinical questionnaire also includes the Perfusion, Extent, Depth, Infection, and Sensation (PEDIS) classification system, proposed by the International Working Group of the Diabetic Foot, thus gathering information regarding the five most important categories to evaluate the DFU stage and progression [40].
Measures of Primary Outcomes
Primary outcomes are DFU healing, physiological indicators of healing prognosis, DFU impact on QoL, and physical and mental QoL.
Degree of DFU Healing
Wound healing is defined as the complete epithelization of the wound and is assessed through the RESVECH 2.0-PT [41], a useful and validated tool to monitor the progression of chronic wounds of any etiology through six main parameters: wound area, depth and involved tissues, wound margins, type of tissue in the wound bed, levels of exudates, and presence of signs of infection/inflammations. This questionnaire will be filled out by the participant's physician or nurse at the end of consultations to monitor DFU progression. Scores range from 0 to 35, where zero indicates complete healing and 35 corresponds to the worse possible wound.
Physiological Indicators of Healing Prognosis
Physiological indicators include a biochemical parameter (HbA1c), inflammatory (IL-6, TNF-α, lymphocyte populations) and angiogenic (VEGF) markers, miRNAs (miRNA-21, miRNA-155), and immune cells (effector and naïve T lymphocytes). The quantification of HbA1c in plasma will be performed using a competitive inhibition enzyme immunoassay (Cloud Clone Corp.). The quantification of plasma levels of the inflammatory cytokine IL-6 and of tumor necrosis factor alpha (TNF-α) will be performed using a LEGENDplex™ Human Angiogenesis Panel 1 Mix and Match (9-plex). Blood lymphocyte populations will be assessed in whole blood by flow cytometry and an automated hematological cell counter. Levels of VEGF will be assessed in serum by ELISA (Cell Signaling Technology). miRNA-21 and miRNA-155 will be assessed in blood collected in PAXgene Blood RNA Tubes (Qiagen, Hombrechtikon, Switzerland), and the RNA will subsequently be converted into cDNA with the miScript Reverse Transcription Kit (Qiagen). Amplification of miR-21 will be performed on the LightCycler system with a SYBR Green PCR Kit. The control gene will be RNU6B. Immune cells will be expressed as the ratio of effector CD4/CD8 lymphocytes to naïve CD4/CD8 lymphocytes. Blood samples will be collected and processed in the clinical laboratory of two of the three hospitals where data collection will take place.
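The protocol specifies RNU6B as the control gene for the miRNA qPCR but does not state the quantification model. A common choice is relative quantification by the 2^-ddCt method, sketched below under that assumption with hypothetical Ct values; the function name and numbers are illustrative only.

```python
def relative_expression(ct_target, ct_reference, ct_target_cal, ct_reference_cal):
    """Relative quantification by the 2^-ddCt method.
    ct_target / ct_reference: Ct of the miRNA and of RNU6B in the sample of interest.
    *_cal: the same Cts in the calibrator sample (e.g., baseline, T0)."""
    d_ct_sample = ct_target - ct_reference
    d_ct_calibrator = ct_target_cal - ct_reference_cal
    dd_ct = d_ct_sample - d_ct_calibrator
    return 2 ** (-dd_ct)

# Hypothetical example: miR-21 at T1 relative to T0, normalized to RNU6B
fold_change = relative_expression(ct_target=24.1, ct_reference=20.0,
                                  ct_target_cal=25.3, ct_reference_cal=20.2)
print(f"miR-21 fold change vs. baseline: {fold_change:.2f}")
```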
Impact of the DFU on QoL
The impact of the DFU on patients' QoL will be assessed through the Diabetic Foot Ulcer Scale-Short Form (DFS-SF) [42], a self-report questionnaire to be fulfilled by participants to evaluate the effect of the DFU on patients' general QoL.
Physical and Mental QoL
Patients' physical and mental health-related QoL will be assessed through the Short-Form Health Survey (SF-36) [43], a self-report questionnaire administered to participants.
Measures of Secondary Outcomes
Data will also be used to assess the following secondary outcomes: perceived stress, psychological morbidity, and DFU representations.
Perceived Stress
The overall stress perceived by patients will be assessed through the Perceived Stress Scale (PSS) [34], a self-reported measure to be answered by participants.
Psychological Morbidity
Psychological morbidity or emotional distress will be assessed through the total score of the Hospital Anxiety and Depression Scale (HADS) [33], comprising both anxiety and depression subscales. HADS is also answered by participants.
DFU Representations
Patient representations regarding the DFU will be evaluated through the Illness Perception Questionnaire-Brief (IPQ-B) [44], a self-report scale administered to participants.
Other Outcome Measures
Health literacy, blood pressure, heart rate, time to complete wound healing, and time to favorable healing prognosis will also be assessed.
Health Literacy
Personal health literacy is evaluated through the Medical Term Recognition Test (METER) [45], a widely used health literacy self-assessment measure administered to participants.
Blood Pressure
Systolic and diastolic pressure, in millimeters of mercury (mmHg), will be assessed through a validated and certified blood pressure measuring device.
Heart Rate
Heart rate, in beats per minute (bpm), will be assessed through a validated and certified blood pressure measuring device.
Time to Complete Wound Healing
This variable corresponds to the time interval (days) between the baseline assessment and the complete DFU healing.
Time to Favorable Healing Prognosis
Based on the wound area, the time to a favorable prognosis will be assessed through the DFU area evolution (reduced versus increased, at the end of the participation in the study, i.e., T1 or T2 assessment moments).
Qualitative Assessment
Approximately 12 patient participants allocated to the TGs and ACG, together with their respective family caregivers, will be invited to take part in semi-structured interviews to share their perceptions on the efficacy of sessions for DFU healing. The interview guide consists of open-ended questions to be administered individually to patients and the family caregiver indicated by the patient. The script will be unchanged throughout the interviews. All interviews will be audio-recorded in digital audio and transcribed verbatim. The transcribed interviews will be anonymized to safeguard the confidentiality of data and participants.
Qualitative analysis of the interviews will be performed shortly after the transcription of the interviews. Interview contents will be inductively coded and theoretically analyzed through the thematic content analysis technique and later compared to the existing literature. To process, organize, code, and thematically analyze data, the NVivo software (QSR International PtyLtd, Melbourne, Australia) will be used. Researchers 1, 2, and 3 will be responsible for the interview implementation and the data analysis.
Trial Arms
All participants will receive standard of care medical/nursing treatment for DFU, according to the guidelines of the Portuguese General Health Direction [46] and the International Working Group on the Diabetic Foot [47].
Intervention Groups
Participants in the intervention groups are allocated in TG1 or TG2. Participants allocated to TG1 will receive four individual sessions of muscle relaxation with guided imagery (MR + GI), a technique of alternately tensing and relaxing muscle groups individually throughout the body. Relaxation intervention begins with diaphragmatic breathing, followed by Jacobson's progressive muscle relaxation, which involves the contraction and subsequent relaxation of the 16 muscle groups of the body (hand, forearm, biceps, upper forehead, eye, nose, mouth, jaw and throat, neck, shoulder, chest, stomach, thigh, leg, and foot). The contraction will be performed for seven seconds while the relaxation lasts for about 40-50 s. The relaxation of the foot muscle group will not be performed on the foot with the wound because dressing and bandages may restrict the foot movements and, together with the typical joint stiffness of the diabetic foot, will render difficult the performance of exercise. After the muscle relaxation exercises, the guided imagery focused on the DFU healing process will initiate. The participant will be instructed to think about his/her current state of health and to imagine the DFU as a decreasing dark area and the healing process as a light associated with pleasant sensations. After being trained by the Lead Researcher, Researcher 1, 2, or 3 will be responsible for conducting the TG1 sessions in the respective hospitals.
In the TG2, participants will benefit from four individual sessions of hypnosis with guided imagery (H + GI), conducted by three qualified hypnotherapists external to the research study. The first session begins with the Eye-Roll Test for Hypnotizability [48]. Each session follows the Hypnotic Protocol with the following steps: pre-talk/absorption/ratification/ alitiation/dissociation/awakening. All hypnotic sessions will train participants in the visual, auditory, and kinesthetic perception of ulcer healing, and will also promote medical treatment adherence.
The protocol for both treatment groups includes four scripted sessions, each one with duration of approximately 45 min, delivered once every two weeks, resulting in approximately a 2-month treatment course. Sessions will be conducted in a consulting room reserved by the hospital for the study, where patients will lie on a specialist couch.
Control Groups
Participants in the ACG will receive neutral guided imagery sessions, i.e., sessions focused on the patient's daily life before the DFU. Neutral sessions are conducted by Researchers 1, 2, and 3 following a scripted protocol that includes four biweekly sessions of approximately 45 min, each one dedicated to a different topic (family, work, friends, and leisure). Initially, the participant will be asked to choose a specific event related to the topic of the session, positive or negative, without telling the researcher which event he/she thought about. Then, the participant will answer several questions regarding the episode to promote a more detailed reconstruction of the event, being instructed to only think about the answers and not to reply orally. When the whole episode is remembered, the participant will be asked to tell what he/she imagined/remembered regarding each of the questions. The goal of the placebo sessions is to control for the effect of the received attention, from psychologists and hypnotherapists, on patients in the treatment groups. It is possible that the privileged contact may positively influence the patient's healing process. Therefore, the attention control condition will allow us to differentiate between the impact of interventions versus the attention given to the ACG.
Participants in the PCG will not receive any intervention besides the standard of care medical/nurse treatment. Both passive and active control patients will complete the same outcome questionnaires as the participants allocated to the interventions, at the same time intervals.
Adherence to the Treatment Plan
Patients will be fully informed about the study goals, procedures, and potential benefits that may arise from the results of the DFU treatment. Assessment moments, intervention, and placebo sessions will be scheduled for diabetic foot appointment days so that patients will not have to go to the hospital on purpose to participate in the study. Before the appointment day, Researchers 1, 2, and 3 will call patients to remind them of their appointment with the research team for the purpose of the study. Participants will be asked to complete the last assessment (T2) even if they miss the mid-term assessment (T1).
To engage the health professional staff (e.g., physicians, nurses, podologists), meetings will be held with the physicians responsible for the multidisciplinary consultation of diabetic foot and head nurses (in the three hospitals involved in the study), in order to introduce the study and clarify any questions that may arise.
Sample Size
Using Sakpal's formula [49] and according to the descriptive results of the pilot study [50], considering the difference in the mean (1.93) and standard deviation (6) of the treatment versus passive control groups, with a statistical power of 80% and a statistical significance level of 5%, a definitive RCT will require 152 participants. Considering a dropout rate for intervention sessions of 11%, a definitive RCT with four groups will require a sample size of 169 participants, with 42 patients per group [49].
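The reported figures can be approximately reproduced with the standard two-sample comparison-of-means formula, applied to the pilot values quoted above (mean difference 1.93, SD 6, two-sided alpha 0.05, power 80%) and inflated by the 11% dropout allowance. This is an illustrative re-derivation, not the authors' exact computation; note that the formula is usually quoted per group, whereas here it appears to have been applied to the total sample.

```python
import math
from scipy.stats import norm

def sample_size(diff, sd, alpha=0.05, power=0.80, dropout=0.11):
    """Two-sample comparison of means, inflated for expected dropout."""
    z_alpha = norm.ppf(1 - alpha / 2)   # about 1.96 for a two-sided 5% test
    z_beta = norm.ppf(power)            # about 0.84 for 80% power
    n = 2 * (z_alpha + z_beta) ** 2 * sd ** 2 / diff ** 2
    n = math.ceil(n)
    n_with_dropout = math.ceil(n * (1 + dropout))
    return n, n_with_dropout

print(sample_size(diff=1.93, sd=6))  # approximately (152, 169), matching the figures quoted above
```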
Data Analysis
The baseline data of the treatments and control groups will be compared using the chi-squared test for binary variables and the independent sample t-test for continuous variables. To prevent overestimating the impact of both interventions, an intention-to-treat approach will be assumed to evaluate the impact on the primary and secondary outcomes. The comparison between the treatment groups versus control groups over time (differences between and within) will be performed through mixed general linear models of analysis of variance for repeated measures. Effect sizes (Cohen's d coefficient) will also be provided between T0 and T1, T1 and T2, and T0 and T2 to determine if the intervention shows any treatment effect. A significance level of 0.05 will be set for a two-sided test, and the 95% confidence interval will be calculated. In addition, time to complete wound healing and time to favorable healing prognosis (based on the wound size) will be estimated using survival analysis, specifically, single variable and multivariable Cox proportional hazard ratio regression models and Kaplan-Meier plots. All previous analyses will include the DFU stage at the baseline as a control variable. Other confounding clinical factors associated with the outcome variables will also be considered in the statistical analysis.
The data will be analyzed using the Rstudio, R version 3.6.2 (R Core Team, Vienna, Austria) and the SPSS statistics, v. 24.0 (IBM Corp., Armonk, New York, NY, USA).
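As a small illustration of the planned effect-size reporting, Cohen's d for a between-group comparison can be computed as below. The data shown are hypothetical, and the pooled-standard-deviation formulation is only one of several acceptable variants of Cohen's d.

```python
import numpy as np

def cohens_d(group_a, group_b):
    """Cohen's d with a pooled standard deviation (independent groups)."""
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled_sd

# Hypothetical wound-area change scores (T0 minus T1) for treatment vs. control
treatment = [3.1, 2.4, 4.0, 2.9, 3.6, 2.2]
control = [1.8, 2.0, 1.2, 2.5, 1.6, 1.9]
print(f"Cohen's d = {cohens_d(treatment, control):.2f}")
```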
Ethics
Only patients that provide written consent to participate in the trial and sign a consent form (to allow access to their medical records) will be included in the study. The consent process continues throughout the study as researchers review study procedures and confirm patient's wish to continue at each session and assessment moment. All participants will be aware that they are free to withdraw from the trial at any time without any effect on their future standard of care medical treatment. For the qualitative nested study, patients and caregivers will receive a specific written informed consent to be signed.
The personal data of participants will be kept confidential before and during the study by pseudonymization, with the participant's name being coded. Names and other identifying information will be kept on a single page in the database, only accessible to Researcher 1. At the end of the study, participant anonymization will be fully guaranteed. All personal data collected within the framework of this study will be preserved until the final publication of the results, not exceeding a five-year period. After this period, all data will be destroyed (paper records and biochemical samples) or eliminated (database records).
Discussion
Given the major challenge of distress in patients with DFUs, and the limited data evaluating the healing process and QoL of patients with chronic DFUs, this study proposes to evaluate the efficacy of two low-cost stress reduction interventions in clinically distressed patients with chronic DFUs.
This paper describes the PSY-DFU study protocol, the first national and international multicenter study to address the contribution of psychological variables in DFU healing and QoL. The study also tests the effects of two psychological interventions: individual relaxation and hypnosis, both with guided imagery, on wound healing, physiological indicators of wound healing prognosis, and QoL.
Many complicating factors of diabetes compromise the healing process, making effective treatments challenging. The compliance rate in the treatment groups is one of the major difficulties experienced so far. This is partly because patients with chronic DFUs often have a long history of medical care, and yet many fail to adhere to medical treatment and self-management, thus developing negative expectations about psychological interventions. Moreover, in the treatment groups, when the DFU is completely healed, the intervention ends, which also contributes to the low compliance rate. Regarding DFU progress monitoring, this study would benefit from more advanced and accurate wound assessment techniques, such as imaging devices. However, none of the three participating hospitals has yet adopted imaging systems to routinely measure DFUs. In addition to the limited strategies to improve adherence to treatments and the use of more traditional tools to measure chronic DFUs, the non-fully blinded design also represents a limitation of this research.
The multicenter and randomized design of this PSY-DFU study, together with the used mixed-methods approach, is one of its major strengths. Results will answer the question, "Which psychological intervention is more effective and has more impact on the primary and secondary study outcomes in clinically stressed patients?" The findings will help to clarify the mechanistic underpinnings of the relationship between distress and chronic wound healing, the knowledge that is still very limited in general and nonexistent in DFU. The RCT is enhanced by the qualitative nested study, as qualitative data are expected to refine findings regarding the effectiveness of each intervention, allowing a better understanding of the impact of psychological factors on the DFU healing process.
Finding psychological and biological clues in specific groups of patients with chronic DFUs will help health professionals tailor better, more precise treatments that could prevent future diabetic foot wounds from developing or recurring. Specifically, identifying patients' risk profiles will provide health professionals with important information regarding the psychological vulnerabilities that may hamper DFU healing. Additionally, survival analysis will help to identify the best assessment moments at which to initiate psychological interventions that contribute to DFU healing and prevent the onset of new wounds. Therefore, this study has important clinical relevance regarding DFU healing and recurrence, as well as QoL, and may also contribute to decreasing the healthcare and medical costs associated with DFU treatments, including amputations.
Conclusions
The present study intends to contribute scientific knowledge regarding the progression of chronic DFUs in distressed patients, as well as to test the effectiveness of two stress reduction interventions on DFU healing, including the perspectives of patients and family caregivers. Thus, this study will have direct clinical relevance regarding DFU treatment and recurrence, with a significant impact on the QoL of patients and their family caregivers. Informed Consent Statement: Informed consent will be obtained from all subjects involved in the study.
Data Availability Statement: Not applicable. | 2022-07-16T15:19:11.869Z | 2022-07-01T00:00:00.000 | {
"year": 2022,
"sha1": "9012bef708a9ea8ba3275f338e071768b2099117",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/19/14/8556/pdf?version=1657863879",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "0aa20a8f625ae3f3a36fe9e5c0070fa66ea4b687",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": []
} |
247369995 | pes2o/s2orc | v3-fos-license | A potential preventive method for scar stenosis after esophageal endoscopic mucosal resection using human amniotic epithelial cells in a porcine model
Abstract Objectives The current methods employed for esophageal endoscopic mucosal resection (EMR) involve the risk of adverse postprocedural complications. Therefore, this study aimed to develop a new method to prevent stenosis following a resection procedure using human amniotic epithelial cells in a porcine model. Methods With the consent of a woman who underwent a cesarean section, amniotic epithelial cells were isolated from the amniotic membrane of the delivered placenta. Six swine were used for this study. Under general anesthesia, four EMR ulcers were created with a cap-fitted endoscope on each porcine esophagus. Of the four ulcers, the two on the oral side were treated by injecting human amniotic epithelial cells (AE group), and the remaining two on the anal side were left untreated (control group). One week after the procedure, the swine were sacrificed, and the ulcers were evaluated. The epithelialization rate was calculated by dividing the length of the epithelialized portion of each section by the length of the ulcer, which was determined using an optical microscope. Moreover, the mucosal thickening in each section was measured in terms of diameter. Results The epithelialization rate was significantly higher in the AE group than in the control group. Mucosal thickening was not significantly different between the groups. Conclusions Transplanting amniotic epithelial cells into the ulcer promoted ulcer epithelialization. Amniotic epithelial cell transplantation is a potential method for the management of ulcer scar stenosis following esophageal endoscopic submucosal dissection.
BACKGROUND
Endoscopic submucosal dissection (ESD) 1 enables en bloc resection of early malignant tumors of the gastrointestinal tract. It is widely used because of its high curability and minimal invasiveness. 2 Postoperative ulcer scar stenosis is a major complication of ESD. Endoscopic balloon dilation (EBD), local steroid therapy, oral steroid therapy, and other stenosis prevention methods have been developed to treat this complication. EBD is one of the leading treatments for benign esophageal stenosis 3 ; however, it can cause serious adverse events, including perforation and bleeding. 4 Steroid treatment prevents stenosis by suppressing inflammation and fibrosis, and local steroid injection therapy has emerged as one of the most popular treatments for preventing post-esophageal ESD stenosis. However, in studies examining local steroid therapy, the use of steroids has been reported to weaken the esophageal wall and cause perforation. 5 In addition, adverse events of oral steroid administration, including secondary adrenal dysfunction, diabetes, and infectious diseases, have become problematic. 6 Regenerative medicine offers an approach to stenosis prevention that is free of these adverse events. Human embryonic stem (ES) cells 7 are an established cell source in regenerative medicine, as they are pluripotent and can differentiate into any specialized cell type. However, ES cells are harvested from fertilized human eggs and are, therefore, subject to ethical problems.
Sakurai et al. 8 noted that epithelialization of an ulcer surface was significantly promoted when epithelial keratinocytes isolated from human oral mucosal tissue were locally injected into the submucosa of a porcine esophagus immediately after endoscopic mucosal resection (EMR). Ohki et al. 9,10 specified a method of culturing oral mucosal epithelial cells, preparing a cell sheet, and transplanting it after esophageal ESD surgery. Although this technique can prevent stenosis and has been applied clinically, it has not been widely used because processes such as culturing are complicated and the overall cost is high. Therefore, we focused on the amniotic membrane. Amniotic epithelial cells are derived from the epiblast (upper blastoderm layer) that forms the embryo, and the amniotic membrane is composed of amniotic epithelial cells, a basement membrane, and a stromal layer. Since the epiblast gives rise to the endoderm, mesoderm, and ectoderm, it has long been hypothesized that amniotic epithelial (AE) cells have stem cell-like properties. 11,12 Several analyses of the pluripotency of AE cells have been performed, and differentiation into hepatocyte-like cells 11,13 and insulin-producing islet cell-like cells 14 has been reported. AE cells promote ulcer healing by producing cytokines. 15 In the field of ophthalmology, amniotic membrane transplantation has been performed for intractable ocular surface diseases and found to be highly effective. 16 We hypothesized that AE cells promote the early healing of esophageal EMR/ESD ulcers. Therefore, we believe that the transplantation of amniotic membrane cells into esophageal ulcers could be a new preventive method for ulcer scarring. We verified the efficacy of this method by experimentally creating an endoscopically treated ulcer in the porcine esophagus and transplanting AE cells into it.
Amniotic membrane collection, AE cell separation, and thawing
This study was approved by the Ethics Committee of the Graduate School of Medicine, Tohoku University (Permission number: 2019-1-430). The amniotic membrane was collected with the consent of a pregnant woman undergoing a scheduled cesarean section at the Department of Obstetrics, Tohoku University Hospital. The amniotic membrane was collected from the placenta, which was removed after delivery by cesarean section. The isolation of AE cells was performed according to the protocol of Gramignoli et al. 17 TrypLE Select Enzyme (10X) (Thermo Fisher Scientific, Waltham, MA, USA) was added to 1 g of amniotic membrane and incubated at 35 rpm for 30 min at 37°C to isolate AE cells. The AE cells that had undergone the filtration process were placed in the cell preservation solution NutriFreez D10 (Biological Industries, Cromwell, CT, USA) and stored frozen at -80°C or lower.
Cryopreserved AE cells were thawed at 37°C for 1 min and washed with Dulbecco's phosphate-buffered saline (PBS). Two milliliters of normal saline were then added per 1.0 × 10⁷ AE cells.
Fluorescent labeling of AE cells
CellTracker (Thermo Fisher Scientific) CM-DiI was used as the fluorescent label. The adjustment of CM-DiI was performed according to the protocol of the package insert and the reference of Fang et al. 18 The AE cell suspension was prepared by adding PBS to 1.0 × 10⁶ cells/ml, and CM-DiI was added to a final concentration of 5 µM. This was then incubated at 37°C for 5 min and then at 4°C for 20 min for labeling. After labeling, the cells were washed with PBS and suspended in 2 ml of normal saline per 1.0 × 10⁷ AE cells for use.
Transplantation of AE cells into porcine esophageal ulcers
Six swine were used for this study. Under general anesthesia, four EMRC (endoscopic mucosal resection using a cap-fitted endoscope) ulcers were created in each porcine esophagus. The EMRC method was as follows. Saline was locally injected into the submucosa to create a bulge. A tip hood (cap) (Distal Attachment, MAJ-295; Olympus Medical Systems Corp., Tokyo, Japan) was attached to the gastrointestinal endoscope. We created a loop with a high-frequency snare so that it could be hooked on the claw of the tip hood. The mucosa was aspirated using the gastrointestinal endoscope, and the snare was then closed. The generator used was the VIO300D (ERBE Elektromedizin GmbH, Tübingen, Germany), and the settings were: ENDO CUT effect of 2, FORCED COAG effect of 3, and a maximum wattage of 60. The strangulated tissue was appropriately coagulated and incised, and the mucosa was excised to create an ulcer.
FIGURE 1 Schematic diagram of the EMRC ulcers. (a) Multiple EMRC ulcers were created in one pig; the diagram shows four ulcers, of which amniotic epithelial cells were injected locally into the two oral-side ulcers. (b) Amniotic epithelial cells were locally injected at four points on the ulcer margin; a 2 ml suspension containing 1.0 × 10⁷ amniotic epithelial cells per ulcer was locally injected. EMRC, endoscopic mucosal resection using a cap-fitted endoscope.
Of the four ulcers, the two on the oral side were treated with human AE cell transplantation (AE group), and the remaining two on the anal side were left untreated (control group) (Figure 1a). For AE group ulcers, a suspension of 1.0 × 10⁷ AE cells, in a volume of 2 ml, was locally injected into the ulcer; 0.5 ml was locally injected into each of the four points on the ulcer margin, as shown in Figure 1b.
Only drinking water was given on the first day of the operation. Feeding was resumed 1 day after the operation. Famotidine 40 mg/day was administered up to 3 days after surgery and acetaminophen (20% fine granules) 800 mg/day was administered for 1 week after surgery for analgesic purposes.
One week later, general anesthesia was administered, and each ulcer was observed via gastrointestinal endoscopy. After observation, under deep anesthesia, a 20% potassium chloride solution was intravenously administered to induce death, and the esophagus was removed. There were two reasons for assessing epithelialization 1 week after surgery. First, existing studies have reported mild to severe stenosis 14 days after ESD. 19,20 Euthanasia was to be considered if the oral intake of the pigs deteriorated significantly, but the experiment could be carried out as scheduled within 1 week after the operation, so we could detect signs of slight stenosis. Second, Ota et al. stated that ulcer healing after esophageal EMR/ESD takes as little as 21 days. 21 When an ulcer is completely healed, it would be difficult to compare epithelialization, so we decided to observe the ulcers 1 week later.
Pathologic evaluation
Each excised specimen was fixed using 10% phosphate-buffered formalin. After embedding in paraffin, the samples were sliced and hematoxylin and eosin staining, as well as Elastica Masson staining, was performed. Each ulcer was evaluated using an optical microscope and the imaging software cellSens Standard (Olympus Corp.). An ulcer was defined as the part where the mucosa was thinned, the muscularis mucosae were torn, and the epithelium was defective. Epithelialization was defined as the area where the stratified squamous epithelium was regenerated and keratinized within the ulcer. The length of the ulcer in each section and the length of the epithelialized part were measured with an optical microscope. The length of the epithelialized part of each section, divided by the length of the ulcer, was defined as the epithelialization rate. To evaluate the thickening of the mucosa due to fibrosis caused by ulcer scarring, the maximum diameter from the upper end of the muscularis propria to the upper end of the mucosa in each section was measured as the mucosal thickening diameter (Figure 2).
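The two histologic endpoints defined above reduce to simple per-section arithmetic. The sketch below shows how section-level measurements could be aggregated per ulcer; the measurements are hypothetical values in micrometres, not the study data.

```python
import statistics

def epithelialization_rate(epithelialized_um, ulcer_um):
    """Rate per section: epithelialized length divided by total ulcer length."""
    return epithelialized_um / ulcer_um

# Hypothetical sections of one ulcer: (epithelialized length, ulcer length) in micrometres
sections = [(3200, 9100), (4100, 8800), (2500, 9500), (5200, 8700)]
rates = [epithelialization_rate(e, u) for e, u in sections]
print(f"Median epithelialization rate for this ulcer: {statistics.median(rates):.1%}")

# Mucosal thickening: maximum diameter per section, summarized per ulcer
thickening_um = [1450, 1620, 1390, 1580]
print(f"Median mucosal thickening diameter: {statistics.median(thickening_um)} um")
```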
In addition, a fluorescence microscope was used to assess unstained slides to observe the dynamics of the labeled AE cells.
FIGURE 2 Histological findings one week after EMRC (Elastica Masson staining). #: long diameter of the ulcer; ★: epithelialized part; *: non-epithelialized part; ○: maximum mucosal thickening diameter of the ulcer. The length of the ulcer and the length of the epithelialized part were measured in each section, and the epithelialization rate for each ulcer was then determined. In addition, the maximum diameter of mucosal thickening was measured for each section. EMRC, endoscopic mucosal resection using a cap-fitted endoscope.
Statistical analysis
The statistical software JMP Pro 15.0.0 (SAS Institute, Cary, NC, USA) was used for statistical analysis. The median ulcer length, epithelialized length, epithelialization rate, and mucosal thickening diameter were compared between the AE and control groups. Comparisons were performed using the Wilcoxon test, and statistical significance was set at p < 0.05.
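JMP's "Wilcoxon test" for two independent groups corresponds to the Wilcoxon rank-sum (Mann-Whitney U) test. An equivalent computation in Python, using hypothetical per-ulcer epithelialization rates rather than the study's data, might look like this.

```python
from scipy.stats import mannwhitneyu

# Hypothetical median epithelialization rates (%) per ulcer in each group
ae_group = [60.4, 72.1, 35.0, 87.3, 55.6, 48.9, 91.0, 32.7, 66.5, 70.2]
control_group = [39.1, 26.0, 59.2, 41.5, 30.8, 44.7, 28.3, 52.0, 36.9, 47.1]

stat, p_value = mannwhitneyu(ae_group, control_group, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.4f}")  # significance threshold p < 0.05
```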
EMRC ulcer creation with gastrointestinal endoscopy
Twenty-four EMRC ulcers were created in the six pigs. Two EMRC ulcers were fused in one of the pigs, resulting in 22 experimental EMRC ulcers. An AE cell suspension was locally injected into the ulcers of the AE group ( Figure 3). For pigs with healed ulcers, one of the two remaining effective ulcers (the oral side) was used as the AE group. The purity of the isolated human AE cells was 100% and the median viability was 80.50% (interquartile range [IQR]: 79.2%-82.5%). There were no adverse events, such as perforation, during the creation of the ulcers.
FIGURE 3 Amniotic epithelial cell transplantation. An endoscopic image of amniotic epithelial cell transplantation; the amniotic epithelial cell suspension was locally injected into the EMRC ulcer. EMRC, endoscopic mucosal resection using a cap-fitted endoscope.
The pigs survived for 1 week after the surgery. During this course, there were no other serious adverse events, and the general condition of the specimens was good as they had been treated as safely as possible.
Observation of ulcers at 1 week using gastrointestinal endoscopy
One week later, when the esophagus of each pig was observed with a gastrointestinal endoscope under gen-
Observation of excised specimens
No significant macroscopic changes were observed in the ulcers of either the AE group or the control group ( Figure 5).
The total number of ulcers created by EMRC was 22, comprising 11 ulcers each in the AE and control groups. Pathologically confirmed sections included 83 and 80 sections in the AE and control groups, respectively. The median ulcer major axis measured 24 mm (21-32 mm) in the AE group and 27 mm (22.5-31.8 mm) in the control group; there was no significant difference between the two groups (p = 0.6355). The median EMRC specimen on the major axis measured 21 mm (16-25 mm) in the AE group and 22 mm (12-31 mm) in the control group, with no significant difference between the two groups (p = 0.9738). The AE group had a median ulcer length of 9432. 5-5368.1 µm). The epithelialization rate was obtained by dividing the length of the epithelialized portion of each section by the length of the ulcer, with the exception of one ulcer in the AE group and one ulcer in the control group, in which the epithelialization rate of each section was 100%. The final evaluation included 58 sections of 10 ulcers in the AE group and 55 sections of 10 ulcers in the control group, excluding one section at each end of each ulcer. The median epithelialization rate in the AE group was 60.4% (32.7%-87.3%) and the median in the control group was 39.1% (26.0%-59.2%). The epithelialization rate was significantly higher in the AE group (p = 0.0178) (Figure 6).
Observation of fluorescently labeled AE cells
A collection of cells believed to be fluorescently labeled AE cells was observed in the preparation of the AE group whereas no cell aggregation was observed in the control group (Figure 8). It could not be confirmed if these cells had differentiated into esophageal mucosal epithelial cells and/or other cells (Figure 9).
DISCUSSION
We conducted the first study to use AE cells isolated from the human amniotic membrane in the prevention of post-esophageal ESD ulcer scar stenosis. The results of this study suggest that local injection of AE cells into ESD ulcers promotes the epithelialization of ulcers and increases the rate of epithelialization. In addition, fluorescently labeled AE cells were found to be localized within the locally injected submucosa of the esophagus.
Amniotic membrane cells are known to express growth factors that promote wound healing. Koizumi et al. 22 speculated that growth factors, such as epidermal growth factor and keratinocyte growth factor, expressed in amniotic cells contribute to shortening the wound-healing period, and it has also been found that AE cells play a role in regulating cell proliferation and promoting the induction and differentiation of keratinocytes through the mitogen-activated protein kinase pathway and the phosphoinositide 3-kinase/Akt pathway. When a mouse model of a wound with a skin defect was treated by injecting a medium containing AE cells, the healing of the wound was significantly improved as compared with controls. 15 In this study, it is possible that the activation of growth factors and the promotion of cell differentiation involved in wound healing contributed to epithelialization, leading to the improved ulcer healing observed in the AE group. It was not clear from the fluorescence microscopy whether the AE cells had differentiated into esophageal mucosal epithelial cells; however, the transplanted AE cells remained at the locally injected site even after 1 week. This is the first study to observe the dynamics of amniotic cells transplanted into ulcers after esophageal endoscopy using techniques described in previous studies.

F I G U R E 5 In the porcine esophageal specimen in the photograph, the right side is the oral side and the left side is the anal side. Ulcers Nos. 1 and 2 are the AE group and ulcers Nos. 3 and 4 are the control group. Macroscopically, there was no significant change in ulcer healing. EMRC, endoscopic mucosal resection using cap-fitted microscope; AE, amniotic epithelial cells

F I G U R E 6 Comparison of epithelialization rates between the two groups. A graph comparing the epithelialization rates in the AE and control groups. The epithelialization rate was higher in the AE group, with a statistically significant difference (*p < 0.05). AE, amniotic epithelial cells

F I G U R E 7 Comparison between the two groups regarding mucosal thickening diameter. Graph of mucosal thickening diameter comparing the AE and control groups. No significant differences were observed between the two groups. AE, amniotic epithelial cells

Figures (b) and (c) show hematoxylin and eosin staining at the same site; no significant infiltration of histiocytes was observed in the surrounding areas. No significant infiltration of histiocytes was observed in the control group either. AE, amniotic epithelial cells; TRITC, tetramethylrhodamine isothiocyanate
We found no significant difference in the mucosal thickening diameter between the AE and control groups. Liu et al. 19 cited mucosal deficiency as one contributing factor in scar stenosis of post-esophageal ESD ulcers and stated that the larger the mucosal deficiency, the more likely it was to become stenotic. Unlike circumferential ESD ulcers, the EMRC ulcer created in this study was a local ulcer with a partial mucosal defect. We hypothesize that this may be the reason why there was no significant difference in the diameter of mucosal thickening. Scar stenosis in post-ESD ulcers is thought to be due in part to the persistence of local chronic inflammation during healing, similar to the development of hypertrophic scars on the skin. 8, 23 Hermans 23 stated that rapid re-epithelialization was an important factor in preventing hypertrophic scars. Even in post-esophageal ESD ulcer scars, scar stenosis may therefore be reduced if re-epithelialization is accelerated by promoting healing. It has also been reported that amniotic membrane cells have anti-inflammatory effects. Miyamoto et al. administered a suspension of amniotic cells by enema to rats with induced colitis, which improved intestinal inflammation and reduced the levels of infiltrated neutrophils and monocytes in the mucosa as well as the expression of inflammatory cytokines. 24 It is believed that the anti-inflammatory effect of the amniotic membrane prevents chronic local inflammation, which may prevent ulcer scar stenosis.
There are several limitations to this study. First, the study was performed with EMRC rather than ESD; in future work, we would like to examine whether a similar effect occurs in circumferential ESD ulcers. Second, although AE cells may exert anti-inflammatory effects and their expression of growth factors may contribute to the promotion of epithelialization, this study did not evaluate the number of inflammatory cells or quantify the growth factors, and these points need to be clarified in the future. Preclinical work, including further animal experiments and other studies, should also be repeated with the goal of transferring the findings into clinical applications.
In conclusion, to develop a method for preventing esophageal post-ESD ulcer scar stenosis, AE cells were transplanted and evaluated in animal experiments. Transplantation of AE cells significantly promoted EMRC ulcer epithelialization, possibly through the activation of growth factors and the promotion of cell differentiation by the AE cells. This study suggests that AE cell transplantation is a potential method for preventing post-ESD ulcer scar stenosis.
AC K N OW L E D G M E N T S
We thank the Regenerative Medicine Unit of the Clinical Research, Innovation, and Education Center, Tohoku University Hospital (CRIETO), for their help with the study.
C O N F L I C T O F I N T E R E S T
The authors declare no conflict of interest.
F U N D I N G I N F O R M AT I O N
This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors. | 2022-03-11T16:22:35.202Z | 2022-03-09T00:00:00.000 | {
"year": 2022,
"sha1": "c9344fbe48d16f1a88dbde0ecb4096c3349c4574",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": "CLOSED",
"pdf_src": "PubMedCentral",
"pdf_hash": "f3803c05389ae52e368fb7fed11f589919f237b9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
251592296 | pes2o/s2orc | v3-fos-license | Sperm competition in yellow dung flies: No consistent effect of sperm size
Abstract The male competition for fertilization that results from female multiple mating promotes the evolution of increased sperm numbers and can impact sperm morphology, with theory predicting that longer sperm can at times be advantageous during sperm competition. If so, males with longer sperm should sire more offspring than competitors with shorter sperm. Few studies have directly tested this prediction, and findings are inconsistent. Here we assessed whether longer sperm provide a competitive advantage in the yellow dung fly (Scathophaga stercoraria; Diptera: Scathophagidae). Initially, we let brothers with different temperature‐mediated mean sperm lengths compete – thus minimizing confounding effects of genetic background – and found no clear advantage of longer sperm. We then used flies from lines subjected to bidirectional selection on phenoloxidase activity that had shown correlated evolutionary responses in sperm and female spermathecal duct lengths. This experiment also yielded no main effect of sperm size on siring success. Instead, there was a trend for a shorter‐sperm advantage, but only when competing in females with longer spermathecal ducts. Our data corroborated many previously reported findings (last‐male precedence, effects of copula duration and body size), suggesting our failure to find sperm size effects is not inherently due to our experimental protocols. We conclude that longer sperm are not competitively superior in yellow dung flies under most circumstances, and that, consistent with previous work, in this species competitive fertilization success is primarily determined by the relative numbers of sperm competing.
If longer sperm confer an advantage in competitive fertilization, an obvious, direct prediction is that males with longer sperm should sire more offspring than sperm competitors producing smaller sperm. We tested this prediction in the yellow dung fly Scathophaga stercoraria, the classic model species for studies of sperm competition and sexual selection for over 50 years (Simmons et al., 2020). In this species, sperm vary considerably in length due to both environmental and genetic effects (Ward, 2000; Ward & Hauschteck-Jungen, 1993). Previous work has documented an influence of sperm length on sperm storage by yellow dung fly females, but did not investigate subsequent paternity (Otronen et al., 1997). Moreover, sperm length also showed no short-term micro-evolutionary response to experimental manipulation of sperm competition risk, so as yet there is no evidence for greater competitiveness of longer sperm in this species.
Combining two independent experiments in the yellow dung fly, we here directly tested whether sperm length affects paternity during sperm competition. Initially, we relied on environmentally mediated sperm length variation while simultaneously minimizing potential confounding effects of genetic background by competing brothers of different sperm lengths with one another. We then capitalized on a correlated genetic response of sperm length to artificial selection on phenoloxidase (PO) activity (Schwarzenbach, 2006; Schwarzenbach & Ward, 2006). For unknown reasons, but perhaps because of trade-offs (see Hosken, 2001), selection for low PO activity resulted in males with longer sperm and females with longer spermathecal ducts in each of three replicate selection lines.
| MATERIALS AND METHODS
All flies in the experiment using environmentally mediated sperm length variation among competing brothers stemmed from laboratory cultures held and reared at standard conditions for 2-3 generations (18°C, 60% humidity, 13 h photoperiod). To obtain brothers with different sperm lengths, we split the clutch of each mother and reared one half at 15°C and the other at 23°C, as temperature had previously been shown to systematically affect sperm length in yellow dung flies. For our matings performed at room temperature (20-22°C), we selected one random male per family emerging from each temperature treatment to compete for fertilization in a random female from another random family. Experimental adults were held for ca. 2 weeks after emergence under ad libitum food conditions until they reached sexual maturity.
We used two slightly different experimental designs: paired and unpaired. In the paired design, we allowed the same pair of brothers to first copulate in a particular (random) order with a non-related female, and thereafter in reverse order with one of her sisters. In the unpaired design, pairs of brothers were randomly assigned to a copulating sequence (long-spermed males first or last), but each pair competed only once. Our tests were blind because we did not know sperm length until after the experiment. For the mating trials we placed a female in a small glass vial, added one of the males, and then recorded copulation duration to the nearest minute before replacing the male. In the paired data set, we gave the males at least 30 min to recover before pairing them with the second female in reverse mating order.
In the tests using selection lines, flies stemmed from populations that had been subjected to bidirectional artificial selection on PO levels. Whereas all three replicate lines selected for high PO concentration generated flies with short sperm (in males) and short spermathecal ducts (in females) as a correlated response, selection for low PO levels produced male flies with long sperm and female flies with long spermathecal ducts (Schwarzenbach, 2006; Schwarzenbach & Ward, 2006). After 13 generations of selection, we paired randomly picked flies among the three replicate lines within each PO selection regime to offset any potential inbreeding effects within lines. From these crosses, we derived three new crossed experimental lines (i.e. 'short-sperm' lines from the three high PO crosses, and 'long-sperm' lines from the three low PO crosses) to stage competitive matings between them. In brief, we allowed virgin females of either selection regime to sequentially mate with two sexually naïve males, one from each regime, half with short-sperm males mating first, and half with long-sperm males first. Within each mating trio, the male and female of the same selection regime came from separate line crosses, in all possible combinations between regimes, lines and sexes. As in the tests competing brothers, we combined each virgin female with their first male in a glass vial and supplied the second male after successful initial copulation.
In both experiments, we allowed each double-mated female to oviposit her first clutch of eggs into a smear of fresh dung on a filter paper (typically within 30 min of the second copulation), which we then transferred into a plastic container with abundant dung for larval development at 18°C. We measured the length of one hind tibia of all individuals as a measure of body size (Simmons & Ward, 1991) and froze all parents and emerging offspring at −80°C for later measurement of internal morphology (described below; experiment 1 only), DNA extraction and genotyping. We used as many microsatellite loci as needed to unequivocally assign paternity to one or the other male, following well-established protocols and using Applied Biosystems GeneMapper software (Bussière et al., 2010; Demont et al., 2011, 2012, 2021). We typically genotyped a random subset of 16-20 offspring from a female's first clutch (typically comprising 30-70 eggs). Since approximately 15% of all families produced only partial (i.e. small) clutches, we included all families with at least eight offspring.
For each competing brother in the first experiment, we removed both testes in insect Ringer solution, released the sperm from the proximal third of the testis (relative to the ejaculatory duct) into a drop of solution on a microscope slide, and measured the total length (head plus tail) of 20 sperm using ImageJ to compute his mean sperm length (Ward & Hauschteck-Jungen, 1993). We further measured the lengths of all three spermathecal ducts per female and the area (length and width) of their corresponding spermathecae. We also took the same measurements for a random subsample of 15 individuals per replicate line in the second experiment. Although these were not the specific individuals used in our sperm competition experiment, their sperm and spermathecal ducts had diverged to disparate lengths after the 13 generations of selection. Specifically, sperm lengths averaged 216.8 ± 0.95 and 212.3 ± 0.62 μm in the low and high PO selection regimes, the corresponding spermathecal duct lengths being 708 ± 6.8 and 679 ± 18.7 μm, respectively (means ± SE of three replicate lines, N = 45; Schwarzenbach, 2006).
| Statistical analyses
Although biologically realistic (Simmons, 2001;Simmons & Siva-Jothy, 1998), second-male offspring proportions (P 2 ) of 0 or 1 may result from one of the two matings being unsuccessful, complete sperm displacement (for P 2 = 1) or from total male infertility. We excluded five brother pairs in the paired data set of the first experiment for which the same competitor achieved no paternity whatsoever, thus potentially indicating infertility. For the remaining trials, we conducted our analyses both including and excluding cases of P 2 = 0 or 1, as it was not possible to ascertain successful sperm transfer.
For all analyses of relative paternity shares, we performed generalized linear mixed-effects models (GLMMs) with binomial error structure on the proportion of offspring sired by the second male (P 2 ), including as fixed effects female body size and the relative differences in body size, copula duration and sperm length between the competitors. We initially also included the two- and three-way interactions between female body size and the relative differences in copula durations and sperm lengths, but subsequently dropped non-significant interactions from our final models. We considered these interactions because female size predicts the size of the sperm-storage organs and thus their storage capacity (Schwarzenbach, 2006; Thüler et al., 2011) and so potentially the degree of sperm displacement (Lüpold, Reil, et al., 2020), and because female size might itself influence sperm allocation by males due to size-dependent fecundity (Kelly & Jennions, 2011; Wedell et al., 2002). Sperm transfer (and ultimately paternity) increases with copula duration (Demont et al., 2021; Parker & Simmons, 1994; Simmons et al., 1999), which could interact with female size (above) or with relative sperm lengths, capturing potential trade-offs between sperm size and number. Finally, the interaction of relative sperm lengths and female size could provide information on differential sperm competitiveness in response to female sperm-storage structures (Miller & Pitnick, 2002). To account for the non-independence of data for competing brothers tested in both mating orders in the paired assay, we included male family identity (brother pair) as a random effect; an additional observation-level random effect served to mitigate overdispersion.
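To make the outcome variable and predictors concrete, the following hedged sketch shows one way P 2 and the relative-difference predictors could be computed and fed into a binomial model in Python. The counts are invented, the scaling of the relative differences by the pair mean is an assumption (the exact standardization is not restated in this excerpt), and the random effects of the actual GLMM (brother pair, observation level) are deliberately omitted, so this is a simplified illustration rather than the model fitted in the study.

```python
# Hypothetical trial data (not the study data): offspring counts and male traits.
import numpy as np
import pandas as pd
import statsmodels.api as sm

trials = pd.DataFrame({
    "sired_by_male2": [12, 5, 16, 9, 14, 7],
    "n_genotyped":    [18, 17, 20, 16, 19, 18],
    "copula_min_m1":  [32.0, 41.0, 28.0, 35.0, 30.0, 38.0],
    "copula_min_m2":  [40.0, 30.0, 45.0, 36.0, 44.0, 29.0],
    "sperm_um_m1":    [212.0, 218.0, 210.0, 215.0, 211.0, 217.0],
    "sperm_um_m2":    [219.0, 213.0, 221.0, 214.0, 220.0, 212.0],
})

def rel_diff(first: pd.Series, second: pd.Series) -> pd.Series:
    # Second-male value minus first-male value, scaled by the pair mean
    # (one plausible standardization; an assumption of this sketch).
    return (second - first) / ((first + second) / 2.0)

trials["rel_copula"] = rel_diff(trials["copula_min_m1"], trials["copula_min_m2"])
trials["rel_sperm"] = rel_diff(trials["sperm_um_m1"], trials["sperm_um_m2"])
trials["p2"] = trials["sired_by_male2"] / trials["n_genotyped"]

# Plain binomial GLM on (successes, failures); the study's GLMM additionally
# included random effects, which are omitted in this simplified sketch.
endog = np.column_stack([trials["sired_by_male2"],
                         trials["n_genotyped"] - trials["sired_by_male2"]])
exog = sm.add_constant(trials[["rel_copula", "rel_sperm"]])
print(sm.GLM(endog, exog, family=sm.families.Binomial()).fit().summary())
```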
The second experiment was based on N = 126 double matings, though missing data reduced this data set to N = 117 for our analyses. Here, we analogously performed our final GLMM on the proportion of offspring sired by the second male (P 2 ; paired variable) with female and last-male selection regimes as fixed factors (including their interaction), female body size and the relative differences in body size and copula duration between the two males as fixed effects, and male × female replicate line combinations (N = 24) and experimental block as random effects. Again, an observation-level random effect addressed overdispersion. Note that sperm and spermathecal duct lengths had been pre-determined in other flies from the selection lines and found to be distinct; hence, we did not additionally include these variables in our analysis because they were subsumed in the fixed effect of selection regime.

| RESULTS

When considering the relative difference in sperm length between brothers across all N = 81 competitive mating trials with complete data (including both mating orders in the paired assay), there was some evidence for a competitive advantage of longer sperm as estimated by the proportional paternity of the second male (P 2 = 0.63 [0.55-0.71]). In the absence of a direct main effect of the relative difference in sperm lengths (β = 0.27 ± 0.32, χ²₁ = 0.67, p = 0.47), there was a weak, albeit statistically non-significant, trend for P 2 being jointly explained by the relative difference in sperm lengths interacting with the relative difference in copula durations (β = 0.74 ± 0.37, χ²₁ = 4.04, p = 0.07; Figure 1a). P 2 increased with the relative difference in copula durations (positive [main] effect: β = 1.06 ± 0.32, χ²₁ = 11.00, p = 0.004), as well as with the relative difference in male body sizes (positive [main] effect: β = 1.10 ± 0.33, χ²₁ = 10.45, p = 0.006). However, there was no effect of female body size (β = 0.36 ± 0.33, χ²₁ = 1.21, p = 0.32) or the type of assay (paired versus unpaired: χ²₁ = 3.05, p = 0.11). Because P 2 was either 0 or 1 for 29 of the 81 trials, which could (but need not) indicate (unsuccessful) copulations without sperm transfer by the second or the first male, respectively, we repeated the above analysis for those 52 trials with at least some paternity by both competitors (strong inference subset of data). P 2 was still biased toward the second male (P 2 = 0.56 [0.48-0.63]), although its 95% confidence interval now included equal paternity between competitors.
F I G U R E 1 (a) The predicted proportion of offspring sired by the second male P 2 (with 95% confidence bands) increases with the relative difference in copula durations and in sperm lengths between the competing brothers (first experiment), (b) whereby female body size (HTL: Hind-tibia length) incurs additional complex effects. To depict interactions among the continuous variables, sperm length and female body size were partitioned into three bins representing the mean, −1 SD smaller, and +1 SD larger than the mean values. The two-way interaction in panel a is based on the full data set (N = 81) of the first experiment, whereas the three panels in B reflect the three-way interaction of the reduced data set (removing P 2 values of 0 and 1; N = 52)
| Tests using phenoloxidase selection lines
The experiment using selection lines revealed a three-way interaction between female and second-male selection regime and relative copulation durations between males (N = 117 trials across 24 combinations of male and female replicate selection lines; β = −2.36 ± 1.09, χ²₁ = 4.97, p = 0.04; Figure 2a). Hence, when competitive fertilization occurred in females with short (or normal) spermathecal ducts (high PO), second males with short sperm (high PO regime) sired most offspring, regardless of the relative copulation durations between competitors. Second males of the long-sperm (low PO) regime, however, lost paternity if their copulation was not longer than that of their competitor. When females had long spermathecal ducts, it was the short-spermed second males that lost paternity in case of relatively short copulations, although they clearly outperformed their long-spermed competitors whenever they copulated for longer. Besides this three-way interaction, however, the remaining interactions and all main effects were not statistically significant (all χ²₁ ≤ 3.05, p ≥ 0.16).
When excluding the cases with P 2 of 0 or 1 (as above), leaving N = 64 trials, the non-significant three-way interaction and the two-way interactions involving copula duration did not improve model fit and were therefore removed from the model (χ²₁ < 1.43, p > 0.23). Among the remaining predictors, a two-way interaction between male and female PO types indicated a short-sperm advantage in females with long spermathecal ducts (β = −1.40 ± 0.55, χ²₁ = 6.39, p = 0.02), but equal paternity success of sperm types when spermathecal ducts were short (Figure 2b). Additionally, there was a positive effect of the relative difference in copula durations (β = 0.41 ± 0.13, χ²₁ = 9.45, p = 0.007), and a negative effect of female body size (β = −1.67 ± 0.81, χ²₁ = 1.06, p = 0.05). The relative difference in male body sizes or sex-specific PO types did not affect P 2 (all χ²₁ ≤ 1.06, p ≥ 0.33).
| DISCUSSION
Our major finding was that longer sperm did not confer any consistent paternity advantage in yellow dung flies, the classic model species for studies of sperm competition and sexual selection (Simmons et al., 2020). In all tests, paternity was primarily biased toward the second of two competing males, consistent with numerous previous reports for this species (reviewed in Simmons, 2001; Simmons et al., 2020). In tests taking advantage of temperature-mediated sperm length variation across 81 brother-brother competitions to minimize genetic influences, this second-male advantage was greater when second males had relatively longer sperm and copulated for longer (Figure 1). In the reduced, stronger inference data set (excluding P 2 = 0 or 1), the interaction between male traits was further influenced by the size of the female, and thus likely by the length of their spermathecal ducts (Schwarzenbach, 2006; Thüler et al., 2011). An interaction between female spermathecal duct length and both relative sperm length and relative copula duration of competitors was also observed in our second experiment, using flies from PO selection lines with (genetically) correlated responses in sperm length and female spermathecal duct length (Schwarzenbach, 2006; Schwarzenbach & Ward, 2006). Taken together, these results point toward complex but subtle interactions between multiple female and male reproductive traits that likely underlie competitive fertilization success, rather than any straightforward expected advantage of longer sperm due to, for instance, swimming speed or sperm displacement capacity (Fitzpatrick & Lüpold, 2014; Lüpold & Pitnick, 2018; Simmons & Fitzpatrick, 2012).
Our experimental manipulation to generate brothers of different sperm length was prompted by previous, somewhat surprising results that warmer rearing temperature systematically elongates the sperm of emerging male yellow dung flies, for unknown functional reasons. Hot temperatures are also well known to affect various other aspects of reproductive success of this (and many other) species, including juvenile mortality or reproductive behaviour, and likely also including male fertility (Blanckenhorn et al., 2014). It is therefore possible that our hot temperature treatment may have induced physiological effects on sperm, ultimately affecting male fertility beyond the morphological changes in focus here (Sales et al., 2019; Walsh et al., 2019). In the extreme, it is possible that larger sperm are more competitive (e.g. at swimming), but other thermally induced defects offset this advantage and so confounded our results in the experiment competing brothers. Blanckenhorn et al. (2014)

Of the traits contributing to differential paternity outcomes in the yellow dung fly, copula duration is the best studied and has previously often been shown to covary positively with the number of sperm transferred and, ultimately, paternity share (Bussière et al., 2010; Demont et al., 2021; Parker & Simmons, 1994; Simmons et al., 1999). Prolonged copulations therefore likely indeed confer a simple numerical advantage among sperm, consistent with the general raffle principle predicted by classical sperm competition theory (reviewed in Parker & Pizzari, 2010). Whether sperm of different males are mixed or displaced and ultimately discarded (as is the case in the yellow dung fly), relative sperm numbers are likely to contribute to competitive fertilization success, with increased sperm production or transfer being a near-ubiquitous response to female multiple mating among the species studied so far (Birkhead & Møller, 1998; Lüpold, de Boer, et al., 2020; Parker, 1982; Parker & Pizzari, 2010; Simmons & Fitzpatrick, 2012). Our results therefore are broadly in line with this general pattern.

F I G U R E 2 (a) The predicted proportion of offspring sired by the second male P 2 (with 95% confidence bands) increases with the relative difference in copula durations between the competitors, more strongly for short-spermed males in longer female spermathecal ducts (lines resulting from phenoloxidase (PO) selection; second experiment, N = 117 trials). (b) Mean P 2 (± 95% CIs) of long- versus short-spermed males in females with short or long ducts (reduced data set with P 2 values of 0 and 1 removed; N = 64)
Beyond copula duration predicting relative sperm numbers, their interactions with relative sperm lengths in both our experiments nevertheless indicate that competitive fertilization may not merely result from numerical advantages (Parker, 1990, 1993). Yet, our two experiments were inconsistent with regard to the direction of these interactive effects on P 2 , and partly influenced by the size of the females and/or their spermathecal ducts. In our first experiment competing brothers, longer sperm tended to enhance the effect of relative copula duration (and putative sperm numbers), according to our reduced (strong inference) data set particularly when competing in relatively large females (with larger sperm-storage organs; Thüler et al., 2011; Figure 1). Competitive advantages of relatively longer sperm have been reported in various species (Fitzpatrick & Lüpold, 2014; Pitnick, Hosken, & Birkhead, 2009; Simmons & Fitzpatrick, 2012). Selection for longer sperm is thought to be particularly prevalent in small, internally fertilizing organisms, including insects in which the highly confined space of the female reproductive tract causes direct sperm interactions and often displacement of resident sperm from female sperm-storage structures by incoming sperm (Immler et al., 2011). Although the precise mechanism remains unknown, at least in Drosophila it has been shown in different experimental contexts that longer sperm are better at displacing shorter rival sperm, or at resisting displacement by them (Lüpold et al., 2012; Manier et al., 2013; Miller & Pitnick, 2002). Genetic correlations between sperm length, sperm displacement ability, and the size of female sperm-storage organs (Lüpold et al., 2016; Miller & Pitnick, 2002) may thus promote selection for longer sperm to explain the taxonomically widespread examples of interspecific covariation between these traits (reviewed by Lüpold & Pitnick, 2018).
Despite the subtle long-sperm effect found in the brother-brother competitions, our other test uncovered no such effect, as also documented in several other taxa (Boschetto et al., 2011; Dziminski et al., 2009; Gage & Morrow, 2003; García-González & Simmons, 2007; Laskemoen et al., 2010; Morrow & Gage, 2001; Simmons et al., 2003). In our second experiment using artificially selected yellow dung flies, a three-way interaction between female spermathecal duct length, relative sperm lengths and copula durations revealed that second males with shorter sperm achieved higher P 2 , however only when competing in females with long spermathecal ducts (Figure 2). An advantage of shorter sperm in longer ducts is difficult to explain, unless small sperm size translates into more sperm transferred to females. Such a conclusion seems plausible given the corresponding copula durations, but is complicated by the realized positive genetic correlation between sperm and spermathecal duct lengths following selection on PO in yellow dung flies (Schwarzenbach, 2006; see also Hosken et al., 2001; Thüler et al., 2011). This correlated response particularly contrasts with another experimental evolution study in which direct manipulation of the mating system, and thus the degree of post-mating sexual selection, resulted in no divergence in sperm or spermathecal size, but instead in significantly larger testes and relatively higher paternity shares in polyandrous yellow dung fly lines.
It is therefore likely that the paternity advantage of second males with shorter sperm (from high PO lines) in females with longer spermathecal ducts (from low PO lines) was caused by some other correlated effects of immunocompetence, rather than sperm length per se (also see Arnaud et al., 2005). Since female yellow dung flies typically have three spermathecae (but occasionally four) and store sperm differentially between them (Bussière et al., 2010;Demont et al., 2021;Walters et al., 2022), it is also possible, albeit speculative, that females derived from different genetic and selective background varied in how they stored and used sperm due to different spermathecal duct lengths. Such effects could further extend to female accessory gland secretions known to affect sperm survival (Thüler et al., 2021), either by differential fluid composition or sperm sensitivity. Even if the functional links between these findings remain elusive, they do point toward the female × male × male interactions underlying paternity found here being sensitive to environmental variation, and to the limits of generalizing experimental findings on fitness outcomes. & Fitzpatrick, 2012), and likely female influences on sperm performance and biases between competing sperm via variation in their selective environment (Eberhard, 1996;Firman et al., 2017;Gasparini et al., 2020;Pitnick, Wolfner, & Suarez, 2009; including yellow dung flies: Demont et al., 2021;Thüler et al., 2021).
To date, idiosyncratic fitness outcomes among females and competing males have been studied primarily at the genotypic level (reviewed in Lüpold, Reil, et al., 2020) and have been thought to weaken sexual selection on male traits by reducing the contribution of their residual variance to paternity (Birkhead, 1998; Neff & Pitcher, 2005; Pitnick & Brown, 2000). However, as sex-specific fitness traits and the interactions between them are typically context- or condition-dependent, this might generate enough variation to facilitate directional sexual selection and even the evolution of extreme phenotypes, while at the same time overcoming the depletion of genetic variation in the population through such selection (Lüpold, Reil, et al., 2020).
ACK N OWLED G EM ENTS
We thank M. Demont | 2022-08-17T06:16:19.552Z | 2022-08-16T00:00:00.000 | {
"year": 2022,
"sha1": "8a1c536cd0ea5b18d435cddd22035ba2b281e867",
"oa_license": null,
"oa_url": null,
"oa_status": "CLOSED",
"pdf_src": "Wiley",
"pdf_hash": "285bfe700c266451f2ae91ba7c920edc65cf3693",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
213749248 | pes2o/s2orc | v3-fos-license | Disparities in Childhood Obesity in Low Socioeconomic Status and Racial/Ethnic Populations: An analytical literature review
Since childhood obesity is linked with an increased risk of obesity in adulthood, obesity in childhood and adolescence brings a multitude of adverse health outcomes including, but not limited to, cardiovascular disease, sleep apnea, diabetes, some forms of cancer, hypertension, and death. This study focuses on an analytical evaluation of disparities in childhood obesity in low socioeconomic status and racial/ethnic populations. The analytical review was conducted on the literature available online, focusing on five dimensions expressed in the following points: (1) What is the level of incidence of childhood obesity in the United States? (2) What is the definition of childhood obesity? (3) What are the factors that impact obesity? (4) What is the appropriate theoretical framework for research on childhood obesity? (5) What are the knowledge gaps and the recommended future research? The prevalence of obesity in children and adolescents is very alarming and needs to be addressed because this health status, being overweight/obese, has a significant and unfavorable impact on not only the health of young Americans today but also the future health of young Americans. Using the percentile categories to determine childhood obesity, there are noteworthy differences when comparing obesity rates by race/ethnicity, gender, and socioeconomic status. There was no significant correlation between race/ethnicity and being overweight/obese when controlling for income. When addressing disparities in childhood obesity it is important to understand not only the causes of obesity, but also other factors which may amplify the causes of obesity. Socioeconomic status during childhood, together with being male and white, was associated with the possibility of adiposity in adolescence. Exposure to media and marketing and the reduced access to and availability of quality and affordable food products are examples of factors that may amplify the causes of obesity.
Incidence of Childhood Obesity in the United States
Among the many public health concerns in the United States, childhood obesity is one of the most important. In the past three decades, the rate of obesity in children and adolescents living in the United States has increased sharply, almost tripling since the 1970s. Approximately 33% of school-aged children are overweight or obese (Ogden, Caroll, Kit, & Flegal, 2014). According to Hales, Caroll, Fryar, & Ogden (2017), in the United States, almost 1 in 5 school-age children and young people (age 6-19) are obese. The prevalence of obesity in children and adolescents is very alarming. A major reason why this public health concern needs to be addressed is the health risks associated with obesity.
Research has shown that childhood obesity is linked with an increased risk of obesity in adulthood. There is a 70% chance an individual will remain overweight or obese in adulthood if this individual was overweight as an adolescent (Freedman et al., 2005). Obesity in childhood and adolescence brings a multitude of adverse health outcomes including, but not limited to, cardiovascular disease, sleep apnea, diabetes, some forms of cancer, hypertension, and death (Rogers et al., 2015). Being overweight has a significant and unfavorable impact on not only the health of young Americans today, but also their future health. Hence the importance of addressing the causes of obesity and finding solutions that will strengthen the fight against childhood obesity.
The ever-changing demographic of diversity in racial and ethnic groups in the United States is a factor to consider when studying public health concerns such as childhood obesity. The Federal Interagency Forum on Child and Family Statistics (2013) explains the changes that are occurring in the demographics of children in the United States: There were 73.7 million children ages 0-17 in the United States in 2012, accounting for almost 24 percent of the population. Racial and ethnic diversity among America's children ages 0-17 continues to grow. By 2050, about half of the American population ages 0-17 is projected to be composed of children who are Hispanic, Asian, or of two or more races. Specifically, it is projected that 36 percent of the American population ages 0-17 will be Hispanic (up from 24 percent in 2012); 6 percent will be Asian (up from 5 percent in 2012); and 7 percent will be of two or more races (up from 4 percent in 2012) (p. vii).
In 2010, 18% of children were categorized as obese, with higher rates found among Mexican American (23%) and black (26%) children. In 2011, 22% of all children lived in poverty and more than half of the childhood population lived in substandard housing, which brings an increased risk of disease (Barr, 2014). The data mentioned above shine light on the fact that health disparities found in children differ based on the socioeconomic and racial/ethnic groups they belong to.
According to De Chesnay (2020), vulnerable populations "… are those with greater-thanaverage risk of developing health problems by virtue of their marginalized sociocultural status, their limited access to economic resources, or their personal characteristics, such as age and gender…" (p.5). Based on the data discussed above and the definition of vulnerable population, the author is led to the conclusion that children in the United States belonging to minority/ethnic groups and low socioeconomic status are considered a vulnerable population.
The author's first goal is to focus on the disparities and prevalence of childhood obesity among racial/ethnic minority and low socioeconomic populations. The second goal is to address the possible contributing factors that amplify the causes of obesity found in ethnic minority and low socioeconomic populations. The author will also discuss relevant studies and findings to address the objectives mentioned above. Lastly, knowledge gaps and future nursing research focusing on childhood obesity are discussed.
METHODS
The method of this research is an analytical review of pertinent literature in Google Scholar and other databases available on the internet.
Definition of obesity
It is important to first understand the definition and criteria used to determine childhood obesity. Obesity is described using the "…ratio of one's weight (measured in kilogram) to the square of one's height (measured in meters) …" (Barr, 2014, p. 162). Using percentile categories of this ratio to determine childhood obesity, there are noteworthy differences when comparing obesity rates by race/ethnicity, gender, and socioeconomic status.
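As a minimal illustration of the quoted definition, the sketch below computes BMI as weight in kilograms divided by the square of height in meters. Classifying a child as overweight or obese additionally requires age- and sex-specific growth-chart percentile tables, which are not reproduced here, so the sketch stops at the BMI value; the example numbers are hypothetical.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by the square of height (m)."""
    return weight_kg / height_m ** 2

# Hypothetical example: a 20 kg child measuring 1.05 m.
print(round(bmi(20.0, 1.05), 1))  # -> 18.1; a percentile lookup would follow in practice
```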
According to the U.S. Department of Health and Human Services (2013), the population of children at highest risk of developing obesity is children ages 2-4 (pre-school) in low-income families. When looking at this vulnerable population nationally, obesity rates are 11.9% in black and Asian children, 12.3% in white children, and 17.9% in Hispanic children, and the highest rate of obesity was found among American Indian/Alaskan Native children at 20.7% (U.S.
Department of Health and Human Services, 2013).
In a study conducted by Ogden, Caroll, Kit, & Flegal (2012), when comparing females ages 6-11, the rate of obesity for black girls was 1.7-2.8 times greater than the rate of obesity in white girls. In the male population (ages 6-11), the rate of obesity in black males was 1.3-1.8 times higher than the rate in white males. The sample used for this study comprised 4111 participants and was representative of the United States child and adolescent population at the time of the study. According to the authors, the main limitation of this study is the rather small sample size, which was based on two years of data collected by the National Health and Nutrition Examination Survey, limited to a short snapshot of the fifteen months during the children's early development.
Social, cultural, and economic environmental factors that impact obesity
When addressing disparities in childhood obesity it is important to understand not only the causes of obesity, but also social, cultural, and environmental factors that are especially extensive in ethnic minority and low-income populations which may amplify the causes of obesity. Research has shown that exposure to media and marketing is an example of a factor that may amplify the cause of obesity. Children between the ages of 8-18 years old in ethnic minorities have been found to have more entertainment media use than the majority of children.
Hispanic and black youth spend more time watching TV or movies and playing video games compared to white youth. Low-income children have also been found to watch more TV and have higher exposure to media compared to higher-income children (Kumanyika & Grier, 2006). This extensive media consumption exposes ethnic minority and low-income youth to a variety of food advertising that may have a strong impact on a child's food preference (Borzekowski & Robinson, 2001).
A recent study conducted by Powell, Wada, & Kumanyika (2014) studied the racial/ethnic and income disparities in children's exposure to televised food and beverages in the United States.
One of the findings from this study indicates that "… higher proportions of black population were associated with greater exposure to ads in all food categories…" and "… larger than average associations between the prevalence of child/adolescent black population… were found for sweets, beverage, snack and fast-food restaurant product categories for children … and beverages and sweets for adolescents…" (p. 126). According to Powell, Wada, & Kumanyika (2014), "The associations with exposure for both children and adolescents were significantly higher… for regular soda versus diet soda advertisements with higher proportions of black children/adolescents and lower median household income." (p. 128). These findings bring to light the challenge of highlighting the importance of lowering the consumption of fast food and other high-calorie food and beverages in this vulnerable population that has a high exposure to food and beverage TV ads. It highlights the need for a heightened effort to promote healthier food alternatives to unhealthy products.
Another factor found in ethnic minority and low-income populations that may amplify the causes of obesity is the reduced access to and availability of quality and affordable food products. African-American neighborhoods have more fast-food restaurants than white neighborhoods (Block, Scribner, & DeSalvo, 2004). Low-income neighborhoods have also been found to have fewer supermarkets than higher-income neighborhoods, which may limit residents' access to healthier food products (Morland, Wing, & Diez Roux, 2002). Minority families live in neighborhoods with a higher concentration of fast food options and fewer healthy food outlets, also known as "food deserts" (Barr, 2014).
Theoretical Framework
After review of the research studies and findings, it is apparent that there are still gaps in knowledge that must be filled to effectively fight the battle against childhood obesity. As nurse researchers it is important to highlight the importance of using nursing theory to guide one's research. Nursing theory not only serves to guide research but also provides insights regarding nursing practice. According to Meleis (2018), "Nursing theories have provided nurse researchers with new propositions for nursing research that could not have been as well articulated if theories from other disciplines were used…" (p. 36). Meleis (2018) also points out that theory guides research and practice because nursing theory provides nurses with the framework for the nursing process.
The author has chosen the Health Promotion Model as the theoretical framework that may be used to guide nursing research focused on finding solutions to slow the increase of childhood obesity in the United States today. According to Whittemore, Chao, Popick, & Grey (2013), "The Health Promotion Model classifies health behavior determinants into individual characteristics and experiences (i.e., prior related behaviors and personal factors) and behavior-specific cognitions and effect (i.e., perceived benefits and barriers, interpersonal influences, and situational influences." (p. 55).
The Health Promotion Model addresses not only personal factors but also behavior-specific influences such as interpersonal and situational influences. This is the reason the author deemed this theoretical framework the appropriate choice to guide nursing research addressing the prevalence of childhood obesity. This nursing theory also helps address a limitation shared by several of the studies discussed earlier, namely the need to investigate life circumstances, attitudes, and behaviors that may contribute to the prevalence of childhood obesity.
Conclusion
Knowledge gaps and future research

The prevalence of obesity in children and adolescents is very alarming and needs to be addressed because this health status, being overweight/obese, has a significant and unfavorable impact on not only the health of young Americans today, but also the future health of young Americans. The research studies discussed revealed that gender, race/ethnicity, and socioeconomic status play a significant role in the prevalence of childhood obesity (Kendzor, O Caughy, and Owen, 2012; Moss & Yeaton, 2011; Ogden, Caroll, Kit, & Flegal, 2012; Rogers et al., 2015). As evident from the findings and limitations of the research studies discussed above, there are still gaps in current knowledge of the prevalence of childhood obesity in racial/ethnic and low socioeconomic populations. Future research should examine the relationship between weight status and other variables such as social relationships, public policies, activity levels, and birth and maternal characteristics. Future research should also focus more critically on actionable targets for change that include factors specific to each household, neighborhood, community, and region.
When conducting future research studying this vulnerable population, it is important for the researcher to avoid bias. According to the Oxford Dictionary, bias is "an inclination or prejudice for or against one person or group, especially in a way considered to be unfair" (Lexicon Dictionaries, n.d.). It is important to first understand that bias exists in all research, regardless of the research design or stage of the research process. Although bias is difficult to eliminate, it is important to understand because "…bias impacts on the validity and reliability of study findings and misinterpretation of data can have important consequences for practice." (Smith & Noble, 2014, p.100).
Researchers must also take care to avoid broadly categorizing race and ethnicity without including racial and ethnic subpopulations. According to Barr (2014, p. 88), "In studying the health disparities that exist in our society and the potential means to reduce them, it is important to understand how the categories of race and ethnicity are used and what they mean." It is also critical that researchers strive to obtain the skills to be culturally competent. Betancourt and colleagues (2005, p. 499) put it beautifully: "the goal of cultural competence is to create a health care system and workforce that are capable of delivering the highest-quality care to every patient regardless of race, ethnicity, culture, or language proficiency." | 2020-01-02T21:49:43.122Z | 2019-12-18T00:00:00.000 | {
"year": 2019,
"sha1": "e69c1b9f45e0bfb535bd2f53f941f029236e6b76",
"oa_license": null,
"oa_url": "https://doi.org/10.35974/isc.v7i1.2027",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "8a8198d60cde91775badf151d220e0ec0f5f46f8",
"s2fieldsofstudy": [
"Medicine",
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
4812626 | pes2o/s2orc | v3-fos-license | The validity of parental reports on motor skills performance level in preschool children: a comparison with a standardized motor test
Motor skills are interrelated with essential domains of childhood such as cognitive and social development. Thus, the evaluation of motor skills and the identification of atypical or delayed motor development is crucial in pediatric practice (e.g., during well-child visits). Parental reports on motor skills may serve as possible indicators to decide whether further assessment of a child is necessary or not. We compared parental reports on fundamental motor skills performance level (e.g., hopping, throwing), based on questions frequently asked in pediatric practice, with a standardized motor test in 389 children (46.5% girls/53.5% boys, M age = 3.8 years, SD = 0.5, range 3.0–5.0 years) from the Swiss Preschoolers’ Health Study (SPLASHY). Motor skills were examined using the Zurich Neuromotor Assessment 3–5 (ZNA3–5), and parents filled in an online questionnaire on fundamental motor skills performance level. The results showed that the answers from the parental report correlated only weakly with the objectively assessed motor skills (r = .225, p < .001). Conclusion: Although a parental screening instrument for motor skills would be desirable, the parent’s report used in this study was not a valid indicator for children’s fundamental motor skills. Thus, we may recommend to objectively examine motor skills in clinical practice and not to exclusively rely on parental report. What is Known: • Early assessment of motor skills in preschool children is important because motor skills are essential for the engagement in social activities and the development of cognitive abilities. Atypical or delayed motor development can be an indicator for different developmental needs or disorders. • Pediatricians frequently ask parents about the motor competences of their child during well-child visits. What is New: • The parental report on fundamental motor skills performance level used in this study was not a reliable indicator for describing motor development in the preschool age. • Standardized examinations of motor skills are required to validly assess motor development in preschoolers.
Introduction
Motor skills are interrelated with a number of developmental domains such as cognition, perception, language, and social and physical development [1,2,6,7,9,22]. For example, Cameron et al. [5] reported that in 3-4-year-old children, motor skills correlated positively with performance in a Kindergarten achievement test, including language skills (e.g., reading, vocabulary, and phonological awareness), and mathematical problems. Michel et al. [19] found that 5-7-year-old children with impaired motor skills showed lower pre-academic skills and lower performance in inhibition tasks compared to children without motor impairments.
Furthermore, several studies showed that fundamental motor skills (FMS) are essential for the engagement in physical activities and to discover the environment [2,24]. FMS include locomotor (e.g., moving from place to place: walking, running, jumping, skipping, hopping, sliding, etc.) and object control skills (e.g., throwing, catching, kicking) [8,24]. Therefore, the competence in FMS is linked to health-related outcomes such as cardiorespiratory fitness, muscular strength, and body weight [16,21]. Stodden and co-workers stated that children who perceive their motor competence as low engage less in physical activity and, thus, bear a higher risk of becoming unfit and obese [24]. Both reduced physical activity and high body weight further promote low perception of motor competence which will eventually result in even lower motor competence [24]. As a result, children find themselves in a "negative spiral of disengagement" [24]. In fact, less engagement in physical activities can also affect the social interaction with peers negatively, especially in the preschool age, and may lead to social exclusion [1,23]. Smyth and Anderson [23] found that children with developmental coordination disorder (DCD) spent more time alone or were more watching other children play compared to children without motor difficulties. These authors discussed that children with DCD might be excluded first from physical and then from social games. Moreover, potential cooccurring difficulties (e.g., cognitive deficits, language impairment, etc.) might have an additional influence on the exclusion. However, actual causality remains open.
To avoid this negative spiral, it is important to assess motor performance early enough so that therapeutic intervention and support for the child may be introduced. Thus, the evaluation of FMS performance level in early childhood and the identification of atypical or delayed motor development is crucial in pediatric practice. In fact, pediatricians regularly assess FMS performance level during well-child visits by asking parents whether their child can already perform a certain task (e.g., climbing stairs, riding a bicycle, swimming) [3,11]. Parental reports are an attractive option for receiving information about the development of the child. They are time and cost effective, and easy to implement. Parents have knowledge of the unaffected behavior and the skills of their children, whereas in clinical practice motivation and cooperation of the child may lead to ambiguous evaluation. Although evidence exists that parents provide valid and reliable reports regarding early motor milestones during the first years of life [4,15,17], we do not know whether FMS performance level reported by parents during the preschool years ultimately reflect the child's performance in a standardized motor test. To our knowledge, there is no study examining parental reports on motor skills in typically developing preschool children (which was also stated in [20]). In pediatric practice, it would be beneficial to know whether questions on daily motor activities of the child correlate with motor skills measured by a standardized test. Questions about daily motor activities aim to identify indicators for motor skills performance level. So far, it has not been examined whether these questions deliver some additional information on motor development.
Thus, we constructed a 6-item questionnaire of FMS based on questions frequently asked in pediatric practice [3,11] and compared the answers with objectively measured FMS performance level using the Zurich Neuromotor Assessment 3-5 (ZNA3-5), a standardized test instrument with good psychometric properties. Our aim was to evaluate whether a parental report What is Known: • Early assessment of motor skills in preschool children is important because motor skills are essential for the engagement in social activities and the development of cognitive abilities. Atypical or delayed motor development can be an indicator for different developmental needs or disorders. • Pediatricians frequently ask parents about the motor competences of their child during well-child visits.
What is New: • The parental report on fundamental motor skills performance level used in this study was not a reliable indicator for describing motor development in the preschool age. • Standardized examinations of motor skills are required to validly assess motor development in preschoolers.
on FMS performance level observed in everyday activities can deliver valid data about the level of motor skills development in the preschool age as measured by a standardized test procedure.
Participants
Our analysis included 389 children between 3 and 5 years of age (181 girls/208 boys, M age = 3.8 years, SD = 0.5, range 3.0-5.0 years). The data presented here were collected within the Swiss Preschoolers' Health Study (SPLASHY) that investigated typically developing preschool children in 84 child care centers [18]. Originally, 476 children participated in the SPLASHY study. For this analysis, we excluded children below the age of 3 years and above the age of 5 years. From this sample (n = 417), 24 parents did not fill out the motor questionnaire. Out of the remaining 393 parents, 389 parents answered at least three items, so that a total parental report score could be calculated.
Fundamental motor skills were measured with static balance (standing on one leg) and dynamic balance (walking on a straight line, hopping on one leg, side-to-side jumping, and running). The instruction for static balance was "stand on your right/left leg as long as you can". Timing started when the child lifted one foot off the floor and stopped when the child touched the floor with the lifted foot, or shifted the foot of the standing leg more than 2 cm, or when the time limit of 30 s was reached. Instructions for the dynamic balance tasks were the following: (1) Walking on a straight line: the child was asked to walk on the cord by putting one foot in front of the other. The heel of the anterior foot had to touch the toes of the foot behind. A qualitative score was given from 0 to 4 (0 = Perfect performance, heel touching toes; 1 = Distance between the two feet, feet straight; 2 = Feet not straight and/or misses the line 1-3 times; 3 = Feet perpendicular and/or does not touch the line > 3 times; 4 = Not able to walk with both feet on the line), (2) Hopping on one leg: the child was asked to hop as many times as possible on one leg, next to the cord. The task was done for each leg, and two trials for each leg were given. A qualitative score was given from 0 to 4 (0 = Can hop on both legs more than 7 times; 1 = Can hop on only one leg more than 3 times; 2 = Can hop on both legs from 1 to 3 times; 3 = Can hop on only one leg from 1 to 3 times; 4 = Cannot hop on either leg), (3) Side-to-side jumping: the child was asked to stand beside the cord and to jump forth and back over the cord sideways while keeping the feet together. A qualitative score was given from 0 to 4 (0 = Perfect performance, very smooth jumping; 1 = Jumping is correct but not very smooth; 2 = Touchdown with two feet at the same time, jumping very stiff; 3 = Total body involvement, poor coordination in relation to the line direction; 4 = Jumping about but not in relation to the line) and (4) Running: the child had to run 20 m around the chairs (5 × 4 m). A qualitative score was given from 0 to 4 (0 = Rolling motion of feet with adjustment of upper body; 1 = Rolling motion of feet, stiff upper body; 2 = Running with partial rolling motion of feet; 3 = Running without any rolling motion of feet; 4 = Cannot run (no flight phase)). For the analyses, all ZNA3-5 performance was expressed as standard deviation scores (SDS) calculated from age- and sex-adjusted normative values. Positive values correspond to above-average performance and negative values to below-average performance.
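For orientation, the conversion of a raw task result into such an age- and sex-adjusted standard deviation score can be sketched in a few lines of Python; the normative means and standard deviations used below are invented placeholders, not the published ZNA3-5 norms.

# Minimal sketch: convert a raw ZNA task result into a standard deviation score (SDS).
# The normative values below are hypothetical placeholders, not the real ZNA3-5 norms.
NORMS = {
    # (task, sex, age_band): (mean, sd) for illustration only
    ("static_balance_s", "f", "3.0-3.5"): (8.0, 4.0),
    ("static_balance_s", "m", "3.0-3.5"): (7.5, 4.0),
}

def sds(task, sex, age_band, raw_value, higher_is_better=True):
    """Return the age- and sex-adjusted SDS for one raw task result."""
    mean, sd = NORMS[(task, sex, age_band)]
    z = (raw_value - mean) / sd
    # For scores where lower raw values mean better performance, flip the sign
    # so that positive SDS always corresponds to above-average performance.
    return z if higher_is_better else -z

print(sds("static_balance_s", "f", "3.0-3.5", 12.0))  # above-average balance time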
Parents filled in an online questionnaire (see abstract) containing questions about swimming, climbing stairs, hopping, riding, balancing, and throwing (Table 1). For each FMS item, the parents had to rate the stage of development. Responses were combined into three categories: 0, 1, and 2 (Table 1). A sum score for the parental FMS questionnaire (parental FMSQ) was calculated by taking the average score across the six items (if at least three items were answered) and multiplying it by the total number of items.
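The scoring rule for the parental FMSQ described above can be written out as a short sketch; the item names and example answers are illustrative, and missing answers are handled according to the stated rule of at least three answered items.

# Sketch of the parental FMSQ sum score: mean of the answered items (0/1/2 each),
# scaled up to all six items; requires at least three answered items.
N_ITEMS = 6

def fmsq_sum_score(answers):
    """answers: dict mapping item name -> 0, 1 or 2; unanswered items are None."""
    answered = [v for v in answers.values() if v is not None]
    if len(answered) < 3:
        return None  # too few answers, no total score is calculated
    return sum(answered) / len(answered) * N_ITEMS

example = {"swimming": 1, "stairs": 2, "hopping": 2, "riding": None,
           "balancing": 1, "throwing": 2}
print(fmsq_sum_score(example))  # mean of answered items (1.6) * 6 = 9.6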
Statistical analyses
Statistical analyses were performed using SPSS (IBM, SPSS; Version 22.0, Chicago, IL, USA). Descriptive statistics were calculated as means ± standard deviations for continuous variables and percentages for categorical variables. The main outcome variables, the ZNA scores and the parental FMSQ sum score, were normally distributed. For the parental FMSQ, sex effects were tested with the Mann-Whitney U test and age effects with Spearman's rank order correlations. Corresponding effect sizes were calculated. SDS scores for the ZNA were sex- and age-adjusted, and therefore these effects were not examined further. The relationship between the ZNA outcomes and the parental FMSQ outcome was investigated using partial correlation, with age and sex as control variables. Furthermore, the sample was divided into three tertiles by age to test whether the parental report delivers reliable information for all age groups in the preschool age: first tertile n = 129, M = 3.3 years, range 3.0-3.5; second tertile n = 130, M = 3.8 years, range 3.5-4.1; and third tertile n = 130, M = 4.4 years, range 4.1-5.0. Partial correlations were compared with Spearman's rank order correlations, which are more adequate for ordinal variables but do not allow the inclusion of control variables. Correlations from both analyses were very similar in magnitude and significance level (Table 2). Therefore, only partial correlations controlled for age and sex are discussed.
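The partial correlation used here, i.e., the association between a ZNA score and the parental FMSQ score with age and sex held constant, can be reproduced with a residual-based sketch; the simulated data and variable names are illustrative only.

# Partial correlation of x and y controlling for covariates (age, sex),
# computed as the Pearson correlation of the least-squares residuals.
import numpy as np
from scipy import stats

def partial_corr(x, y, covars):
    """x, y: 1-D arrays; covars: 2-D array of shape (n_samples, n_covariates)."""
    X = np.column_stack([np.ones(len(x)), covars])   # design matrix with intercept
    res_x = x - X @ np.linalg.lstsq(X, x, rcond=None)[0]
    res_y = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    return stats.pearsonr(res_x, res_y)              # (r, p-value)

rng = np.random.default_rng(0)
age = rng.uniform(3.0, 5.0, 389)
sex = rng.integers(0, 2, 389)
zna = 0.3 * age + rng.normal(size=389)
fmsq = 0.5 * age + rng.normal(size=389)
print(partial_corr(zna, fmsq, np.column_stack([age, sex])))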
Results
Parental FMSQ scores ranged from 3 to 12 with a median sum score of 8.00 (SD = 1.80) (Fig. 1). Frequencies of each answer category per item are shown in Table 1. There was no sex difference in the sum score of the parental FMSQ (p = .31), while we found small sex differences for the items riding, U = 16,273.0, p < .05 (effect size r = .14), and throwing, U = 13,781.0, p < .05 (effect size r = .12), with boys showing a higher score on both items. Furthermore, there was a strong age effect, r = .506, p < .001; older children scored higher than younger children. The internal consistency of the six FMSQ items was expressed by a Cronbach's alpha of .50. The questionnaire was mainly filled out by mothers (84.3%; in 14.7% of cases exclusively by the fathers). We compared the children included in the analyses with the 24 excluded children without motor questionnaire data; they did not differ in age, sex, SES, or the tested ZNA tasks (p > .05). ZNA scores were age- and sex-adjusted; therefore, no corresponding effects can be reported.
Overall, the parental FMSQ correlated weakly to moderately with the ZNA total dynamic balance tasks, r = .225, p < .001 and weakly with static balance r = .137, p < .05. The FMSQ item jump revealed the strongest correlations with ZNA outcomes (Table 2); significant correlations were found between jumping and walking on a straight line, hopping on one leg, and total dynamic balance (r = .158-.228) ( Table 2).
The three items stairs, ride, and balance correlated with several tasks from the ZNA, while the items swim and throw did not correlate with any task from the ZNA.
The same partial correlations between ZNA motor tasks and the FMSQ items were performed for the three age groups. Correlations between the parental FMSQ sum and ZNA total dynamic balance were nearly the same in all age groups (r = .196-.284, p < .05). As in the overall analysis, the item jump from the FMSQ correlated most frequently and most strongly with ZNA tasks in all age groups (r = .220-.325), while swim and throw were not correlated with any of the ZNA tasks. Differences occurred between the three age groups, but no systematic differences in the number or magnitude of significant correlations were observed. Correlations significant in only a single age group were the following: only in the youngest group, static balance (ZNA) was correlated with the FMSQ sum (r = .307, p = .003) and stairs (r = .276, p = .009), and only in the middle group, the item stairs (FMSQ) and walking on a straight line (ZNA) were correlated (r = .232, p = .020). The item balance (FMSQ) and running (ZNA) were correlated in the first and second group (r = .290/.265, p < .05).
Discussion
Fig. 1 Frequency distribution of the parental report sum score

The findings of this analysis of the SPLASHY data showed that the rating of FMS performance level by parents correlated weakly to moderately with standardized measured FMS performance level in the preschool age. Out of the six questioned motor skills, four items (climbing stairs, jumping, riding, and balancing) correlated weakly with measured motor skills. Swimming and throwing did not correlate with any motor tasks from the ZNA.
Climbing stairs, jumping, and riding from the FMSQ correlated weakly with measured total dynamic balance and with single ZNA tasks (static balance, walking on a straight line, and hopping on one leg). The item jump from the FMSQ correlated slightly more strongly with ZNA outcomes than climbing stairs and riding; still, the correlation found between jump and the corresponding ZNA task hopping on one leg was weak to moderate. No correlations were found between FMSQ items and side-to-side jumping. Balance from the FMSQ was correlated only with running. This is surprising because the performed ZNA tasks walking on a straight line, side-to-side jumping, and hopping on one leg substantially include balancing skills, even more than running does. Another unexpected result was that static balance, measured separately, also did not correlate with balance from the FMSQ. A reason might be that 33% of the parents did not know whether their child can balance, so fewer children were included for the item balance, which can result in a power problem. However, the correlation coefficients were below .10, so there was essentially no association.
The items swim and throw did not correlate with any task from the ZNA. The report on swimming might be influenced more by the environment, such as the opportunity to learn swimming, than by the actual motor competence. The ZNA did not include object control, so it was not expected that throwing would correlate highly with the other FMS. The analysis separated by age group revealed some weak to moderate significant correlations but altogether confirmed the weak association between the FMSQ and the ZNA.
The internal consistency of the FMSQ was rather low, indicating that the single items may not measure a unique construct. Given the diversity of the items asked, this finding was expected. As we also examined and reported results of single items, the low internal consistency is not a strong limitation of the study. An explanation for the generally weak correlations could be that the variability within the items was sometimes too small; for example, only 0.5% of parents reported that their child could not climb stairs, while over 90% could climb the stairs without holding the banister. It could also be that parents do not provide valid data on children's FMS performance level during the preschool years. Other studies have shown that parental reports on motor milestones in the first 2 years are a valid marker of motor development of infants [4,15], indicating that parents deliver valid data about the motor competence of their child. The current study shows that this may not be the case as children grow older. The parental report may also not be valid because parents may not have had the opportunity to observe the questioned FMS if they do not spend much time with their children or spend time doing activities for which no FMS are needed. However, only for the item balancing did parents report not knowing whether their child can balance. Further, certain items such as ride or swim can be related to not having had much opportunity to swim or ride rather than being an indicator of the motor skill level. The low correlations between the questionnaire items and the ZNA outcomes may be explained by our sample, which included only typically developing children. There is evidence that parental reports in clinical populations are more valid [20]. Miller et al. [20] reported, in a sample of 2-year-olds with developmental disorders (e.g., autism, global developmental delay, developmental language disorder), that parental reports on language and fine motor skills did not differ significantly from the measured skills. Finally, the asked and tested motor skills were possibly too different in their nature. Although all skills are indicators of gross motor competence, the asked items are more complex motor skills, while the tested skills are more basic motor skills. In this context, it has to be mentioned that the ZNA3-5 primarily measures motor abilities, which, to a large extent, cannot be practiced and are not dependent on the environment [10]. In fact, a motor test focusing more on skills may correlate more highly with the parental report presented in this study.
Some limitations of the study need to be mentioned. For instance, the variability within certain items was small (e.g., for the item balance, 33% of the parents reported not to know the level of performance). The internal consistency of the FMSQ was rather low. Moreover, we did not ask whether the child had had the opportunity to practice all the tasks. However, the percentage of children not able to swim or ride a bike was in line with their age.
In sum, the parental report presented in this study did not provide valid data on motor development as tested by the ZNA3-5 in preschoolers. A parental report may become a valid instrument if the items are further adapted: the items should not be strongly dependent on the environment of the child (e.g., the opportunity to swim) and should better differentiate between children with varying motor skills within the same age group (e.g., more categories per item). However, whether parental questions really allow a valid description of motor development and identification of children with delayed motor development remains unclear. Thus, we conclude that the evaluation of FMS performance level in healthy preschool children by their parents may not replace an objective examination of motor skills with standardized instruments. A parental report may be considered as a screening instrument in combination with an objective examination. Given the importance of motor development, owing to its interrelatedness with other developmental domains and social interactions, efforts to facilitate the best possible assessment of motor development should be pursued. | 2018-04-03T00:00:38.028Z | 2018-02-09T00:00:00.000 | {
"year": 2018,
"sha1": "3764fbe1f4165f0c535be44e4e8cab93173efdda",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00431-017-3078-6.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "f93060fce4b9e5bbddfd252d305b7589f813cc06",
"s2fieldsofstudy": [
"Education",
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
254089120 | pes2o/s2orc | v3-fos-license | Kinetin induces cell death in root cortex cells of Vicia faba ssp. minor seedlings
The double fluorescence staining with acridine orange and ethidium bromide (AO/EB) revealed that treatment of Vicia faba ssp. minor seedlings with kinetin induced programmed cell death (PCD) in root cortex cells. Kinetin-induced cell death, reflected by morphological changes of nuclei including their invagination, volume increase, chromatin condensation and degradation as well as formation of micronuclei shown by AO/EB and 4,6-diamidino-2-phenylindol staining, was accompanied by an increase in the conductivity of cell electrolytes secreted to the culture media, a decrease in the number of G1- and G2-phase cells and the appearance of a fraction of hypoploid cells as the effect of DNA degradation without ladder formation. A decrease in the number of mitochondria and in the activity of cellular dehydrogenases, production of reactive oxygen species (ROS), the appearance of small and then large lytic vacuoles and an increase in the amount of cytosolic calcium ions were also observed. The PCD was also manifested by increased width and weight of the apical fragments of roots as well as decreased length of cortex cells, which led to shortening of the whole roots. The kinetin-induced PCD process was almost completely inhibited by adenine, an inhibitor of phosphoribosyl transferase, and mannitol, an inhibitor of ROS production. These cell-death hallmarks and the pathway of this process suggested that the induced, kinetin-specific vacuolar type of death, which expressed itself with similar intensity on both the morphological and metabolic levels, was a transient process protecting whole roots and whole seedlings against elimination.
Introduction
Development of organisms depends on many physiological processes, including cell division and programmed cell death (PCD), which control proper growth of multicellular (van Doorn and Woltering 2005;Delaval and Birnbaum 2007;Hübner et al. 2009) as well as unicellular organisms (Shemarova 2010). Cell division, elevating the number of cells (Delaval and Birnbaum 2007) as well as PCD, eliminating physiologically redundant, damaged or abnormal cells (van Doorn and Woltering 2005) plays an important role in the wide range of differentiation processes (Barciszewski et al. 2007).
In animals, cell death proceeds through apoptosis, micro- and macroautophagy, non-lysosomal cell death and necrosis (van Doorn and Woltering 2005) as well as via mitotic catastrophe (Hübner et al. 2009; McCall 2010). In plants, cells can die via a vacuolar and a necrotic as well as a mixed type of death, but not via apoptosis. However, some of the morphological and metabolic features of animal and plant cell death, regardless of its type, are similar (van Doorn and Woltering 2005; Collazo et al. 2006; Jan et al. 2008; van Doorn et al. 2011).
PCD can be induced via internal as well as external signals, including environmental cues (van Doorn and Woltering 2005) as well as many chemical agents (Rao et al. 2008). Plants produce numerous substances (Taraphdar et al. 2001; Rao et al. 2008) which are widely studied with respect to anticancer therapy (Taraphdar et al. 2001; Doležal et al. 2007; Rao et al. 2008). These include phenols and phenolic acids, polyphenolic flavonoids, sugars, glycoproteins, lignins, alkaloids (Taraphdar et al. 2001; Rao et al. 2008) as well as cytokinins, known plant and animal growth regulators (Barciszewski et al. 2007), which can induce PCD in human and animal as well as in plant cells (Carimi et al. 2003; Choi et al. 2008; Doležal et al. 2007; Mlejnek et al. 2003). In humans and animals, kinetin riboside, isopentenyladenosine and benzylaminopurine riboside inhibit growth and promote apoptosis prior to the cell differentiation process (Ishii et al. 2002). Kinetin ribosides induce apoptosis and suppress HeLa and mouse melanoma (B16F-10) cell growth through the classical mitochondrion-dependent pathway, including disruption of the mitochondrial membrane potential, release of cytochrome c, activation of caspase-3 and up- and down-regulation of Bcl-2 and Bad proteins. However, human skin fibroblast CCL-116 and bovine primary fibroblast cells are resistant; thus, no significant changes in Bad or Bcl-XL and no cleavage of PARP were observed (Choi et al. 2008). It was demonstrated that kinetin ribosides showed very strong cytotoxic activity against various cancer cell lines and non-cytotoxic activity towards the normal murine fibroblast cell line (NIH/3T3; Doležal et al. 2007). It is worth noting that kinetin ribosides are more effective anticancer agents than other cytokinins (Griffaut et al. 2004; Doležal et al. 2007).
It was reported that the naturally occurring plant cytokinins kinetin and zeatin or 6-benzylaminopurine (BAP) did not trigger tumour death. They are not active against human M4 Beu and murine B16 melanoma cells (Griffaut et al. 2004), myeloid leukaemia HL-60 cells or human epidermal keratinocytes (Ishii et al. 2002; Berge et al. 2006). However, zeatin and BAP can induce PCD in plants (Carimi et al. 2003). BAP, at 13- and 27-μM concentrations, induced PCD in carrot (Daucus carota L.) and Arabidopsis thaliana (L.) Heynh cell cultures, respectively, accelerating senescence of leaves and causing their yellowing with PCD hallmarks including chromatin condensation, oligonucleosomal DNA degradation (laddering), cytochrome c release and inhibition of cell proliferation (Carimi et al. 2003). BAP induced PCD in cells of the epidermal and sub-epidermal layers in cotyledons of Lycopersicon esculentum and Solanum aviculare (Gahan et al. 2003), and its hallmarks were similar to those observed during apoptosis in mammalian, insect and nematode species (Gahan et al. 2003). BAP can also inhibit the PCD process. Such an inhibitory effect was observed in Nicotiana suaveolens × Nicotiana tabacum hybrid cells at high levels (0.8, 4.0 or 20 mM) of BAP. However, 0.04 μM of BAP at 28°C induced changes similar to apoptosis, suppressing the percentage of dead cells and extending nuclear fragmentation. In the hybrid cells, at higher levels of BAP, positive terminal deoxynucleotidyl transferase-mediated dUTP nick end labeling (TUNEL) signals and accumulation of formazan, indicating production of reactive oxygen species (ROS), were detected less frequently than at its lower levels (Kobori et al. 2007). However, application of the TUNEL method to study cell death would not be an unequivocal test because it shows DNA breaks which are not necessarily related to the studied processes (Kobori et al. 2007).
Kinetin, naturally occurring in humans, animals and plants (Barciszewski et al. 2007), which does not induce cell death in human and animal cells (Berge et al. 2006; Ishii et al. 2002), has not been studied in plants so far. Fluorescence staining with acridine orange/ethidium bromide (AO/EB), which allows the level of cell death to be expressed as a cell death index, together with 4,6-diamidino-2-phenylindol (DAPI) staining showed morphological changes in nuclei and nuclear chromatin, indicating that kinetin acted as an inducer of programmed death in root cortex parenchyma cells of Vicia faba ssp. minor seedlings. The kinetin-induced PCD process, accompanied by changes in the number of cells in the G1 and G2 phases of the cell cycle, in the activity of cellular dehydrogenases, in ROS production, in the amount of cytosolic calcium ions, in the conductivity of cell electrolytes secreted from roots to the culture media and in the morphology of cells and roots, was almost completely inhibited by adenine, an inhibitor of phosphoribosyl transferase (Mlejnek and Doležel 2005), and mannitol, a ROS scavenger (Jennings et al. 1998).
Plant material, treatment and analyses
Roots of 3-day-old V. faba ssp. minor seedlings treated with the respective agents were used in the studies, which were carried out to show the most important hallmarks of PCD induced by a 46.0 μM concentration of kinetin (Sigma) and the mechanism of its induction, using adenine (50 μM; Sigma) and mannitol (50 μM; POCH) with or without kinetin.
To show hallmarks of kinetin-induced PCD, (1) length of seedling roots, (2) weight and (3) width of 2-cm long apical fragments, (4) conductivity in the culture media using the conductivity meter (Elmetron, Poland) as well as (5) cell lengths were measured. Moreover, (6) estimation of DNA content and the number of the cells in phases of the cell division cycle after DAPI staining, (7) measurement of the activity of cellular dehydrogenases with 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT; Sigma), (8) determination of the distribution and the amount of calcium ions after chlortetracycline (CTC; Sigma) staining, (9) detection of ROS with nitroblue tetrazolium (NBT; Sigma) and (10) the effect of adenine and mannitol on kinetin-induced cell death were carried out. To detect cell death, (11) the staining with acridine orange (Sigma) and ethidium bromide (Serva) was carried out while (12) morphology of nuclei and nuclear chromatin was examined after AO/EB and DAPI (Sigma) staining. Some of the analyses (6, 7, 11 and 12) were done in two zones of roots (Fig. 1a, b; zone I containing meristem cells and zone II containing growing and differentiating cells).
In planta cell death estimation
Non-fixed, 2-cm-long apical fragments of roots treated with kinetin for 48, 72 and 96 h were cut from the seedlings, washed two times with 0.01 M sodium phosphate buffer at pH 7.4 (PHB) and then stained with a mixture of 100 μg/ml AO and 100 μg/ml EB in PHB for 5 min. Then, the fragments were washed two times with PHB and fixed with 1 % glutaraldehyde (Merck) in PHB for 15 min, cut along their long axes into very thin sections, washed three times with PHB, observed and photographed using a fluorescence microscope (Byczkowska et al. 2012). The changing colour of chromatin, from green to red, allowed living, dying and dead cells to be distinguished after measurement of the resultant fluorescence intensity (RFI) of green AO, which migrates into the nuclei through a cell membrane that has not changed its permeability or integrity, and red EB, which permeates into dying or dead cells in which cell membrane permeability or integrity has changed (Ribble et al. 2005; Kobori et al. 2007).
The RFI values increased with the changing colour from green to red (Byczkowska et al. 2012). This staining also showed changes in the morphology of nuclei and nuclear chromatin. About 350-400 nuclei in each of three experimental series were analysed.
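A minimal sketch of the classification step is given below; the RFI scale is assumed to be normalised between 0 and 1, and the cut-off values are hypothetical placeholders rather than the calibrated thresholds of Byczkowska et al. (2012).

# Sketch: classify nuclei as living, dying (early/late PCD) or dead from the
# resultant fluorescence intensity (RFI) of AO/EB staining. RFI rises as red EB
# progressively masks green AO. The cut-offs are hypothetical placeholders.
def classify_nucleus(rfi, low=0.3, mid=0.6, high=0.85):
    if rfi < low:
        return "living"            # green chromatin, intact membrane
    if rfi < mid:
        return "early PCD"         # green-yellow, partly condensed chromatin
    if rfi < high:
        return "late PCD"          # dark yellow/orange, degraded chromatin
    return "dead"                  # orange/red nucleus

rfis = [0.12, 0.45, 0.71, 0.93]
print([classify_nucleus(r) for r in rfis])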
DAPI staining and determination of DNA content
The 2-cm-long apical fragments of roots of the untreated or 72-h kinetin-treated seedlings were fixed in cold Carnoy's fixative (96 % ethanol and glacial acetic acid; 3:1) for 1 h, washed with 96 and 70 % ethanol and hydrated. Then, these fragments were stained with DAPI according to the following procedure: 5-min pretreatment with 0.2 M citric acid and 0.1 % Tween; 5-min staining with DAPI (2 μg/ml) together with 0.1 M Na2HPO4 and 0.2 M citric acid in a 9:1 ratio; 5-min washing with the mixture of Na2HPO4 and citric acid (Hotz et al. 1992). After this procedure, the roots were cut along their long axes into thin sections, washed three times with PHB, analysed and photographed under the UV light of a fluorescence microscope (Kaźmierczak 2010). The microphotographs were used for measurement of DNA content with the Scn Image software. The DNA content of about 450-550 nuclei in each of three experimental series was used to prepare the histograms and to determine the number of cells at particular stages of the cell division cycle. The microphotographs were also used to present the morphological changes in nuclei and nuclear chromatin of dying cells.
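The assignment of nuclei to cell cycle classes from integrated DAPI fluorescence can be sketched as follows; the 2C calibration value (taken here from the position of the G1 peak around 9 a.u. reported for Fig. 6) and the ±20 % peak windows are assumptions made only for illustration.

# Sketch: bin nuclei into hypoploid (<2C), G1 (~2C), S (between 2C and 4C),
# G2 (~4C) and endoreplicated (>4C) classes from integrated DAPI fluorescence.
# The 2C calibration value and the +/-20 % peak windows are illustrative assumptions.
def phase(dna_au, c2=9.0, window=0.20):
    c4 = 2 * c2
    if dna_au < c2 * (1 - window):
        return "hypoploid (<2C)"
    if dna_au <= c2 * (1 + window):
        return "G1 (2C)"
    if dna_au < c4 * (1 - window):
        return "S"
    if dna_au <= c4 * (1 + window):
        return "G2 (4C)"
    return "endoreplicated (>4C)"

values = [5.5, 9.2, 13.0, 18.5, 30.0]
print({v: phase(v) for v in values})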
Determination of cellular dehydrogenase activity

The activity of dehydrogenases secreted from 2-cm-long apical parts of roots of the untreated and 72-h kinetin-treated seedlings was measured with MTT. Fragments of roots, divided into zone I and zone II (Fig. 1a, b), were washed with 0.01 M sodium phosphate saline buffer (PBS) and then placed in 0.9 ml of PBS buffer with 100 μl of 0.5 mg/ml MTT for 2 h. Next, 375 μl of the reaction mixture was mixed with 1,125 μl of acidified isopropanol, and the absorbance at 570 nm was measured spectrophotometrically (JenaMed). Dehydrogenase activity, expressed in U (units), was calculated as the amount of blue formazan produced per 1 min per 1 g of fresh weight of the 2-cm-long apical part of roots.
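The unit calculation can be sketched as below; converting the absorbance into nanomoles of formazan through a standard-curve slope is an assumption added for illustration, and the slope value itself is hypothetical.

# Sketch: express MTT-reducing dehydrogenase activity in units (U) as the amount
# of formazan produced per minute per gram of root fresh weight. The standard-curve
# slope (absorbance per nmol formazan) is a hypothetical calibration value.
def dehydrogenase_activity(a570, blank, fresh_weight_g, incubation_min=120,
                           slope_abs_per_nmol=0.005):
    formazan_nmol = (a570 - blank) / slope_abs_per_nmol
    return formazan_nmol / incubation_min / fresh_weight_g  # U = nmol min^-1 g^-1

print(dehydrogenase_activity(a570=0.62, blank=0.05, fresh_weight_g=0.057))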
Determination of ROS production, analyses of the amount and distribution of calcium ions and vacuole detection
Reactive oxygen species were determined in the two root zones (Fig. 1a, b) with 0.05 % NBT in PHB. The 2-cm-long living apical fragments of roots of the untreated and 48-, 72- and 96-h kinetin-treated seedlings were washed with PHB and stained in the dark for 1 h. Next, these fragments were fixed with 2.5 % glutaraldehyde in PHB for 15 min, then they were washed two times with PHB, analysed and photographed under the white light of the microscope.
Calcium ion determinations were carried out in the apical part of roots of the untreated and 48-, 72- and 96-h kinetin-treated seedlings fixed with 2.5 % glutaraldehyde in PHB for 10 min. The plant material was washed three times for 5 min with 50 mM Tris-HCl pH 7.45 buffer (THB), stained for 5 min with 100 μM CTC in THB, washed three times for 2 min with THB, analysed and photographed under the B2A filter of a fluorescence microscope. The microphotographs were used to analyse the distribution and amount of calcium ions by measurement of CTC fluorescence intensity with the Scn Image software. Vacuoles were examined in the roots of seedlings which were untreated or treated with kinetin for 72 h and for 7 days, under the white light microscope and under the fluorescence microscope after staining with the AO mixture, respectively. Their membrane nature was confirmed under the phase-contrast microscope.
DNA extraction and separation

DNA from the 2-cm-long apical parts of roots (without meristems) of the untreated and 72-h kinetin-treated seedlings was extracted on ice with 2 % SDS, 0.5 M NaCl, 100 mM Tris-HCl and 50 mM EDTA at pH 8.0. DNA isolation and electrophoresis after RNA digestion with RNase A were carried out at 100 V for 3 h on a 1.5 % (w/v) agarose gel with 0.50 μg/ml ethidium bromide according to Byczkowska et al. (2012).
Detection of cell death and estimation of the number of dying and dead cells induced by kinetin
Double-coloured staining of nuclei with AO and EB, based on their diverse abilities to permeate the cell membrane, allowed the detection of cell death: AO permeated intact cells and emitted green fluorescence as a result of intercalation into double-stranded DNA, while increasing changes in the cell membrane induced by kinetin allowed EB to intercalate into nuclear DNA and, with its red colour, gradually mask the green colour of AO. Thus, the computerised measurement of the increasing RFI values of both fluorochromes allowed the numbers of living, dying and dead cells to be counted.
The results showed that kinetin induced cell death only in the mid-cortex parenchyma cells of zone II (Fig. 1b; Fig. 2b) but not in the meristem cells of zone I of the roots (Figs. 1a, 2a). In root zone I of plants untreated or treated with kinetin for 48, 72 and 96 h, cells were alive (Fig. 2a) with unchanged green nuclei (Fig. 2c). In zone II of roots of untreated plants after 72- and 96-h culture, some of the cortex cells were dead; however, their number did not exceed about 4 % at 72 h (Fig. 3a). Kinetin treatment induced cell death in root zone II. Dying cells accounted for 40 % (Fig. 3b) and 51 % (Fig. 3c) of the total number of cells after 48- and 72-h exposition, respectively, and most of them (about 35 and 45 %, respectively) were at the early stage of PCD (Fig. 3 (b', c')) with green-yellow and partly condensed nuclear chromatin (Fig. 2d, e). The rest of them (about 5 and 6 %, respectively) were dark yellow and bright orange with degraded nuclear chromatin (Fig. 2f, g), indicating that they were at the late stage of PCD (Fig. 3 (c')). After 96-h treatment with kinetin, the number of dying cells in the root cortex lowered to 32 % of the total cell number (Fig. 3d). About 55 % of them were at the early while 45 % were at the late stage of PCD (Fig. 3 (d')). After 72-h treatment, dead cells appeared. Their number was 11 %, but after 96-h treatment, it decreased to about 4 % (Fig. 3c, d). The nuclei of these cells had structurally normal dark orange or bright red nuclear chromatin (Fig. 2h). The differences between the numbers of indicated cells were statistically significant (0.01 < p < 0.05).
Effect of kinetin on morphology of nuclei and nuclear chromatin and DNA degradation during death

The AO/EB as well as DAPI staining allowed the observation of morphological changes in nuclei and nuclear chromatin after 48-, 72- and 96-h treatment with kinetin. The normal structure of nuclei of living cells (Figs. 2c, 4a) changed during kinetin-induced cell death. The results showed increasing condensation of nuclear chromatin (Figs. 2d-g, 4b), formation of micronuclei (Fig. 4c), invagination (Fig. 4d), degradation of chromatin (Fig. 4e) and nuclei fragmentation (Fig. 4g). The 96-h treatment with kinetin showed increasing nuclei degradation (Fig. 4f) and nuclei fragmentation (Fig. 4h).
However, the agarose gel electrophoresis of DNA extracted from the roots of seedlings treated for 72 h with kinetin showed that this degradation was not internucleosomal and only a "small smear" was visible (Fig. 5), while the profile area of the unornamented nuclei, in dying cells, increased by about 40 %.
Kinetin changed the number of cells in particular phases of the cell division cycle

DAPI-microcytophotometric analysis allowed the estimation of DNA content for assigning cell nuclei to an appropriate phase of the cell cycle. It was shown that in the control plants cultured for 72 h, in zone I of roots, cells in the G1, S and G2 phases constituted about 53, 31 and 16 %, respectively (Fig. 6a (a')), while the numbers of G1- and S-phase cells decreased by about 15 % and 9 %, respectively, and the number of G2-phase cells increased by 4 % in zone II (Fig. 6b (b'); p < 0.05). Moreover, there were approximately 20 % of endoreplicated cells with nuclei containing more than 4C DNA (Fig. 6b (b')). The 72-h treatment with kinetin significantly affected the DNA content in the seedling root cells. In zone I, the number of G1-phase cells increased to about 75 % and the numbers of the other two types of cells decreased (Fig. 6c (c'); p < 0.01). In zone II of the kinetin-treated roots, the number of G1-phase cells was lowered by about 45 % (p < 0.01) in comparison with the kinetin-treated zone I and by about 8 % (p < 0.05) in comparison with zone II of untreated roots; the number of S-phase cells did not change, while that of G2-phase cells decreased by about 3 % (Fig. 6d (d'); p < 0.05). The loss of G1- and G2-phase cells was replaced by 10 % of cells containing less than 2C DNA, forming the hypoploid fraction (Fig. 6d (d')).

Fig. 3 The number of living, PCD-dying and dead cells in the root cortex of V. faba ssp. minor seedlings untreated (a) and treated with kinetin for 48, 72 and 96 h (b-d), as well as of dying cells at the early and late stage of PCD (b'-d'), and of those treated for 72 h with adenine (e), adenine with kinetin (f), mannitol (g) and mannitol with kinetin (h). Error bars represent the SE of the mean of three independent experiments

Cellular dehydrogenase activity and conductivity during kinetin-induced cell death

MTT determination is a useful colorimetric method to estimate the number of human, animal and fungal cells in suspensions (Mosmann 1983; Freimoser et al. 1999). According to this method, the number of viable cells has a linear relationship with the absorbance of the MTT converted by dehydrogenases to formazan. The presented studies showed that the activity of dehydrogenases secreted from zone I of roots of the 72-h kinetin-treated seedlings did not change (p > 0.05); however, that in zone II decreased to about 40 % (p < 0.01; Fig. 7a, b) in comparison to the control series. The activity of dehydrogenases after removal of kinetin from the 72-h culture was at the same level (data not shown). The lowered level of dehydrogenase activity was accompanied by a significantly (p < 0.01) decreased (by about 54 %) number of mitochondria counted in the profile area of cortex cells from root zone II of 72-h kinetin-treated seedlings (Fig. 7 (a1, a2)). Kinetin-induced cell death was also manifested by an increment in the conductivity of cell electrolytes secreted by the whole roots. In 72-h treated roots, its value increased from 11.5 ± 1.56 to 20.15 ± 3.25 μS (p < 0.01), while in 96-h treated roots, it decreased to 13.75 ± 2.20 μS (p < 0.05).
Effect of adenine and mannitol on the number of kinetin-induced dying cells
Seedlings treated for 72 h with 50 μM adenine, the inhibitor of phosphoribosyl transferase, without or with kinetin showed that more than 95 % of cells of V. faba ssp. minor seedling roots were alive. Adenine induced death only in about 1 % of cells from the zone II of roots (Fig. 3c), while in the series with kinetin, about 4 % of cells were dying and about 1 % were dead in this zone (Fig. 3d). The differences between numbers of dying and dead cells in these series were not statistically significant (p>0.05). Application of 50 μM mannitol induced cell death in about 2.5 % of cortex cells from the zone II of root of 72-h treated seedlings ( Fig. 3e; 1.5 % were dying and 1 % were dead). Mannitol with kinetin increased the number of dying cells by about 1 % after 72-h treatment with kinetin (Fig. 3f). Their number was similar (1-2 %) after 48-and 96-h treatment with kinetin and mannitol. Differences between values were not statistically significant (p > 0.05). Moreover, mannitol inhibited ROS production in kinetin-treated roots (Fig. 8b (b')) and the microscopic pictures of root cortex cells were similar to the untreated series of V. faba ssp. minor seedlings (Fig. 8a). The number of ROS-producing cells estimated per microscope area in the roots treated with kinetin and mannitol for 48 h was 1 %, then it increased to 10 % (72 h) and decreased to 2 % (96 h).
Kinetin induced vacuole formation, increased the amount of cytosolic calcium ions, affected cell and root lengths as well as increased weight and width of roots
In the root cortex parenchyma cells of V. faba ssp. minor seedlings of the kinetin-treated series, the presence of vacuolar structures was shown under the white light microscope (Fig. 8d; not observed in the untreated series, Fig. 8c), and their membrane nature was confirmed with the phase-contrast microscope (Fig. 8 (d')). These vacuoles underwent fusion, forming large vacuoles (Fig. 8e) with acidic pH (green colour after AO staining) in root cells of seedlings treated with kinetin for 7 days.
The intensity of CTC fluorescence staining, reflecting the level of cytosolic calcium ions, increased by 10, 30 and 14 % in root cells of V. faba ssp. minor seedlings treated with kinetin for 48, 72 and 96 h, respectively (Fig. 8i). However, only the 72-h treatment resulted in statistically significant changes (p < 0.05; Fig. 8i).

Fig. 6 Histograms (a-d) displaying the percentage frequency distribution of cells (a'-d') in the G1 (peak around 9 a.u.), G2 (peak around 19 a.u.), S (represented as the gap between the G1 and G2 peaks) and endoreplication (>4C DNA) phases and the hypoploid fraction (<2C, indicated by an arrow), microcytophotometrically determined in zone I (a, a', c, c') and zone II (b, b', d, d') of untreated (a, a', b, b') and kinetin-treated (c, c', d, d') V. faba ssp. minor seedling roots. Error bars (a'-d') represent the SE of the mean of three independent experiments

Fig. 7 The number of mitochondria in zone II mid-part cortex cells of untreated (a1) and kinetin-treated roots (a2) and the activity of cellular dehydrogenases secreted (b) from zones I and II of untreated and kinetin-treated V. faba ssp. minor seedling roots. Scale bars are 50 μm. Error bars represent the SE of the mean of three independent experiments

Fig. 8 Microphotographs of ROS production (a, b, b'; dark arrows), formation of small (c, d, d'; dark arrows) and large (e) lytic vacuoles, calcium ion distribution (f, g; white arrows) and its amount (h) expressed in arbitrary units (a.u.) of fluorescence intensity in zone II of untreated (a, c, f, h) and kinetin-treated (b, b', d, d', e, g, h) V. faba ssp. minor seedling roots. Scale bar in a is 100 μm, in b is 50 μm, in b', c, d, e, f and g is 10 μm and in d' is 20 μm. Error bars in panel i represent the SE of the mean of three independent experiments

After 72-h incubation of V. faba ssp. minor seedlings with 46.0 μM kinetin, the average length of roots decreased by about 45 %, from 5.31 ± 0.59 cm to 2.86 ± 0.41 cm (Fig. 1; p < 0.01), while the weight of the 2-cm-long apical parts of roots increased from 56.9 ± 3.7 to about 100.2 ± 4.4 mg (p < 0.02). The width of the apical part of roots increased by about 40 %, from 1.50 ± 0.09 to 2.1 ± 0.05 mm (p < 0.05). Moreover, zone II cells shortened from 208.5 ± 25.5 μm to 135.4 ± 13.0 μm (p < 0.05).
Discussion
Isoprenoid and aromatic N6-substituted adenine derivatives, endogenously occurring as free bases, nucleosides, nucleotides and glucosides, known as cytokinins, often present at very low concentrations, are important plant growth regulators and animal cell differentiation factors (Barciszewski et al. 2007; Doležal et al. 2007).
The main aim of this research was to show that the kinetin-induced cell death in roots of V. faba ssp. minor seedlings is connected with changes of the plasma, nuclear (Zhao et al. 2001) and mitochondrial (Carimi et al. 2003) membranes, the first and common symptoms of the plant and animal PCD process (van Doorn and Woltering 2005; Cabello et al. 2009), during which the mitochondria and then the nucleus tend to be the last organelles to be degraded during the execution phase of PCD (van Doorn and Woltering 2005; Cacas 2010). All this allowed the unequivocal, unique AO/EB fluorescence staining method to be applied to detect PCD (Byczkowska et al. 2012) and to show changes in nuclei, which are unquestionable hallmarks of this process in animals (Ribble et al. 2005) and in plants (van Doorn 2011; van Doorn et al. 2011). This method, owing to the ability of EB to penetrate the cell membrane, enables the detection of both the first and the last steps of cell death, making it a very universal method for cell death evaluation (Byczkowska et al. 2012). Using this method, it was shown that kinetin induced the PCD process only in the root cortex parenchyma cells but not in the root meristem cells. The number of dying cells increased after 48- and 72-h treatment with kinetin, and after 96 h, their number decreased.
The fact that kinetin induced death in the cortex cells was also evidenced by the number of cells in particular phases of the cell cycle. It was shown that in zone I of roots, where dying or dead cells were not detected, kinetin arrested cells in the G1 phase, whereas in zone II of roots kinetin decreased the number of G1- and G2-phase cells and a fraction of hypoploid nuclei appeared, to which the G2- and G1-phase cells, after progressive degradation, could have been shifted. These results, clearly suggesting that the G1- and G2-phase cells of zone II of V. faba ssp. minor roots were directed by kinetin to PCD, were in agreement with those presented for animal cells, which undergo apoptosis after cytokinin treatment (Barciszewski et al. 2007). Cytokinins, by regulating cyclin-dependent kinase activities, control the cell proliferation of many tumour cell lines, arresting it at the G1/S and/or G2/M transition points and triggering apoptosis (Havlíček et al. 1997). The block of cell proliferation and PCD induction was also detected in carrot (D. carota L.) and in A. thaliana cell cultures after treatment with BAP (Carimi et al. 2003).
PCD is also manifested by an increment in the conductivity of culture media, resulting from changes in cell membrane potential leading to leakage of cell electrolytes (Kawai-Yamada et al. 2004; Palavan-Unsal et al. 2005). In the present research, kinetin induced up to a two-fold increase in cell electrolyte leakage. Both removal of kinetin from the culture media (data not shown) and 96-h treatment with kinetin reduced cell electrolyte leakage. Kinetin also decreased the cellular dehydrogenase activities released from cells of zone II of roots to about 40 %, without changing the activities of cellular dehydrogenases released from zone I of roots. This study showed that the measurement of total cellular dehydrogenase activity, which is usually used to determine the number of viable animal cells (Mosmann 1983; Freimoser et al. 1999), might also be applied to in planta systems, where the number of living cells might be expressed as a percentage of dehydrogenase activity. These values (about 40 %) correlated with the similar percentage (about 40 %) of living root cortex cells and were also similar to the number of mitochondria (about 45 %) remaining in the root cortex cells of V. faba ssp. minor after 72-h treatment with kinetin. Their number may describe the minimum number of functional mitochondria designated as the "point of no return" (van Doorn 2005), indicating that these cells are protected against kinetin-induced death.
Besides the above, some other values of PCD hallmarks induced after 72-h treatment with kinetin oscillated around 40 %, namely decrease in root and cell length, increment in weight and width of 2-cm-long apical parts of roots as well as increase in nuclear profile area. All these findings suggested that kinetin-induced cell death process was specific and expressed itself with similar intensity on both morphological and metabolic levels.
In agreement with the point of view of van Doorn et al. (2011), which indicates that apoptosis is not present in plants, our results showed that kinetin, the animal and plant cell growth and differentiation regulator (Barciszewski et al. 2007), induced programmed cell death in the root cortex cells of V. faba ssp. minor seedlings. It was characterised by (1) (7) inhibition of longitudinal growth of cells leading to decreased length and increased width of roots. However, although kinetin-induced PCD effects included an increase in (8) cytosolic calcium ions and (9) ROS production (also hallmarks of animal necrotic or plant non-autolytic types of cell death; Cacas 2010; Collazo et al. 2006; Jan et al. 2008; van Doorn 2011; van Doorn et al. 2011), "it does not automatically mean that the example is to be classified as a necrotic PCD" (van Doorn 2011). The decrease in the number of dying cells after 96-h treatment with kinetin, accompanied by decreases in conductivity, in the amount of cytosolic calcium ions and in the number of ROS-producing cells, strongly indicated that this process was transient. ROS production, being of mitochondrial and/or cellular origin (Mlejnek et al. 2003), is connected with the loss of mitochondrial membrane potential caused by depletion of mitochondrial ATP levels, leading to oxidative DNA lesions followed by DNA fragmentation (Roy et al. 2008). DNA degradation in V. faba ssp. minor roots might also be induced by ROS or by calcium ions (Jan et al. 2008), ubiquitous signal molecules that are involved in the regulation of almost all cellular functions (Bergner and Huber 2008) and whose cytosolic concentration increased during kinetin-induced cell death. The endoplasmic reticulum (ER) is the main intracellular Ca2+ reservoir (Bergner and Huber 2008), and when the ER membrane loses its potential and/or its integrity during PCD, calcium is released. These ions might be rapidly taken up by mitochondria, rendering cells less responsive to death stimuli (Cacas 2010), and this might also induce specific nucleases present in the nucleus (NUC 1, DNase I and DNase II), which lead to DNA condensation, fragmentation and marginalisation (Jan et al. 2008). Chromatin condensation as well as micronuclei formation and loss of cell membrane integrity (van Doorn 2011; van Doorn et al. 2011) are hallmarks of apoptotic animal cell death (Scott and Logan 2008); however, they also resemble PCD induced in plants by BAP, another natural plant cytokinin (Carimi et al. 2003; Gahan et al. 2003; Mlejnek et al. 2003; Barciszewski et al. 2007). In the kinetin-treated roots, however, internucleosomal DNA degradation was not visible, similarly to what was observed after ACC application in V. faba ssp. minor roots during the first steps of aerenchyma formation (Byczkowska et al. 2012). Kinetin-induced cell death also led to aerenchyma formation (data not shown) but did not cause the elimination of all cortex cells from the root, nor of whole roots and/or whole V. faba ssp. minor seedlings. This was supported by the decrease in the number of dying cells after 96-h treatment with kinetin. Moreover, about 4 % (after 48 h), 6 % (after 72 h) and 15 % (after 96 h) of cells were directed to the late, degradation stage of PCD, suggesting protection of some cells by elimination of others. This fact can also be explained by the lack of a DNA ladder, which is observed during cell death mainly in cell cultures, e.g. after treatment with BAP (Carimi et al. 2003).
All of these results, as well as the fact that aerenchyma formation is a vacuolar type of PCD (van Doorn 2011;van Doorn et al. 2011), indicate that cell death induced in V. faba ssp. minor seedling roots is the kinetin-specific vacuolar type of death.
Analyses of the hallmarks fundamental for PCD (Jan et al. 2008; Cacas 2010; van Doorn 2011; van Doorn et al. 2011), showing the nature of kinetin-induced cell death, also allowed a probable mechanism of its induction to be proposed. It seems that kinetin is converted by phosphoribosyl transferase to the corresponding monophosphates (Mlejnek and Doležel 2005), purine ligands specific for the histidine kinase receptors (AHK2, AHK3 and AHK) discovered in the ER membrane of Arabidopsis and Zea mays (Caesar et al. 2011; Doležal et al. 2007; Barciszewski et al. 2007). Then, the monophosphates, binding to the receptors, induce an efflux of calcium ions from the ER (Bergner and Huber 2008) and directly or indirectly arrest cells in the G1 and G2 phases and direct some of them onto the path of programmed death, with DNA condensation, nuclei segmentation and DNA degradation by nucleases (Jan et al. 2008; van Doorn et al. 2011). | 2022-11-30T14:11:35.870Z | 2012-11-11T00:00:00.000 | {
"year": 2012,
"sha1": "3cba262849afbe565f475d06b6d8be491318fdc8",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00709-012-0466-7.pdf",
"oa_status": "HYBRID",
"pdf_src": "SpringerNature",
"pdf_hash": "3cba262849afbe565f475d06b6d8be491318fdc8",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
255640971 | pes2o/s2orc | v3-fos-license | The Feasibility and Safety of Endoscopic Submucosal Dissection for Circumferential Superficial Esophageal Squamous Cell Neoplasms
Background It remains controversial whether endoscopic submucosal dissection (ESD) is still appropriate for circumferential superficial esophageal squamous cell neoplasms (SESCN), and few studies have compared the short-term and long-term outcomes of ESD with radical surgery. Methods A total of 140 patients with SESCN who underwent ESD or surgery between February 2014 and October 2021 were retrospectively reviewed. The characteristics of patients, operative time, postoperative complications, overall survival (OS), recurrence-free survival (RFS), and quality of life (QOL) were compared between the ESD and surgery groups. The effect of different methods to prevent esophageal stenosis after ESD was analysed. Results Drinking, family history of cancer, macroscopic type, and intrapapillary capillary loop (IPCL) type were independent risk factors for deep submucosal invasion (SM ≥ 200 μm). Smoking and IPCL type were independent predictive factors for angiolymphatic invasion. The average operative time of ESD was significantly shorter than that of surgery (174.5 ± 51.16 min vs. 255.9 ± 88.18 min, p < 0.001). The incidence of perioperative complications in the ESD group was significantly lower than that in the surgery group (5.5% vs. 19.4%, p = 0.015). The ESD group had significantly better functional scale scores for emotional functioning, cognitive functioning, and global health status, and lower rates of pain, dyspnoea, insomnia, appetite loss, diarrhoea, reflux, and trouble with taste than the surgery group. There was no significant difference in OS or RFS between the ESD and surgery groups. Conclusions ESD can significantly shorten the operative time and reduce perioperative complications. Additionally, on the premise of using appropriate measures to prevent postoperative stenosis, ESD can be the first choice for the treatment of SESCN, as it can provide better QOL, and the long-term prognosis of ESD is not inferior to that of surgery.
Introduction
Esophageal squamous cell carcinoma has become the sixth most commonly occurring cancer in China, and its mortality rate ranks fourth among all kinds of cancers in China [1]. In recent years, with the development of endoscopic diagnostics, the number of superficial esophageal squamous cell neoplasms (SESCNs, including squamous cell carcinoma and precancerous lesions) detected at an early stage has increased. Currently, the treatment of SESCN mainly includes endoscopic resection (ER) and surgical resection, but both have their own limitations. Although surgical resection is an effective and radical treatment for SESCN, the incidence of postoperative complications such as bleeding, anastomotic fistula, pulmonary infection, chylothorax, empyema, functional gastric emptying disorder, reflux esophagitis, and anastomotic stricture is high. In addition, surgical resection causes major surgical trauma, and patients typically experience a poor quality of life postoperatively. Relevant studies have reported that the 5-year survival rate of patients with SESCN located within the mucosa and/or submucosa undergoing ER is approximately 85-95%, which is comparable to that of surgery [2,3]. Moreover, ER is associated with fast recovery, minimal trauma and better postoperative quality of life (QOL) for patients due to esophagus preservation.
With the continuous development and improvement of endoscopic submucosal dissection (ESD), the size of the SESCN is no longer a limitation for ER. Thus, near-circumferential lesions, or even entire circumferential lesions, can be removed en bloc by ESD. However, if the SESCN involves the circumference of the lumen, then the problem of postoperative esophageal stricture after ESD cannot be ignored. Therefore, the indications for ER are still controversial. According to the 2020 Japanese endoscopic submucosal dissection/endoscopic mucosal resection guidelines for esophageal cancer, ER is less recommended for cT1a-EP/LPM superficial squamous cell carcinomas with a major axis length of 50 mm and involving the entire circumference of the esophagus upon implementing preventive measures for stenosis [4]. Although there are several strategies for the prevention and treatment of post-ESD esophageal stricture [5][6][7][8][9], the occurrence of other postoperative complications, including perforation, massive haemorrhage, and postoperative infection, is also closely related to the size of the lesion, the depth of invasion, and the extent of resection of the lumen [10,11].
Due to rich submucosal lymph vessels, the deeper the depth of invasion, the higher the risk of angiolymphatic invasion and lymph node metastasis [12,13]. As a result, ER is generally indicated for patients with very low or no risk of lymph node metastasis. It is particularly important to evaluate the invasion depth of the SESCN and the status of angiolymphatic invasion during preoperative endoscopy. Few reports have analysed the clinicopathological characteristics of circumferential SESCN, indicated how to screen for lesions with a low risk of deep submucosal invasion or angiolymphatic invasion or determined whether they meet the criteria for the ESD procedure. Therefore, the aim of our study was to investigate the risk factors associated with the depth of tumour invasion and angiolymphatic invasion. Meanwhile, we compared the short-term and long-term outcomes of ESD versus surgery for SESCN and evaluated the benefits of ESD on the improvement of QOL for those patients, in order to assess the feasibility and safety of the ESD procedure for circumferential SESCN.
Patients
This study included 146 consecutive patients with SESCN who underwent ER and surgical resection at Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College between February 2014 and October 2021. The inclusion criteria were as follows: (1) high-grade intraepithelial neoplasia or squamous cell carcinoma of circumferential lesions confirmed by pathology; (2) preoperative chest or/and abdomen CT showing no definite thoracic lymphadenopathy or metastasis; and (3) complete preoperative and postoperative clinical and pathological data. Patients were excluded if they (1) had other advanced malignancies in other sites; (2) underwent esophagectomy previously; (3) received any neoadjuvant therapy; or (4) had a serious cardiovascular or cerebrovascular disease, liver and kidney dysfunction, severe blood system diseases, immune system diseases, or severe mental disorders. Finally, a total of 140 patients met the above criteria and were selected as participants ( Figure 1). The patients were divided into an ESD group and a surgery group according to patient's preference after full communication with the patient. The study was conducted in accordance with the Declaration of Helsinki (as revised in 2013). This study was approved by the Ethics Committee of the National Cancer Center/Cancer Hospital, Chinese Medical College and Peking Union Medical College (No. 19/191-1975), and written informed consent was obtained from all the patients before the operation. the above criteria and were selected as participants ( Figure 1). The patients were divide into an ESD group and a surgery group according to patient's preference after fu communication with the patient. The study was conducted in accordance with the Dec laration of Helsinki (as revised in 2013). This study was approved by the Ethics Com mittee of the National Cancer Center/Cancer Hospital, Chinese Medical College and Pe king Union Medical College (No. 19/191-1975), and written informed consent was ob tained from all the patients before the operation.
Evaluation Parameter
All patients underwent white light endoscopy (WLE) and magnifying endoscopy with narrow band imaging (ME-NBI) to estimate their lesions. EUS was performed to estimate the depth of lesions if the patient could tolerate it. Then, iodine staining was performed using 1.25% Lugol's solution to further determine the extent of the lesion. The baseline characteristics and short- and long-term outcomes of patients were compared between the ESD and surgery groups. The baseline characteristics of patients included age, sex, body mass index (BMI), family history of tumour, smoking history, drinking history, lesion location, lesion size, macroscopic type, intrapapillary capillary loop (IPCL), histological differentiation degree, depth of invasion, and comorbidities. The lesion location was classified according to the 8th edition of the esophageal TNM staging criteria of the Union for International Cancer Control/American Joint Committee on Cancer (UICC/AJCC): the upper section is less than 25 cm from the incisors, the middle section is 25-30 cm from the incisors, and the lower section is ≥30 cm from the incisors. The macroscopic types and depth of invasion were classified according to the Paris endoscopic classification of superficial neoplastic lesions [14]. IPCL classification was based on the classification standard of Haruhiro Inoue and the Japan Esophageal Society (JES) [15,16]. The short-term outcome measures were the rates of en bloc and complete resection in the ESD group, operative time, and perioperative complications. En bloc resection was defined as resection of a lesion in one piece. Complete resection was defined as a resected specimen with tumour-free lateral and vertical margins. The operative time was defined as the total time of the ESD or surgery procedure. Perioperative complications included bleeding, perforation, anastomotic fistula, esophageal scar stenosis, and anastomotic stricture. Long-term outcomes included overall survival (OS), recurrence-free survival (RFS), and quality of life after the ESD or surgery procedures. OS was defined as the period from treatment to death; RFS was defined as the period from treatment to any type of recurrence. QOL was assessed by the validated Chinese version of the European Organization for Research and Treatment of Cancer (EORTC) Quality of Life Questionnaire-Core 30 (QLQ-C30) and Quality of Life Questionnaire-OES18 (QLQ-OES18) 6 months after the treatment or at the end of follow-up.
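To make the location rule concrete, here is a minimal sketch in Python; the function name and the example value are ours, but the thresholds are exactly those quoted above:

def classify_location(distance_from_incisors_cm: float) -> str:
    """Classify esophageal lesion location by distance from the incisors,
    following the UICC/AJCC 8th-edition rule quoted in the text."""
    if distance_from_incisors_cm < 25:
        return "upper"
    elif distance_from_incisors_cm < 30:
        return "middle"   # 25-30 cm from the incisors
    else:
        return "lower"    # >= 30 cm from the incisors

# Example: a lesion 27 cm from the incisors lies in the middle esophagus.
print(classify_location(27))  # -> middle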
Operation Procedure
ESD group: The Japanese Olympus GIF-Q260J electronic gastroscope was used for ESD treatment. The specific steps were as follows: (1) Endoscopic marking: after the esophageal lesion was stained with 1.25% Lugol's solution, a dual knife (KD-650Q, Olympus, Japan) was used to mark 5 mm outside the lesion; (2) Tunnel establishment: tunnel entrances were established at the 12 and 6 o'clock positions on the oral side of the lesion, submucosal injection was performed with an injection needle (NM-200 L-0523, Olympus, Japan), and a preincision was made outside the marked points on the oral side of the lesion; (3) Circumferential incision of the mucosa on the anal side of the lesion, in preparation for the rendezvous; (4) Submucosal dissection was performed gradually along both sides of the tunnel from the oral side of the lesion until it joined the anal-side incision, leaving only the mucosa at the entrance of the tunnel; (5) The mucosa at the entrance of the tunnel was peeled off, and the specimen was removed; (6) Electrocoagulation and haemostasis were performed on the exposed blood vessels and active bleeding at the wound using electrobiopsy forceps (FD-410LR, Olympus, Japan). The 109 ESD patients in the present study were categorized into five groups according to the intervention used to prevent esophageal stenosis: repeated endoscopic balloon dilation (EBD), polyglycolic acid (PGA) felts (Neoveil, 100 × 50 × 0.15 mm; Gunze Co., Tokyo, Japan) with autologous esophageal mucosa (AEM), PGA with temporary stent implantation (TSI), PGA with AEM transplantation and TSI, and PGA with AEM transplantation and a self-control stricture-preventing water balloon (SSWB). Fifty patients received repeated EBD alone, and in 2 patients, PGA felts with AEM were positioned on the surface of the ulcer. Meanwhile, 48 patients received TSI: we measured the length of the ulcer endoscopically after resection, and stents of appropriate length were selected; then, 43 PGA felts with AEM tissues and 5 PGA felts without AEM tissues were mounted onto a covered metal mesh stent (CMMS) (MTN-SE-S-20/160-A-8/650, MTN-SE-S-20/100-A-8/650, MTN-SE-S-18/120-A-8/650; Nanjing Micro Technology Co., Ltd., Nanjing, China). The endoscope was passed through the stent, with grasping forceps passed through the biopsy channel to grasp the distal steel lasso loop of the stent, and the stent was then positioned on the surface of the ulcer. Before stenting, an overtube (MD-48618, Sumitomo Bakelite Co., Ltd., Tokyo, Japan) was placed through the mouth to facilitate stent passage and protect the laryngopharyngeal mucosa from injury. In addition, 9 PGA felts with AEM tissues were mounted onto the SSWB and positioned on the surface of the ulcer.
Surgery group: Radical esophagectomy and two (mediastinal and perigastric) or three (cervical, mediastinal, and perigastric) fields of regional lymphadenectomy were routinely performed. The anastomotic site was related to tumour location. In general, cervical anastomosis was performed in patients with upper esophageal tumours. A supraaortic arch esophagogastric anastomosis was performed for patients with middle or lower esophageal lesions. Reconstruction of the alimentary tract was performed using the stomach or jejunum.
Postoperative Management and Follow-Up
ESD group: Patients with esophageal stent implantation were asked to fast for the first three days after ESD and then underwent CT on the 3rd day to determine whether there were perforations or stent migrations. On the 4th day after ESD, the patients were allowed to start a liquid diet, and on the 7th day, the gastric tube was removed. After ESD, the patients received a proton pump inhibitor (rabeprazole 20 mg; Changao, Nanjing, China) for 6 consecutive days and antibiotics q12 h for 3 days (amoxicillin sodium sulbactam sodium 1.25 g; Ruiyang, Shandong, China). A scheduled endoscopic examination was performed once a week to confirm the position of the stent, which was removed during the 6th-8th week after ESD depending on the patient's tolerance. Patients were examined by endoscopy 2 weeks, 3 months, and 6 months after stent removal and then annually. Patients without esophageal stent implantation were asked to fast for the first three days after ESD and received routine nutritional support, gastric mucosal protection, inhibition of gastric acid secretion, and indwelling gastric tubes for gastrointestinal decompression. These patients were fed a liquid diet for 3 days and gradually transitioned to a normal diet. Patients returned to the hospital 1 month after ESD for re-evaluation by endoscopy, and the gastric tube was removed. Patients were examined by endoscopy 1, 2, 3, and 6 months after ESD and then annually. When the standard endoscope (GIF-H290; Olympus, Tokyo, Japan) could not pass through the stenosis, esophageal stricture (ES) was diagnosed. Patients with ES underwent regular endoscopic esophageal balloon dilation until the standard endoscope could pass through the esophageal stenosis at re-examination.
Surgery group: Routine fasting was performed after the operation. A nasogastric tube was left postoperatively for gastric decompression, and an indwelling gastrointestinal tube was left postoperatively for enteral nutrition. The gastric tube was removed 3 days after the operation, and liquid food was started after 2 weeks of nasogastric feeding and gradually transitioned to a normal diet. Patients were re-examined by endoscopy 3 and 6 months after surgery and then annually thereafter.
Statistical Analysis
The statistical calculations were conducted using SPSS 22 software (SPSS Inc., Chicago, IL, USA). The independent-samples t-test was used to compare continuous variables that were normally distributed; the nonparametric Mann-Whitney U test was used when variables were not normally distributed. The chi-square test or Fisher's exact test was used for comparisons of categorical variables between the two groups. Independent risk factors were analysed by univariate and multivariate logistic regression. The Kaplan-Meier method was used for survival analysis, and the log-rank test was used to compare survival curves. p < 0.05 was considered statistically significant.
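The analysis pipeline described above maps directly onto open-source equivalents; the study itself used SPSS, so the following Python sketch is only an illustration, with placeholder arrays standing in for the real per-patient records:

import numpy as np
from scipy import stats
import statsmodels.api as sm
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
# Placeholder data standing in for the two study groups.
esd_time = rng.normal(174.5, 51.2, 109)      # operative times (min), ESD
surg_time = rng.normal(255.9, 88.2, 31)      # operative times (min), surgery

# Normally distributed continuous variables: independent-samples t-test.
t_stat, p_t = stats.ttest_ind(esd_time, surg_time)
# Non-normally distributed continuous variables: Mann-Whitney U test.
u_stat, p_u = stats.mannwhitneyu(esd_time, surg_time)

# Categorical variables: chi-square test (Fisher's exact for small counts).
table = np.array([[6, 103], [6, 25]])        # complications vs. none, per group
chi2, p_chi, dof, _ = stats.chi2_contingency(table)
odds, p_fisher = stats.fisher_exact(table)

# Univariate/multivariate logistic regression for independent risk factors.
X = rng.normal(size=(140, 3))                # placeholder predictor matrix
y = rng.integers(0, 2, 140)                  # e.g. deep submucosal invasion
logit = sm.Logit(y, sm.add_constant(X)).fit(disp=0)

# Survival analysis: Kaplan-Meier curves compared with the log-rank test.
t_esd, e_esd = rng.exponential(60, 109), rng.integers(0, 2, 109)
t_surg, e_surg = rng.exponential(55, 31), rng.integers(0, 2, 31)
KaplanMeierFitter().fit(t_esd, event_observed=e_esd)   # per-group curve
res = logrank_test(t_esd, t_surg, event_observed_A=e_esd, event_observed_B=e_surg)
print(p_t, p_u, p_chi, p_fisher, res.p_value)          # p < 0.05 = significant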
Patients and Clinicopathological Features
A total of 140 patients (95 males, 45 females; mean age 62.74 years, range 45-85) were enrolled, and 109 patients were treated with ESD, while 31 patients were treated with surgery. Patient and tumour characteristics are summarized in Supplemental Table S1. Lesions were detected in the upper esophagus in 11 patients, in the upper-middle esophagus in 24 patients, in the middle esophagus in 41 patients, in the middle-lower esophagus in 54 patients, and in the lower esophagus in 10 patients. The overall median longitudinal diameter of the lesions was 70 mm (30-190 mm). Regarding the macroscopic types, 43 lesions were type 0-IIa, 96 were type 0-IIb, and 1 was type 0-IIc according to the Paris endoscopic classification [17]. Of the 140 lesions, 118 (84.3%) were carcinomas and 22 (15.7%) were high-grade intraepithelial neoplasias (HGINs).
Preoperative Endoscopic Ultrasonography (EUS) Findings
Most patients (130 of 140) underwent preoperative EUS (Supplemental Table S2). Only 77 (59.2%) assessments were consistent with the depth found at postoperative pathological diagnosis: preoperative EUS underdiagnosed the invasion depth in 19.7% of patients and overdiagnosed it in up to 59.4% (Supplemental Table S2). For lesions assessed by preoperative EUS as infiltrating the submucosa (≥SM), the risk of postoperative pathologically confirmed submucosal infiltration was 2.789 times that of lesions assessed preoperatively as confined to the mucosal layer (<SM) (Supplemental Table S3).
The Relationship between Clinicopathological Data and Depth of Invasion/Angiolymphatic Invasion
In the univariate regression models, smoking history, drinking history, family history of cancer, lesion location, macroscopic type, slightly elevated/depressed appearance (WLE), and IPCL (JES classification) were significantly related to deep submucosal invasion (SM ≥ 200 µm), while smoking history, drinking history, comorbid early laryngeal tumour, macroscopic type, slightly elevated/depressed appearance (WLE), IPCL (JES classification), and deep submucosal invasion (SM ≥ 200 µm) were significantly related to angiolymphatic invasion (p < 0.05) (Tables 1 and 2). Multivariate stepwise logistic regression analysis revealed that drinking history, family history of cancer, slightly elevated/depressed appearance (WLE), and IPCL (JES classification) were independent predictive factors for deep submucosal invasion (SM ≥ 200 µm), while smoking history and IPCL (JES classification) were independent predictive factors for angiolymphatic invasion (Tables 1 and 2).
Comparison of Clinicopathological Characteristics between the ESD and Surgery Groups
There were no statistically significant differences in age, sex, BMI, lesion location, longitudinal diameter, macroscopic type, pathological type, angiolymphatic invasion, or nerve invasion between the ESD group and the surgery group (p > 0.05). The difference between the ESD group and the surgery group was statistically significant in the depth of invasion (p < 0.05). The lesions in the ESD group were mainly from EP to SM (<200 µm), while the deep submucosal invasion (SM ≥ 200 µm) rate of the lesions in the surgery group was higher (Table 3).
The Short-Term Outcomes and Long-Term Outcomes of ESD and Surgery
The main treatment outcomes are summarized in Table 4. All patients achieved en bloc resection, and the complete resection rate was 99.1%. The average operative time of ESD (174.5 ± 51.16 min) was significantly shorter than that of surgery (255.9 ± 88.18 min) (p < 0.001). The rate of perioperative complications was significantly higher in the surgery group than in the ESD group (19.4% vs. 5.5%, p = 0.015). Immediate perforation occurred in one patient during ESD and was successfully closed with titanium clips without other severe complications. Delayed perforation occurred in two patients in the ESD group, and both recovered after conservative medical treatment. Three patients in the ESD group developed delayed bleeding postoperatively; all underwent endoscopic haemostasis, and no severe bleeding-related complications were observed. An anastomotic fistula occurred in one patient in the surgery group and was cured after conservative treatment. Five patients in the surgery group suffered from wound infection, which healed after multiple dressing changes. Esophageal scar stenosis occurred in 85 patients after ESD, and anastomotic stricture occurred in four patients after surgery. The median follow-up times were 29.7 months (range 3.38-78.52) and 39.3 months (range 3.75-79.51) in the ESD and surgery groups, respectively. In the ESD group, 29 patients were pathologically reported to have deep submucosal invasion (SM ≥ 200 µm) or angiolymphatic invasion; of them, eight patients received radiotherapy, three patients received surgery, and the remaining patients did not undergo any additional treatment. During follow-up, one patient presented with a local recurrence 2 years after ESD and was cured by secondary endoscopic therapy. Another patient, with deep submucosal invasion (SM ≥ 200 µm) and angiolymphatic invasion on pathology after ESD, refused any additional treatment and died of distant metastasis 5 years postoperatively. One non-procedure-related perioperative death occurred in a patient who underwent ESD. In the surgery group, three patients died due to tumour recurrence 1-3 years postoperatively (Table 4). The overall survival time and recurrence-free survival time of the two groups were not significantly different (Figure 2).
Comparison of the QOL Scores between ESD and Surgery
In terms of the mean EORTC-QLQ-C30 functional scores, the ESD group had significantly better functional scales for emotional functioning, cognitive functioning, and global health status than the surgery group. According to the symptom scales of the EORTC-QLQ-C30 and the EORTC-QLQ-OES18, patients in the surgery group had significantly higher rates of pain, dyspnoea, insomnia, appetite loss, diarrhoea, reflux, and trouble with taste than those in the ESD group (Table 5).
Table 5. Functional and symptom scales of the EORTC-QLQ-C30 and the EORTC-QLQ-OES18 (mean ± SD).
The Effect of Different Methods to Prevent Esophageal Stenosis after ESD
We categorized patients into five groups according to the intervention used to prevent esophageal stenosis: repeated EBD, PGA with AEM, PGA with TSI, PGA with AEM transplantation and TSI, and PGA with AEM transplantation and SSWB (Figure 3). PGA with AEM and TSI reduced the mean number of balloon dilatations from 10.8 ± 8.28 (repeated EBD alone) to 2.9 ± 4.05 (p < 0.001), and the mean number of balloon dilatations was also significantly decreased, to 3.1 ± 3.52, in the PGA with AEM transplantation and SSWB group (p = 0.008) (Table 6).
Discussion
We tried to identify clinical and endoscopic characteristics that could help determine which patients with SESCN were good candidates for ER. In this study, we retrospectively reviewed 140 patients with confirmed SESCN who were treated with ER or esophagectomy in our hospital, comprehensively described their clinical and pathological features, and presented several original findings. The results of the multivariate analysis suggested that drinking history, family history of cancer, slightly elevated/depressed appearance (WLE), and IPCL (JES classification) were independent predictive factors for deep submucosal invasion (SM ≥ 200 µm), while smoking history and IPCL (JES classification) were independent predictive factors for angiolymphatic invasion.
Previous studies have reported that the lymph node metastasis rate was almost 0 when SESCN was restricted to the mucosa, but the rate of lymph node metastasis increased to 5.9-14.8% in patients whose tumours invaded the submucosa without vascular invasion and to 25.5-33.3% in patients whose tumours invaded the submucosa with vascular invasion [17][18][19][20][21]. EUS was once considered the most useful modality for judging the depth of lesion invasion. However, previous studies suggested that the accuracy of preoperative EUS in judging the depth of invasion was affected by many factors, such as a lesion diameter greater than 3 cm and the endoscopist's skill level [22,23]. In addition, approximately 20% of cSM lesions and 30% of cSM1 lesions were pathologically diagnosed as EP or LPM lesions in ER specimens [24]. In this study, it is worth mentioning that up to 59.4% of patients were overdiagnosed (Supplemental Table S1). This may be because the lesions included in our research were all circumferential and were often accompanied by inflammation, erosion, etc., which may have affected the endoscopist's judgement to a certain extent. In addition, it was difficult for the endoscopist to perform EUS on some lesions located in the upper esophagus or in the lower esophagus near the cardia. Moreover, a recent systematic review also revealed that the sensitivity and specificity of EUS in determining the depth of invasion of SESCN were similar to those of WLE and ME-NBI, but the overdiagnosis rate of EUS was relatively high [25]. Without our proposed treatment strategy of ER, these overdiagnosed patients with circumferential SESCN would have been treated with more invasive interventions, such as esophagectomy or definitive chemoradiotherapy. Therefore, we believe that for SESCN, there are still some limitations in judging the depth of invasion using EUS alone.
As is known, SESCN are often associated with changes in IPCLs. A relevant investigation showed that magnifying-endoscopy observation of IPCLs allowed in vivo discrimination between intramucosal and submucosally invasive cancer [26]. The Inoue classification is the most commonly used in clinical practice [15,16]. However, because the concepts distinguishing its various types are expressed somewhat vaguely, differences in endoscopists' experience may lead to differences in the accuracy of the final results. In 2017, Oyama et al. concluded that in the JES classification, the overall accuracy of type B microvessels in assessing the depth of invasion of SESCN was 90.5% [15,16,27]. Moreover, the JES classification was relatively simple and easy for endoscopists to grasp. According to the results of this study, we consider the JES classification superior to EUS for clinically judging whether a lesion has deep submucosal invasion. Previous studies also showed that, by endoscopic morphology, non-flat lesions were more likely than flat lesions to infiltrate the deep submucosa [28,29], which was consistent with our findings (Table 1). In addition, we found that a family history of tumours and alcohol consumption were independent risk factors for deep submucosal invasion.
It is worth noting that 15.71% (22/140) of patients had angiolymphatic invasion. Previous studies have demonstrated that angiolymphatic invasion is closely correlated with lymph node metastasis [30][31][32][33]. A relevant study showed that angiolymphatic invasion may occur even in stage T1a SESCN, with an incidence of approximately 3.12% [34], which was close to our result (4.29%). Further analysis revealed that T1b SESCN complicated by angiolymphatic invasion accounted for 11.43%, which may be because the esophageal submucosa is rich in vascular tissue, providing abundant blood supply and lymphatic drainage to the esophageal mucosa. Notably, the present study found that the risk of angiolymphatic invasion was significantly associated with the JES classification. In addition, we found that smoking was an independent risk factor for angiolymphatic invasion. Interestingly, in a previous study, men were more likely to develop angiolymphatic invasion than women [31], which may be explained by the fact that the smoking rate was higher in males than in females.
Although the lesions included in this study were all circumferential esophageal lesions, the ESD procedure achieved high en bloc and complete resection rates (100% and 99.1%, respectively), in line with those reported for noncircumferential lesions in previous research [35]. Thus, we suppose that the size of the lesion is no longer a limiting factor for endoscopic treatment of SESCN. More importantly, ESD was significantly less time consuming and less invasive than surgery and improved the quality of life of patients. In the present research, we used the EORTC QLQ-C30 and EORTC-QLQ-OES18 to compare the effects of ESD and surgery on postoperative QOL in patients with circumferential SESCN. We found that patients in the surgery group had significantly higher rates of pain, dyspnoea, insomnia, appetite loss, diarrhoea, reflux, and trouble with taste than those in the ESD group, which may be caused by the large surgical trauma and postoperative anatomical changes. Moreover, the results of the QOL scale showed that the ESD group had significantly better functional scales for emotional functioning, cognitive functioning, and global health status than the surgery group. Therefore, compared with surgery, ESD provides better postoperative QOL for patients.
In general, postoperative stenosis has become a major concern regarding the long-term outcomes of ESD. Previous studies reported that the incidence of esophageal stenosis in patients with wholly circumferential lesions reached 100% [36,37]. Currently, EBD is considered an effective treatment for post-ESD stenosis, and several other management strategies exist for strictures after esophageal ESD [6][7][8][9]. According to the approach used to prevent postoperative esophageal stenosis, we categorized patients into five groups: repeated EBD, PGA with AEM, PGA with TSI, PGA with AEM transplantation and TSI, and PGA with AEM transplantation and SSWB. It is worth noting that PGA with AEM and TSI lessened the occurrence of esophageal stenosis to 53.5%, and PGA with AEM transplantation and SSWB reduced the incidence of stenosis to 55.6%. Moreover, the mean number of balloon dilatations with these two methods was significantly lower than with repeated EBD alone. Therefore, our experience may offer some alternatives to decrease the risk of esophageal stenosis after ESD for circumferential SESCN. Of the patients who underwent ESD in this research, all survived well except for one patient who refused any additional treatment postoperatively and died of distant metastases 5 years after ESD. Comparatively, three patients died due to tumour recurrence 1-3 years postoperatively in the surgery group. In addition, the depth of submucosal invasion did not influence overall survival or recurrence-free survival in the ESD and surgery groups.
To our knowledge, this is the first study to compare the QOL and efficacy of ESD versus surgery for circumferential SESCN. Although the sample size is small, this study still includes the largest number of cases to date, and we believe the quality of the data warrants serious consideration of our findings. In addition, we identified risk factors for the incidence of deep submucosal invasion and angiolymphatic invasion in circumferential SESCN. However, there are still several limitations in the present study. First, this study was conducted at a single institution with a retrospective exploratory design; additional multicentre, prospective studies with high quality, large sample sizes, and strict procedures are required for further verification. Second, the follow-up period of some patients was relatively short, and the outcomes need to be further analysed and discussed after extending the follow-up period. Thus, confirmatory studies with larger multi-institutional populations and adequate follow-up duration are required to establish the feasibility, safety, and suitable criteria of ESD for circumferential SESCN. Nevertheless, the results of this study may provide a useful foundation for future studies.
In conclusion, provided appropriate measures are used to prevent postoperative stenosis, ESD can offer better perioperative outcomes than surgery in terms of operative time, perioperative complications, and QOL for patients with circumferential SESCN. Meanwhile, the present study demonstrates that there was no significant difference in recurrence rate or mortality between surgery and ESD. ESD is an appropriate treatment for circumferential SESCN in elderly patients or patients in poor general condition who cannot tolerate surgical treatment; such patients may benefit more from ESD than from surgery. The present results might provide endoscopists with useful information for preprocedural decision-making for circumferential SESCN.
Institutional Review Board Statement:
The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee of the National Cancer Center/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College (protocol code No. 19/191-1975; date of approval 2019-07-11).
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Written informed consent has been obtained from the patient(s) to publish this paper.
Data Availability Statement:
The data that support the findings of this study are available from the corresponding author, upon reasonable request. | 2023-01-12T17:03:07.950Z | 2023-01-01T00:00:00.000 | {
"year": 2023,
"sha1": "07255a5dc7e6f8511863e6020a735f166e461c0c",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2077-0383/12/2/471/pdf?version=1672992669",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f52ebbb7e0705fdf0cb2b021196f2077f90cc9cc",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
73712878 | pes2o/s2orc | v3-fos-license
The impact of eggshell colour and spot area in Japanese quails: II. Slaughter and carcass characteristics
This study was carried out to investigate the effects of eggshell colour and spot properties (colour and size of the spot area) on the growth performance and carcass traits of Japanese quail (Coturnix coturnix japonica). The study material was allocated to five groups according to eggshell and spot colours: black spots on greyish white coloured eggshell (I), blue spots on greyish white coloured eggshell (II), diffuse brown spots on greyish brown coloured eggshell (III), brown spots on light green coloured eggshell (IV), and small brown spots on greyish brown coloured eggshell (V). The size of the spotted area was determined in each egg group using digital image analysis. The groups did not differ in body weight or length of the shank at the end of the growth period. However, the groups differed significantly in carcass yield after slaughter (not eviscerated) and carcass yield. These parameters were highest in Group I (82.08 and 76.09%) and lowest in Group III (80.20 and 73.86%). Digital image analysis demonstrated that heart length, cardiac fat area, gizzard width, and intestine length varied between the groups. The cardiac fat area was largest in Group III (0.86 cm²) and smallest in Group V (0.65 cm²). Gizzard width was greatest in Group I (2.63 cm) and smallest in Group V (2.47 cm). Intestine length was greatest in Group V (78.45 cm) and smallest in Group IV (72.39 cm). Body weight, shank length, and slaughter and carcass weight do not vary in relation to eggshell colour or the size of the spotted area. The lengths of intestine and heart, gizzard width, and cardiac fat area do vary in relation to eggshell colour or the size of the spotted area.
Introduction
The eggshell provides mechanical protection for the egg content and the developing embryo. It also serves as a barrier against microorganisms and as a source of calcium for the embryo. The eggshell has other important functions, such as enabling gas exchange between the egg and the environment while avoiding excessive dehydration, by means of the permeable pores on its surface.
In nature, the colour of the eggshell camouflages the egg. In quail eggs, both the background colour and spot colour of the eggshell vary. The background colour of the eggshell varies from white to light tan and light brown. In quail eggs, the colour of the spots can be blue, black, or brown (Narahari et al., 1988). While the blue-green coloured biliverdin is an antioxidant eggshell pigment, the brown coloured protoporphyrin is a pro-oxidative eggshell pigment (Moreno and Osorno, 2003). In some species, the eggshell colour is associated with the body composition and the immune status (immune competence) of both the female and the progeny (Moreno et al., 2004; Siefferman et al., 2006; Krist and Grim, 2007). It has been reported that the intensity of the blue-green colour of the eggs of the pied flycatcher (Ficedula hypoleuca), apart from being positively correlated with the biliverdin content of the eggshell, is also an indicator of the nutritional status of the laying females (Moreno et al., 2006). Duval et al. (2013) reported that the eggshell of eggs laid by female quails fed a diet with a restricted antioxidant capacity contained more protoporphyrin and less biliverdin. On the basis of these data, the researchers suggested that the brightness of the eggshell would decrease whilst the colour intensity of the eggshell would increase in eggs laid by female quails fed as such. Several studies have pointed to a positive correlation between eggshell colour and the health status of the female and/or chick, and parameters of immune capacity such as maternal antibody levels and yolk testosterone and yolk lutein concentrations (Moreno et al., 2005; Hargitai et al., 2008). The pigments held responsible for the formation of reddish spots in spotted eggs according to the structural-function hypothesis, namely the protoporphyrins, are deposited in the parts of the eggshell characterized by less calcium deposition. García-Navas et al. (2011) reported that in eggs of the blue tit (Cyanistes caeruleus), the level of the protoporphyrin pigment in the eggshell is partly correlated with eggshell thickness.
The hatching weight of the chick is an important indicator of its future body weight. The egg weight, eggshell weight, eggshell thickness, and egg yolk and egg white weights are significant traits, which have an effect on the egg quality, hatchling weight, and hatching results (Khurshid et al., 2003; Seker et al., 2004). The effect of eggshell colour on the hatching weight of chicks has been investigated in previous research. Hassan et al. (2013) determined that, in quail eggs, four different eggshell colours (light, dotted, spotted, and dark) had a significant effect on the chick weight and the proportion of the chick weight to the egg weight (chick weight/egg weight). It has been reported that eggshell colour, in relation to egg quality, may affect late-stage embryonic development as well as chick development from hatching to the start of feed consumption (Farghly et al., 2012). Furthermore, Farghly et al. (2015) determined that the hatching weight differed among chicks that hatched from white, spotted violet, and spotted brown quail eggs.
This study aimed to investigate the impact of eggshell colour and maculation (spot colour and size of spotted area) on chick hatching weight, weekly body weights, and slaughter and carcass traits, and to determine the variance of these traits by sex. The digital image analysis method was used to determine the differences between the study groups for the area and length measurements of some visceral organs (heart, gizzard, and intestine).
Material and Methods
Hatching eggs, collected within a three-day period from a private holding raising Japanese quails, constituted the material of the study. The eggs underwent macroscopic examination at the laboratory, and broken, cracked, abnormally shaped, and soiled eggs were excluded from the study. The experiment involved 1,062 eggs from 16-wk-old Japanese quail (Coturnix coturnix japonica), which reached 95% egg production.
Prior to being incubated, the hatching eggs were macroscopically examined and allocated to five groups according to their eggshell and spot colour. Each egg was numbered individually. The size of the spotted eggshell area of each egg included in all groups was determined by digital image analysis (Table 1). A mechanism with a measurement scale was set up for the digital imaging process. The individually numbered eggs were placed in the mechanism and were imaged at an approximate distance of 20 cm. Each egg was photographed from one side, and then turned 180° to the other side to be photographed again. The size of the spotted area was expressed in cm².
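The text does not spell out the image-analysis algorithm itself; the following Python sketch shows the kind of calibrated pixel-counting computation implied, with the threshold value and calibration factor as illustrative assumptions, not values from the study:

import numpy as np

def spotted_area_cm2(gray: np.ndarray, egg_mask: np.ndarray,
                     pixels_per_cm: float, spot_threshold: int = 90) -> float:
    """Estimate the spotted eggshell area from one side of an egg.

    gray           : 2-D uint8 grayscale image of the egg.
    egg_mask       : boolean array, True where the egg is (background excluded).
    pixels_per_cm  : calibration from the measurement scale in the rig.
    spot_threshold : intensity below which a pixel counts as a spot (assumed).
    """
    spot_pixels = np.count_nonzero((gray < spot_threshold) & egg_mask)
    return spot_pixels / pixels_per_cm**2  # pixel count -> cm^2

# Each egg was photographed from two opposite sides (0 and 180 degrees), so
# the total spotted area would be the sum over both images, e.g.:
# total = spotted_area_cm2(img_a, mask_a, 118.0) + spotted_area_cm2(img_b, mask_b, 118.0)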
Prior to incubation, all eggs included in each group were weighed individually to determine the initial egg weight. The eggs were randomly placed in the setter with three repeats, such that each tray provided for one repeat. In the setter, the temperature was set at 37.6 °C and the relative humidity was adjusted to 60-65%. On day 14 of embryonic development, individual compartments were established for each egg included in the same group, and the hatching period was initiated by placing eggs of the same group in an individual compartment. In the hatching machine, the temperature was set at 37.2 °C and the relative humidity was adjusted to 65-70%.
The chicks, which hatched from the incubated eggs included in each study group, were weighed individually and identified using wing bands. To determine their body weights, the individually identified chicks were transferred to growth cages. During the first two weeks of the growth period, 638 quail chicks were raised. The chicks included in each group were individually weighed on a weekly basis so as to determine and record their weekly body weights. Following the weight measurement performed in the second week, the chicks were sexed. Quails were sexed at three weeks of age, according to the appearance of the breast feathers (feather sexing). Subsequently, 80 (4 × 20; 4 = number of repeats, 20 = number of animals in each repeat) animals of each sex were assigned to each group. The weekly body weights of 80 chicks from each group, and in total 400 chicks from the five groups, were determined. Furthermore, in the 5th week, the shank length of each chick was measured on an individual basis. A digital calliper was used to measure the shank length. Shank length was expressed in mm as the distance between the proximal and distal ends of the tarsometatarsal bone.
During the five-week growth period, the animals received a diet containing 3000 kcal energy/kg and 22% crude protein.
In the 5th week, a total of 400 animals (80 × 5) from the five groups were weighed to determine their body weights. The quails with body weights closest to the mean value (either above or below the mean) were chosen for slaughter. In total, 200 quails were selected for slaughter (5 × 40; 20 females and 20 males per group). The selected animals were weighed once more prior to being slaughtered, and their weights were recorded together with their sex.
Before the abdomen was cut open, the carcass weight after slaughter (not eviscerated) was determined. Subsequently, the abdomen was cut open, and the visceral organs except for the kidneys were removed, in order to determine the weight of the eviscerated carcass. The carcasses were cut into the breast, wings, right thigh, left thigh, and back, and the weights of these cut-up parts were measured. Of the edible visceral organs, the heart, liver, and gizzard (full) were weighed. Of the inedible visceral organs, the full intestine was also weighed.
Following the slaughter of the animals and the cutting-up of their carcasses, the heart, gizzard, and intestine were placed into separate storage bags labelled with the wing number showing the animal's individual identification number and group. Before being imaged for digital image analysis, the visceral organs were stored in a deep freezer at −18 °C for approximately a week. Subsequently, the visceral organs were imaged and their images were stored on a flash memory device. The length, width, and surface area measurements of the visceral organs were performed using 399 of these images, which had a resolution suitable for digital image analysis.
The definitions of the measurement areas of the visceral organs, and images of these measurement areas, are presented below (McLelland, 1990): Gizzard area-1 and Gizzard area-2 (cm²): the size of the area of the lateral surfaces (left and right surfaces) of the muscular stomach (gaster); Gizzard length (cm): the distance between the thin craniodorsal muscle (m. tenuis craniodorsalis) and the thin caudoventral muscle (m. tenuis caudoventralis); Gizzard width (cm): the distance between the thick caudodorsal muscle (m. crassus caudodorsalis) and the thick cranioventral muscle (m. crassus cranioventralis).
Cardiac fat area (cm²): image of the fat-filled groove visible on the auricular surface of the heart; Heart length (cm): the distance between the base of the heart (basis cordis) and the apex of the heart (apex cordis); Heart width (cm): the width of the heart at the atrioventricular level.
Intestine length (cm): total length of the small intestines (duodenum, jejunum, and ileum), the large intestine (colon), and the cloaca.
The SPSS package was used for the statistical analysis of the data obtained in this study. Group data pertaining to the characteristics determined were compared by analysis of variance, and groups that differed were detected by Duncan's multiple comparison test. To determine whether sex had any impact on each of the characteristics investigated, the data were analysed by the two-sample t-test to determine the significance of the difference between two mean values.
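A minimal sketch of the same group comparisons in Python (placeholder data; note that Duncan's multiple range test has no SciPy implementation, so Tukey's HSD is used below as a plainly named stand-in post-hoc test):

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Placeholder 5th-week body weights (g) for the five eggshell-colour groups.
groups = [rng.normal(mu, 12.0, 80) for mu in (215, 212, 218, 214, 216)]

# One-way ANOVA across the five groups.
f, p = stats.f_oneway(*groups)

# Post-hoc pairwise comparison (Tukey HSD here, standing in for Duncan's test).
posthoc = stats.tukey_hsd(*groups)

# Sex effect within a group: two-sample t-test (females vs. males).
females, males = rng.normal(225, 12.0, 40), rng.normal(205, 12.0, 40)
t, p_sex = stats.ttest_ind(females, males)
print(f, p, p_sex)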
Results
Although the hatching weights of the chicks, their body weights in the 3rd, 4th, and 5th weeks, and their shank lengths in the 5th week were similar in all of the study groups, the body weights of the chicks in the 1st and 2nd weeks differed significantly between the study groups (P<0.001) (Table 2).
In all of the study groups, the body weights of the female quails in the 5th week were greater than those of the males (Table 3), and this difference between the sexes was statistically significant (P<0.05, P<0.01, P<0.001). Furthermore, statistically significant differences were also determined between the male and female quails in Groups I (2.94), IV (6.41), and V (2.00) for shank length in the 5th week (P≤0.05, P<0.001).
In all of the study groups, the slaughter weights (g) of the female quails were greater than those of the males (P<0.001), whilst the carcass yields (%) of the males were higher than those of the females (P<0.05, P<0.01, P<0.001) (Table 5). Furthermore, in all of the study groups, the weights of the liver, gizzard, and intestine were greater in the female animals when compared with the males, and this difference between the sexes was statistically significant (P<0.001) (Table 6).
The study groups differed significantly for heart length (P<0.001), cardiac fat area (P<0.01), gizzard width (P<0.01), and intestine length (P<0.01) (Figures 1, 2, and 3) (Table 7). The heart length and gizzard width were highest in Group I (2.94), the cardiac fat area was largest in Group III (7.40), and the intestine length was greatest in Group V (2.00).
Discussion
The mean hatchling weight calculated for all of the study groups was 8.50 g. The hatching weight of the chicks was highest in Group I (8.61 g) and lowest in Group V (8.43 g). The chick weights determined for the quail chicks in the present study were lower than the hatchling weight (10.02 g) reported by Obregón et al. (2012) and higher than the hatching weight (7.80 g) reported by Caglayan and Seker (2013).
Table 4 - Slaughter and carcass characteristics of the study groups. a-c - differences between mean values with different letters in the same row are statistically significant (P<0.001). F - ANOVA; P-value - significance level (α = 0.05). WCAS - carcass weight after slaughter (not eviscerated); CYAS - carcass yield after slaughter (not eviscerated). 1 Excluding the weight of all visceral organs, except for the kidneys. 2 Calculated on the basis of the carcass weight.
A - left lateral surface-1; B - right lateral surface-2; C - length; D - width.
The body weight of the female quails measured in the 5th week being greater than that of the males was in agreement with previous reports made for Japanese quails by Yalcın et al. (1996), Aytac and Karabayir (2012), and Caglayan and Seker (2015). In the 5th week, the highest body weight for the female quails was 233.43 g in Group III, whilst the highest body weight for the male quails was 208.79 g in Group V. These values were higher than the 198.06 g for females and 163.28 g for males reported by Alkan et al. (2010) and the 139.56 g for females and 137.54 g for males reported by Karadavut and Taşkın (2014).
In the present study, the mean shank length was 33.96 mm. This value was reported as 3.7 cm for five-week-old animals by Wilkanowska et al. (2013) and as 3.93 cm for six-week-old animals by Momoh et al. (2014). The results of the present study demonstrated that the shank length of five-week-old quail chicks ranged between 33.94-34.63 mm in females and between 33.44-34.10 mm in males. The shank lengths measured in the 5th week in the present study were higher than the shank lengths reported by Caglayan and Seker (2015) for females (28.38 mm) and males (27.78 mm).
The slaughter weights of the animals in Groups I, II, III, IV, and V (213.69, 210.41, 218.10, 213.17, and 215.04 g, respectively) were higher than the slaughter weights of Japanese quails measured by Bonos et al. (2010), Hassan et al. (2015), and Caglayan and Seker (2013) as 154.6 g, 168.29 g, and 116.53 g, respectively. While the slaughter weights measured in all of the study groups in the present study were similar to the slaughter weight (213.58 g) reported by Obregón et al. (2012), the carcass weight determined in the present study (159.93 g) was higher than that reported by the same researchers (128.97 g). The carcass yields determined in Group I (76.09%) and Group II (75.67%) were similar to the carcass yield previously reported by Bonos et al. (2010) (76.37%). In the present study, the slaughter weights being greater in female quails in comparison with males in all of the study groups was in agreement with the results of several literature reports (Alkan et al., 2010; Bonos et al., 2010; Aytac and Karabayir, 2012; Kosshak et al., 2014; Ojedapo and Amao, 2014; Hassan et al., 2015; Karadavut and Taşkın, 2014; Caglayan and Seker, 2015), but differed from the report of Caglayan and Seker (2013). Furthermore, the carcass yields of the males being higher than those of the females in all of the study groups was in agreement with the results reported by Alkan et al. (2010), Kosshak et al. (2014), and Hassan et al. (2015).
The mean values determined in the present study for the heart weight and cardiac fat area (2.18 g and 0.78 cm²) were higher than the values reported by Caglayan and Seker (2013) (0.99 g) and Hassan et al. (2015) (1.5 g) and lower than the value reported by Guluwa et al. (2014) (4.05 g).
The average gizzard length and gizzard width (2.84 and 2.55 cm) were higher than those determined by Omonona et al. (2014) as 2.10 and 1.83 cm, respectively.
In the present study, the mean intestine length and intestine weight values calculated for all of the study groups were 74.76 cm and 9.21 g, respectively. While the mean intestine length determined in this study was similar to that reported by Wilkanowska et al. (2013) for five-week-old Japanese quails (73.3 cm), this value was higher than that reported by Hassan et al. (2015) for six-week-old animals (54.86 cm). Furthermore, the total length (63.44 cm) of the small (55.6 cm) and large (7.84 cm) intestines and the intestine weight (5.35 g) reported by Hena et al. (2012); the total length (56.41 cm) of the small (50.02 cm) and large (6.39 cm) intestines and the intestine weight (3.86 g) reported by Guluwa et al. (2014); and the total intestine length (61.93 cm) and intestine weight (7.69 g) reported by Samadi and Sahneh (2015) were lower than the results obtained in the present study.
Conclusions
Body weight, shank length, and slaughter and carcass weight do not vary in relation to eggshell colour or the size of the spotted area. The eggshell colour or the size of the spotted area affect wing weight, left thigh weight, yield of the carcass after slaughter (not eviscerated), carcass yield, breast yield, wing yield, and gizzard yield. The lengths of intestine and heart, gizzard width, and cardiac fat area vary in relation to eggshell colour or the size of the spotted area.
Figure 3 - Measurement of the total length of the intestines.
Table 1 - Distribution of eggs according to the trial group (cm²) for quail. t - independent-samples t-test; P-value - significance level (α = 0.05).
Table 3 - Body weights of animals included in each study group with respect to sex. a-c - differences between mean values with different letters in the same column are statistically significant (P<0.001). F - ANOVA; P-value - significance level (α = 0.05).
Table 2 - Body weight and shank length of the study groups (± standard error of the mean).
Table 5 - Slaughter and carcass characteristics of the study groups with respect to sex. 1 Excluding the weight of all visceral organs except for the kidneys. 2 Calculated on the basis of the carcass weight.
Table 6 - Slaughter and carcass characteristics of the study groups with respect to sex.
Table 7 - Digital image analysis measurements of the length, width, and surface area of some visceral organs for the different study groups. a-c - differences between mean values with different letters in the same row are statistically significant (P<0.001). F - ANOVA; P-value - significance level (α = 0.05). | 2019-01-01T23:42:32.097Z | 2016-09-01T00:00:00.000 | {
"year": 2016,
"sha1": "a23b2a2afd5b8906878cf148ad46038c854b9fbc",
"oa_license": "CCBY",
"oa_url": "http://www.scielo.br/pdf/rbz/v45n9/1516-3598-rbz-45-09-00509.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "a23b2a2afd5b8906878cf148ad46038c854b9fbc",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
15779537 | pes2o/s2orc | v3-fos-license | Where is the COBE maps' non-Gaussianity?
We review our recent claim that there is evidence of non-Gaussianity in the 4 Year COBE DMR data. We present some new results concerning the effect of the galactic cut upon the non-Gaussian signal. These findings imply a localization of the non-Gaussian signal on the Northern galactic hemisphere.
I EVIDENCE FOR CMB NON-GAUSSIANITY
In a recent paper [1] we showed that the 4 Year COBE DMR data exhibit evidence of non-Gaussianity at a high confidence level. We made use of statistical tools described in more detail in [2,3]. Since then our result has been corroborated by two other groups [4,5]. In this review we revisit our analysis and the tests to which we have subjected it, including some new results.
In our analysis we propose, and work with, an estimator for the normalized bispectrum, denoted by $I^3_\ell$. We refer the reader to [1] for its definition. We then applied this estimator to the inverse-noise-variance-weighted, average maps of the 53A, 53B, 90A and 90B COBE-DMR channels, with monopole and dipole removed, at resolution 6, in ecliptic pixelization. We use the extended galactic cut of [6] and [7] to remove most of the emission from the plane of the Galaxy. We apply our statistics to the DMR maps before and after correction for the plausible diffuse foreground emission outside the galactic plane, as described in [8] and [9].
By means of Monte Carlo simulations we also found the distributions $P_\ell(I^3_\ell)$ of what we should have seen assuming a Gaussian signal, which is then processed by the experimental setup associated with DMR. These $P_\ell(I^3_\ell)$ were inferred from 25000 realizations (see Fig. 1). The observed $I^3_\ell$ and the distributions $P_\ell(I^3_\ell)$ are plotted in Fig. 1. One immediately notices the presence of a significant deviant.
In order to quantify this deviant we define a goodness-of-fit statistic $X^2$, in which the constants $\beta_\ell$ are chosen so that for each term of the sum $X^2_\ell = 1$. The definition reduces to the usual chi-squared for Gaussian $P_\ell$. We build an $X^2$ for the COBE-DMR data from the $P_\ell(I^3_\ell)$ inferred from Monte Carlo simulations, taking special care with the numerical evaluation of the constants $\beta_\ell$. We call this function $X^2_{\rm COBE}$. We then find its distribution $F(X^2_{\rm COBE})$ from 10000 random realizations. This is very well approximated by a $\chi^2$ distribution with 12 degrees of freedom. We then compute $X^2_{\rm COBE}$ for the actual observations and find $X^2_{\rm COBE} = 1.81$. One can compute $P(X^2_{\rm COBE} < 1.81) = 0.98$. Hence, it would appear that we can reject Gaussianity at the 98% confidence level.
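The Monte Carlo calibration machinery can be sketched numerically as follows; since the explicit per-$\ell$ term of $X^2$ is not reproduced in this extract, the sketch uses a squared standardized deviation as a stand-in (for which the $\beta_\ell$ normalization is automatic), with random numbers in place of simulated DMR-processed sky maps:

import numpy as np

rng = np.random.default_rng(2)
n_mc, n_ell = 25000, 12                        # MC realizations; terms in the sum

# Stand-in samples of I^3_ell under the Gaussian hypothesis; in the real
# analysis each row would come from a simulated, DMR-processed sky map.
mc = rng.standard_normal((n_mc, n_ell))
mu, sd = mc.mean(axis=0), mc.std(axis=0)

def x2(obs):
    # Mean of squared standardized deviations over the n_ell terms; each term
    # has unit expectation under Gaussianity (a stand-in for the paper's term).
    return np.mean(((obs - mu) / sd) ** 2)

# Distribution F(X^2) under the Gaussian hypothesis, from 10000 realizations.
null = np.array([x2(row) for row in mc[:10000]])

x2_obs = 1.81                                   # value reported for the DMR data
print(np.mean(null < x2_obs))                   # P(X^2 < 1.81); the paper finds 0.98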
II IS IT A SYSTEMATIC EFFECT?
We checked that this result could not be due to the following systematics:
1. Foreground contamination:
• Dependence on shape ("custom" versus constant elevation)
• Dependence on elevation
• Dependence on monopole and dipole subtraction, before or after the cut, with or without the galaxy.
2. Possible small residual errors in corrections for:
• Spurious offsets induced by the cut.
• Instrument susceptibility to the Earth's magnetic field.
• Errors due to incorrect removal of the COBE Doppler and Earth Doppler signals.
• Errors in correcting for emissions from the Earth, and eclipse effects.
• Artifacts due to uncertainty in the correction for the correlation created by the low-pass filter on the lock-in amplifiers (LIA) on each radiometer.
• Errors due to emissions from the moon and the planets.
3. Assumptions in the Monte Carlos:
• Dependence on power spectrum tilt
• Dependence on smooth versus discontinuous power spectrum
• Dependence on beam shape
• Dependence on pixelization.
In fact, the confidence level quoted above reflects the worst line-up of systematics. If we try to correct for systematics, in general the confidence level for rejecting Gaussianity is enlarged to beyond 99%, as we describe in more detail in [3].
III WHERE IS THE NON-GAUSSIANITY?
We now concentrate on a subset of tests involving the galactic cut which we applied to our result. Changing the galactic cut affects sample variance besides eliminating possible contaminations from the map. We considered extensive variations of the cut, including additions of polar cap cuts to the extended cut.
We found that cuts from the pole affect the result more than cuts from the equator. This suggests that the effect may be localized near the Poles. We therefore decided to compare the effect of applying cuts only at the North or South galactic poles. We considered cuts down to 60° (2668 pixels excluded). We find the curious result that cutting Northern caps is more damaging for the non-Gaussian spike than cutting Southern caps (Fig. 2). Indeed the first few Southern cap cuts appear to increase the spike. Non-Gaussianity could in principle be localized in Fourier space without being localized in real space; examples of such behaviour are given in [10]. We believe that the signal we have found is essentially localized in Fourier space. However, the results we have just presented suggest that our signal may indeed also be localised in real space, around the Northern galactic cap. | 2014-10-01T00:00:00.000Z | 1999-03-02T00:00:00.000 | {
"year": 1999,
"sha1": "0e6d4b4e081bb80d2715fb6c2a1369cc7588e6c1",
"oa_license": null,
"oa_url": "http://cds.cern.ch/record/380697/files/9903051.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "0e6d4b4e081bb80d2715fb6c2a1369cc7588e6c1",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
100233433 | pes2o/s2orc | v3-fos-license | A low-cost, high-efficiency light absorption structure inspired by the Papilio ulysses butterfly
The nano-hole array structure in the black scales of the butterfly can be viewed as a natural solar collector. A low-cost, high-efficiency light absorption structure, inspired by the Papilio ulysses butterfly, was optimized using a finite-difference time-domain method. The results show that the nano-hole structure of Papilio ulysses contributes to light absorption. The shape of the holes affects the angular dependence of absorption. The absorption efficiency was found to be strongly affected by three parameters: H (the depth of the hole), D (the thickness of the hole-wall) and L (the size of the hole). These parameters were swept together in numerous simulations. The optimized nano-hole array saves 84% more material than a thin film of equal absorption (90%) at a wavelength of 600 nm.
Introduction
The poor infrared absorption of crystalline silicon resulting from its indirect band gap poses a challenge towards its use in solar photovoltaics. Currently, commercial solar cells have 200-300 μm crystalline silicon active layers that efficiently absorb light. This thickness needs to be reduced to several micrometers. 1 A thinner active layer has the added advantage of efficient charge-carrier transport. Thus, an effective technique for light trapping in thin active layers needs to be developed. 2,3 Various structures employing random or periodically structured surfaces to increase the absorption in thin film photovoltaics have been widely investigated, and an alternative strategy is to modify the structure of the active layer itself. Vertically aligned nanorod or nanocone arrays in the active layers have been considered. 4,5 Theoretical studies have shown that these structures can improve light absorption. 6,7 Han and Chen 3 studied a nano-hole array, and their results show that it required twelve times less silicon by mass to obtain the same ultimate efficiency as a standard 300 μm crystalline silicon wafer. Their calculations 3 show that nano-hole arrays are more efficient than nanorods for practical thicknesses. The absorption structures in butterfly scales have also been developed as stealth skills, 8,9 light trapping schemes, 10 infrared absorption 11 and thermal sensors. 12 Thus, stimulated by the design principles found in nature, super black films were manufactured by imitating a scale's microstructure. 13,14 Moreover, we studied the disordered structure of Papilio ulysses and discovered its unusually omnidirectional light absorption properties. 15 The black scales in butterflies can be considered as a structure used to absorb solar energy. 11,[16][17][18] Some of the black scales are constructed with ridges and a nano-hole structure. The tapering of the ridges reduces the extent of back-reflection and scattering, while the nano-hole structures enhance the absorption and reduce the amount of material. The periodic arrangements of the ridges and nano-holes lead to a strongly anisotropic angular absorption. 15,19 This anisotropic angular absorption was also investigated in this study.
The subfamily Papilioninae is known for its variously colored, large wings and its strong flight. The green band on its dorsal wing has attracted scientists' attention. 20,21 Blackness is another important factor in the coloration of the butterfly, as it clearly increases the contrast of the colored wing patterns. The presence of blackness indicates that visible light of all wavelengths is absorbed effectively. This particular and elaborate mechanism is necessary to reduce the amount of material and enhance the absorption. 16,17,22,23 Vukusic et al. 17 measured the nanostructure and absorption spectra of two different levels of blackness. They reported that lattice structures increased the optical path length by multiple scattering on the surface, where the pigments were diffusely distributed. The tapered shape of the ridge decreases the abrupt change in the refractive index at the surface. Zhao et al. 16 reported that the periodically aligned inverse-V type ridges with oblique side walls transfer light beams to the nano-hole area, where a unique light-trapping effect occurs. These studies mainly focused on the absorption properties of the special nanostructure. Nature is a brilliant artist with great wisdom, and there is no end to studying and borrowing from its designs. In a butterfly, absorption needs to be achieved with a lightweight frame to facilitate flight. Herein, a new pathway is paved for the extensive exploration of nature-inspired strategies for designing a sophisticated system with low-cost and highly efficient light absorption.
In this study, a nano-hole array inspired by Papilio ulysses was investigated using a finite-difference time-domain (FDTD) method. The optical performance and structure of Papilio ulysses are shown in Fig. 1. Our goal is not to create an exact computational model of this butterfly but to develop design criteria for a low-cost and highly efficient light absorption structure. Different parameters of the nano-hole array model were investigated, namely the shape of the hole, H (the depth of the hole), D (the thickness of the hole-wall) and L (the size of the hole). These parameters were swept together in numerous simulations. Then, the key factors were optimized using the particle swarm optimization (PSO) method to achieve low-cost and highly efficient light absorption. The absorption enhancement effects were studied over a range of wavelengths. The results are discussed in terms of reducing material use through absorption enhancement.
Method
The nano-hole array of the black scales of Papilio ulysses was investigated using the FDTD method with a three-dimensional model (Fig. 2). Part 2 in Fig. 2 was chosen because this part is considered the main light-trapping structure, while the other parts (P1, P3 and P4) also contribute a little to the absorption; thus, the simulated absorption of P2 in this study is lower than the experimental data shown in Fig. 1. Structure P2 can be described in the x-y plane and along the z direction. The lattice in the x-y plane contains different shapes (e.g. rectangle, hexagon, triangle and trapezoid pentagon) according to electron microscopy studies 16,17 and the results in Fig. 1. These different shapes are distributed randomly in butterflies. Schematics of the four shapes S1-S4 studied here are shown in Fig. 2(c). The period of the ridge and the filling ratio were fixed at 1380 nm and 24%, respectively, for these four shapes. A plane-wave source was applied in this study, and the absorption value was defined from the simulated reflection and transmission. The scales of Papilio ulysses comprise chitin and diffused melanin. The refractive index of butterfly chitin has been established to be around 1.56, 24 while the refractive index of melanin is much less certain. Because melanin is a strongly absorbing pigment, its refractive index is a complex number, which is difficult to measure accurately. The refractive index of sepia melanin at a wavelength of 633 nm is reported to be (1.655 ± 0.008) + i(0.12 ± 0.07). 25 Similar data, equal to 1.55 + i0.14, were reported for the melanin elytra of a buprestid beetle. 26 Recently the index values of chitin and melanin in the elytron of the jewel beetle were reported. 27 The real part (n) of the high-index layer increases from 1.65 to 1.80 upon decreasing the wavelength. The imaginary part (k) increases to about 0.1 for the shortest wavelength, while for the low-index layer the real part shows a slight increase from 1.55 to 1.60 and the imaginary part is found to be very small. The index values were also studied in the damselfly. 28 The real part of the refractive index at 500 nm for the different cases studied was 1.552 (chitin: dragonfly), 1.580 (immature male), 1.615 (mature female), and 1.663 (mature male). At the beginning of this study, the index value was set at 1.56 + i0.06. Our goal was not to obtain the exact refractive index of Papilio ulysses but to design a light absorption structure inspired by the butterfly. The imaginary part k was set to a small value to make sure that not only the melanin but also the structure contributes to the absorption. Then, the imaginary part was set at 0.15 to facilitate the optimization of the structure. This nano-hole array was investigated in detail by sweeping the parameters. The boundary conditions in the lateral (x and y) directions were periodic (periodic boundary condition, PBC), and in the z direction an absorbing boundary (perfectly matched layer, PML) was used, as shown in Fig. 2(b). Periodic boundary conditions reduce the simulation time significantly and make it possible to sweep the parameters; this is why the complex butterfly structure was abstracted as a periodic model. The SEM images were obtained using an FEI Sirion 200 field-emission-gun scanning electron microscope. The optical micrographs of the samples were taken using a digital optical microscope (VHX-600, Keyence). The absorption measurements were made using a QDI 2010 UV-vis-near-IR micro-spectrophotometer.
Study parameters: shape, L, D and H
The absorptions of structures with different shapes and different H, D and L values were studied under normal incidence, as shown in Fig. 3. The H, D and L values represent the hole depth, the hole-wall thickness and the hole size, respectively. The period of the ridge and the filling ratio were fixed at 1380 nm and 24%, respectively, for shapes S1-S4. In addition, the default L, D and H values of S2 were set at 400 nm, 100 nm and 1500 nm, respectively. The absorptions of the different shaped structures (S2-S4) show only a slight difference, as illustrated in Fig. 3(a), while the value for shape S1 is smaller than for the other three. Thus, the hole shape is not the key factor in the absorption efficiency. In the next section, we show that the shape of the structures affects the angular absorption properties. The absorption increases with high D and H values and low L values, as shown in Fig. 3(b)-(d). The mass per area (mpa) of S2 scales as mpa ∝ A H ρ, where A is the material filling fraction and ρ is the density. The mass per area increases with high D and H values and low L values, which is in agreement with the change in absorption. Thus, it seems that a higher absorption is achieved at the cost of using more material. In order to save cost and reduce weight, the nano-hole structure was optimized, as discussed in the following sections (sweep parameters). As illustrated in Fig. 4, the absorption was studied using field maps in the x-y plane. Here, shape S2 was chosen as an example. The simulations were completed with L = 200 nm, L = 800 nm, D = 70 nm and D = 160 nm. Four cross-section field maps are given at four z positions: 1 nm, 500 nm, 1000 nm and 1500 nm. Comparing the columns D = 70 nm and D = 160 nm, we can see that the energy cannot be fully dissipated if the D value is too small, while with a smaller L value of 200 nm the energy is dissipated quickly. It can also be seen that the energy can be absorbed only if the H (z) value is large enough. Thus, the absorption increases with high D and H values and low L values, as discussed above.
Under various incident angles
Thus far, the absorption under normal incidence has been studied. Herein, we study the absorption over a broad range of incident angles. The models used in this part are similar to those used under normal incidence, except that the substrate is neglected here. Fig. 5(I) shows the schematic definition of the incident angle α, the polarization and the model. Larger incident angles require more memory; to balance this, the polar angle α is limited to between 0° and 40°. Because the structures we chose are symmetrical, the azimuth angle β was set along the symmetry axis directions, such as 0° and 45° for shape S1, and 0° and 30° for shape S2. The incident plane was defined by β and α. In this part, the L, D, and H values were set at 600 nm, 100 nm and 1500 nm, respectively.
Fig. 3 The absorption spectra obtained under normal incidence with: (a) different shapes S1-S4 as shown in Fig. 2(c); (b) different L (hole size) of the side in shape S2, with D and H set at 100 nm and 1500 nm, respectively; (c) different D (hole-wall thickness) of the side in shape S2, with L and H set at 400 nm and 1500 nm, respectively; (d) different H (hole depth) of the structure in shape S2, with L and D set at 400 nm and 100 nm, respectively. A thin substrate layer with a thickness of 100 nm was considered in that part.
The absorption spectra contour plots of shape S1 versus the wavelength under various incident angles are shown in Fig. 5(II)(a)-(d). The results show that absorption can be achieved under various incident angles, and the absorption values change from 43% to 88% with different α, β and polarizations.
The absorption spectra of shape S2 are shown in Fig. 6. Fig. 6(I) shows the sketch map of the incident angle α, the polarizations and the model. The azimuth angle β was set at 0° and 30° according to the symmetry axes of the hexagon. The values of L, D, and H were set at 400 nm, 100 nm and 1500 nm, respectively. The absorption spectra contour plots of S2 versus the wavelength under various incident angles are shown in Fig. 6(II)(a)-(d). The absorption of S2 is similar to the case of S1 and is also affected by α, β and the polarizations. However, the absorption of shape S2 is much steadier than that of S1. The shape of the hole affects the angular dependence of the absorption.
To study the relationship among α, β, the polarizations and the absorption, a simple model was investigated, as shown in the inset of Fig. 7(II)(c). Fig. 7(I) shows the sketch map of the incident angle α, the polarizations and the model. The azimuth angle β was set at 0° and 90°. The values of D and H were set at 100 nm and 1500 nm, respectively. This model was obtained by decomposing shape S1. The absorption spectra contour plots are shown in Fig. 7(II)(a)-(d). When the electric field is parallel to the plane of the rectangle, as shown in Fig. 7(II)(b)-(c), the absorption is much larger than in the opposite condition. When the angle between the incident light and the plane of the rectangle increases, the absorption decreases, as shown in Fig. 7(II)(a)-(b). If the incident light is parallel to the plane of the rectangle, the absorption does not decrease at higher incident angles. Hence the absorption is affected by three parameters: the angle between the electric field and the plane of the rectangle, the angle between the incident light and the plane of the rectangle, and the incident angle α.
When two rectangular array models merge into one structure by crossing each other, shape S1 is obtained. The case shown in Fig. 5(a) can be considered as the merging of the corresponding panels of Fig. 7(II), and the case in Fig. 5(b) as the merging of Fig. 7(II)(b) and (d) by crossing each other. Their corresponding absorption spectra also affirm this type of merging. The lobe in Fig. 7(II)(c) can be explained by the lobe in Fig. 5(a). In conclusion, an appropriate shape should be chosen to satisfy specific angle-dependent requirements when designing an absorption structure. A highly efficient light absorption structure can be achieved by optimizing the parameters D, H and L. Thus, the structure with shape S1 was chosen, because the shape is not the key factor for the absorption efficiency and this simple model reduces the simulation time significantly. The key parameters (D, H and L) can then be swept over a large range to investigate the absorption efficiency.
Mass per area
In the discussion above, the refractive index was set at 1.56 + i0.06 to clarify the contribution of the structure to the absorption. It is very difficult to achieve high absorption by merely increasing H when the k value is too low. Analogously, it is very difficult to reach high absorption by increasing the k value when the H value is too small. Therefore, the k value was set at 0.15 in this part to facilitate the optimization of the light absorption structure. Our goal was to design a light absorption structure that uses less material to save cost and reduce weight. Thus, the "mass per area" was taken into account. Assuming a homogeneous material, mass_sample/S_cover (the mass per area) = ρ_sample V_sample/S_cover; since the density is constant, V_sample/S_cover is used to represent the "mass per area".
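To illustrate quantitatively why a large H cannot compensate for a very small k (and vice versa), the short sketch below estimates the single-pass absorption of a homogeneous slab from the extinction coefficient using the standard Beer–Lambert attenuation A ≈ 1 − exp(−4πkH/λ). This simplified model ignores reflection and the structural light trapping discussed above, so it is only an order-of-magnitude check, not the FDTD result.

```python
import math

def single_pass_absorption(k, H_nm, wavelength_nm):
    """Beer-Lambert single-pass absorption of a slab of thickness H.

    alpha = 4*pi*k/lambda is the absorption coefficient; reflection and
    light trapping are ignored, so this underestimates the structured case.
    """
    alpha = 4.0 * math.pi * k / wavelength_nm   # 1/nm
    return 1.0 - math.exp(-alpha * H_nm)

# values used in this paper: k = 0.06 vs. 0.15, H = 1500 nm, lambda = 600 nm
for k in (0.06, 0.15):
    A = single_pass_absorption(k, H_nm=1500, wavelength_nm=600)
    print(f"k = {k:.2f}: single-pass absorption ~ {A:.2f}")
```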
The shape of the nano-hole array in the x-y plane was discussed above. Herein, the absorption for different shapes in the z direction is shown in Fig. 8. These four models have the same V_sample/S_cover value. Line "a" and line "b" in Fig. 8(II) show that the nano-hole array can enhance the absorption compared with the thin film structure of Fig. 8(a). Line "b" and line "c" in Fig. 8(II) show a similar absorption for a normal hole shape and a gradient hole shape. Line "b" and line "d" in Fig. 8(II) show that the trapezoidal hole structure decreases the absorption compared with the normal hole structure. Thus, the simple rectangular shape (b) was chosen here for the optimization study.
The L, H and D values are swept from 400 nm to 800 nm, 300 nm to 700 nm and 60 nm to 180 nm, respectively, to search for a structure with high absorption. The cycle graph of these three parameters is shown in Fig. 9(a). Each parameter takes 8 values, so in total there are 8 × 8 × 8 = 512 samples. The abscissa is the sample number. Fig. 9(c) shows the values of L, D, and H for each sample. Fig. 9(b) shows the "mass per area" (V_sample/S_cover) for each sample, which is calculated from the L, D, and H values (a sketch of this calculation is given below). Thus, when we analyze the absorption of a structure, the corresponding V_sample/S_cover value should be considered.
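The geometric relation between (L, D, H) and V_sample/S_cover is not written out in the text; the sketch below assumes the simple rectangular-hole unit cell of shape (b) in Fig. 8, i.e. a square lattice of square holes of side L separated by walls of thickness D, for which the solid volume per unit of covered area is H·(1 − L²/(L+D)²). The sample enumeration mirrors the 8 × 8 × 8 sweep.

```python
import numpy as np

def volume_per_area(L, D, H):
    """Solid volume per covered area for a square lattice of square holes.

    Assumes a unit cell of period (L + D) containing one square hole of
    side L and depth H; the walls of thickness D are the only material.
    """
    solid_fraction = 1.0 - L**2 / (L + D) ** 2
    return H * solid_fraction  # same units as H (nm)

# 8 values per parameter -> 8 * 8 * 8 = 512 samples, as in Fig. 9
L_values = np.linspace(400, 800, 8)   # hole size (nm)
H_values = np.linspace(300, 700, 8)   # hole depth (nm)
D_values = np.linspace(60, 180, 8)    # hole-wall thickness (nm)

samples = [(L, D, H) for L in L_values for D in D_values for H in H_values]
vps = [volume_per_area(L, D, H) for (L, D, H) in samples]
print(len(samples), "samples; V/S ranges from",
      f"{min(vps):.0f} nm to {max(vps):.0f} nm")
```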
Sweep parameters
As shown in Fig. 3, the absorption increases with high D and H values and low L values. It seems that the higher the absorption achieved, the more material is used (namely, a higher "mass per area", as shown in Fig. 9). However, the present study challenges this assumption, demonstrating that some optimized structures achieve high absorption with a relatively low "mass per area". For some samples at a wavelength of 800 nm, the absorption is very sensitive to the D value (ΔD: 60 to 180 nm, ΔA: 38% to 90%), whereas for samples 41-48 at a wavelength of 400 nm, the absorption is insensitive to the D value (ΔD: 60 to 180 nm, ΔA: 88% to 94%). Thus, careful choice of the parameters is important for the design of a low-cost and highly efficient light absorption structure.
Although most of the samples were sensitive to H, there were a few exceptions; two examples are given here. Samples (392, 400, ..., 440, 448) at a wavelength of 800 nm show that the absorption is very sensitive to the H value (ΔH: 300 to 700 nm, ΔA: 42% to 90%), while samples (24, 32, 40, 48, 56, 64) at a wavelength of 400 nm show that the absorption is very insensitive to the H value (ΔH: 414 to 643 nm, ΔA: 91% to 95%). The behavior with respect to the L value was similar to that found for the parameters H and D.
Clearly, the nano-hole array acts as an absorption structure because of its anti-reflection characteristics. First, a high absorption value can be achieved with a high extinction coefficient; however, this is not the problem we focus on. Second, the absorption value also varies with the structure parameters: the H, D, and L values. Our results show that light absorption can be enhanced by optimizing the H, D, and L values. With equal material (mpa), the optimized structure achieves the highest absorption compared to the other structures.
Optimizing the structure using the PSO method
To obtain a high-absorption structure, the values of H, D, and L should be selected carefully. The particle swarm optimization (PSO) algorithm, a stochastic optimization algorithm, was chosen for this task. The PSO algorithm, which is inspired by the social behavior of animal species such as birds and bees (particles) searching for their requirements, was introduced by Eberhart and Kennedy. 29 As a robust, stochastic evolutionary strategy, the PSO algorithm has recently been used for solving electromagnetic problems 30,31 and for non-linear and non-continuous optimization with continuous variables. Fig. 11 shows the result of the optimization process. The absorption was fixed at 90%. The refractive index was set at 1.56 + i0.15. The models of the nano-hole array and thin film structures are shown in Fig. 11(b). The results show that the material can be reduced significantly at a wavelength of 600 nm (as shown in Fig. 11(a)) and over a broadband spectral range of 400 nm to 800 nm (as shown in Fig. 11(c)). Compared with the thin film structure at a wavelength of 600 nm, we obtained a structure (H = 700 nm, L = 560 nm and D = 60 nm) that reduces the amount of material by 84%. In addition, over the broadband wavelength range from 400 nm to 800 nm, we found a structure (H = 780 nm, L = 660 nm and D = 164 nm) that reduces the amount of material by 59%.
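As an illustration of how such a PSO search can be set up, the sketch below minimizes the material use V_sample/S_cover subject to a 90% absorption constraint. The absorption model used here is only a placeholder (a monotonic surrogate in H, D and L following the trends of Fig. 3), since reproducing the FDTD response is outside the scope of this sketch; in the actual optimization each candidate (H, L, D) would be evaluated with an FDTD run. The bounds are assumptions roughly matching the sweep above.

```python
import numpy as np

rng = np.random.default_rng(0)

# search ranges (nm) for (H, L, D), roughly matching the sweep above
lo = np.array([300.0, 400.0, 60.0])
hi = np.array([800.0, 800.0, 180.0])

def volume_per_area(H, L, D):
    return H * (1.0 - L**2 / (L + D) ** 2)

def absorption_surrogate(H, L, D):
    """Placeholder for the FDTD-computed absorption (NOT the real response).
    Monotonically increasing in H and D and decreasing in L, as in Fig. 3."""
    return 1.0 - np.exp(-(H / 400.0) * (D / 100.0) * (500.0 / L))

def cost(x):
    H, L, D = x
    penalty = 1e4 * max(0.0, 0.90 - absorption_surrogate(H, L, D))
    return volume_per_area(H, L, D) + penalty   # minimize material + constraint

n_particles, n_iter = 30, 200
w, c1, c2 = 0.7, 1.5, 1.5                       # inertia and acceleration weights

x = lo + (hi - lo) * rng.random((n_particles, 3))
v = np.zeros_like(x)
pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
gbest = pbest[pbest_cost.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, 3))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, lo, hi)
    costs = np.array([cost(p) for p in x])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = x[improved], costs[improved]
    gbest = pbest[pbest_cost.argmin()].copy()

H, L, D = gbest
print(f"best candidate: H={H:.0f} nm, L={L:.0f} nm, D={D:.0f} nm, "
      f"V/S={volume_per_area(H, L, D):.0f} nm")
```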
Conclusions
In conclusion, the light absorption properties of the nano-hole array in Papilio ulysses were investigated using the FDTD method. Our results confirmed that the nano-hole structure of Papilio ulysses contributes to the light absorption efficiency. The shape of the hole affects the angular dependence of the absorption, while the parameters H (the depth of the hole), D (the thickness of the hole-wall) and L (the size of the hole) are the key factors affecting the absorption. These three parameters were swept together in numerous simulations. One might assume that a higher absorption requires more material, i.e. a higher value of the "mass per area". However, our studies challenge this assumption, demonstrating that some optimized structures achieve low-cost and highly efficient light absorption. The particle swarm optimization (PSO) algorithm was used for this optimization process. The optimized nano-hole array uses 84% less material than a thin film of equal absorption (90%) at a wavelength of 600 nm, and 59% less material than a thin film of equal absorption (90%) over the wavelength range of 400-800 nm. | 2019-04-08T13:13:09.977Z | 2017-04-24T00:00:00.000 | {
"year": 2017,
"sha1": "4d0b1d52c446a4bb8c98dfb32990df861c3c3f36",
"oa_license": "CCBYNC",
"oa_url": "https://pubs.rsc.org/en/content/articlepdf/2017/ra/c7ra03048g",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "5dfad9bb0eab3cd173a22cd454c47b3f0f2e7b30",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
53593087 | pes2o/s2orc | v3-fos-license | Online-Analysis of Hits in the Belle-II Pixeldetector for Separation of Slow Pions from Background
The impending upgrade of the Belle experiment is expected to increase the generated data set by a factor of 50. For the planned pixel detector, which is the closest to the interaction point, this means that the data rates will increase to up to 28 Gbit/s. Combined with the data generated by the other detectors, this rate is too large to be sent efficiently to offline processing. In order to reduce the data rates, online data reduction schemes, in which background is detected and rejected, are going to be employed. In this paper, an approach for efficient online data reduction for the planned pixel detector of Belle-II is presented. Its central part is the NeuroBayes algorithm, which is based on multivariate analysis. It allows the identification of signal and background by analyzing clusters of hits in the pixel detector on FPGAs. The algorithm leverages the fact that hits of signal particles can have very different characteristics compared to background when passing through the pixel detector. The applicability and performance advantages are shown through the D* decay. In Belle-II, these decays produce pions with such a small transversal momentum that they barely escape the pixel detector itself. In a common approach like the extrapolation of tracks from outer detectors to RoIs, these pions are simply lost, since they do not reach all necessary layers of the detector. However, cluster analysis is able to identify and separate these pions from the background, thus keeping their data. For this, characteristics of the corresponding hits, such as the total amount of charge deposited in the pixels, are used for separation. The capability for effective data reduction is underlined by a background reduction of at least 90% and a signal efficiency of 95% for slow pions. An implementation of the algorithm for the Virtex-6 FPGAs used at the pixel detector was carried out. It is shown that the resulting implementation replicates the efficiency of the software implementation of the algorithm, achieves throughputs that satisfy the hard real-time constraints set by the read-out system of Belle-II, and makes efficient use of the resources present on the FPGA.
Introduction
Envisaged luminosities for SuperKEKB are expected to generate extremely high hit rates in the detectors close to the beam pipe [1]. This is especially true for the pixel detector (PXD) of the Belle-II experiment, which is located closest to the interaction point. The generated data rates are estimated to reach up to 28 Gbit/s. However, the underlying DAQ system cannot transmit the data to offline processing at a sufficient rate. To solve this problem, online data reduction is used close to the PXD. The primary mechanism to reduce data in the pixel detector is based on the extrapolation of hits in the outer detector layers to areas, so-called Regions of Interest (RoI), inside the PXD. Only the data of active pixels in these areas is kept. This way, most of the interesting particle hits, called signal, are kept while less interesting background is suppressed. However, this method leads to the loss of all interesting particles that do not even reach the outer layers. One important process in Belle-II is the decay of a B meson into a D*, an excited D meson. As the B mesons are always produced in pairs, it is useful to do a full reconstruction of the decay products of one B meson, thus fixing the four-momentum of the other B. This greatly simplifies the reconstruction of the B from the so-called signal side. Because of the large branching fraction of this production, it is vital that the D* is correctly reconstructed. However, the D* can decay into so-called slow pions, which earn their name from their very low energy. They have such a low transversal momentum that the outer layers of the detectors are not reached. In this case, the RoI selection suppresses them. That is not acceptable for the Belle-II experiment, since the reconstruction efficiency would be greatly decreased. To solve this issue, an alternative approach is used: a machine learning algorithm, NeuroBayes, is executed online. It is able to predict whether a cluster of hits in the PXD was due to a slow pion or background. This way pions can be saved that would otherwise have been lost. To match the Belle-II data reduction requirements, the algorithm has to be implemented on FPGAs close to the pixel detector. NeuroBayes was designed to run on PCs; it has to be shown that porting the algorithm onto FPGAs can achieve efficient separation of slow pions from background. Additionally, the throughput of the PXD has to be matched to avoid any overflow and loss of important data. Since the available FPGAs in the PXD DAQ are already used for other tasks, the resource demand has to be sufficiently small to allow smooth integration. This paper is organized in the following way. In Section II, a description of the PXD, the RoI selection and an estimation of the envisaged data rates are given. Section III concentrates on the slow pion rescue mechanism; it encompasses the architecture used on the FPGA and an overview of the NeuroBayes algorithm. Results of the implementation are shown in Section IV. A conclusion and outlook are given in Section V.
Related Work
The H1 Level-2 trigger of the HERA accelerator used neural networks to improve the suppression of the background rate and increase the signal efficiency [3]. At first it was deployed on dedicated ASICs; in the following trigger upgrade, FPGAs were used. It showed the FPGAs' capability to host highly parallel and pipelined designs while meeting hard timing requirements [4]. Neural networks were also proposed for the z-vertex trigger of Belle-II [5]. Its goal is to improve background suppression by estimating the z-vertex more accurately. Data from the central drift chamber is used as input for the network to make an estimation. The network is planned to be implemented on an FPGA in order to meet throughput requirements. Both approaches show that machine learning algorithms can be used online to improve background suppression, with hard real-time constraints met by using FPGAs. The NeuroBayes algorithm used for the slow pion rescue is based on neural networks; however, it was originally designed to improve particle identification and uses custom preprocessing algorithms.
DEPFET Pixel Detector DAQ
The PXD of Belle-II comprises the two innermost detector layers and is part of the vertex detector (VXD) [2]. It is built based on DEPFET technology [6]. The DEPFET pixels are arranged in 768x250 matrices located on separate modules called half ladders. These modules are arranged in two cylindrical layers around the interaction point. The charge deposited by particles is digitized by the DEPFET Current Digitizer ASIC (DCD) [7] into 8-bit ADC values and then sent to the Data Handling Processor (DHP) [8]. At the DHP, zero suppression as well as common-mode noise and pedestal fluctuation corrections are performed. The processed data is then passed on to the Data Handling Hybrid (DHH) [9], in which the data of 5 half ladders is concentrated. Additionally, clusters of adjacent hits in the pixel matrices are built. The clusters are then passed on to the Online Selection Nodes (ONSEN) [10]. In the ONSEN, the data is matched with RoIs and passed to offline storage. Tracks for the RoI selection are delivered by the data concentrator (DatCon). The total number of pixels in the PXD is 7.68 million, while a worst-case occupancy of 3% can occur. As a result, the generated data is going to reach about 1 MByte/event. However, the ONSEN has an output data rate of about 100 kByte/event, resulting in a required data reduction of 90%.
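A quick back-of-the-envelope check of these numbers is sketched below; the assumed encoding of roughly 4 bytes per fired pixel is an illustrative assumption, not the actual DHP data format.

```python
n_pixels = 7_680_000          # total PXD pixels
occupancy = 0.03              # worst-case occupancy
bytes_per_hit = 4             # assumption: row/column address plus 8-bit ADC

hits_per_event = n_pixels * occupancy
event_size_mb = hits_per_event * bytes_per_hit / 1e6
reduction = 1 - 0.1 / event_size_mb   # ONSEN output limited to ~0.1 MByte/event

print(f"{hits_per_event:.0f} hits/event -> ~{event_size_mb:.1f} MByte/event, "
      f"required reduction ~{reduction:.0%}")
```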
RoI Selection
Data reduction in the PXD is achieved by the definition of so-called RoIs. They define areas in the PXD that are expected to contain pixels hit by interesting particles. Data is only kept for pixels located inside these areas. RoIs are defined by extrapolating tracks to the PXD. These tracks are constructed using hits in the four outer layers of the VXD, called the silicon vertex detector (SVD). As a result, particles need to have a certain transversal momentum for tracks to be reconstructed, which means that at least three hits have to be present in the SVD. Particles not reaching the outer layers are considered as background. This way, substantial data reduction is achieved.
Slow Pions in the Pixel Detector
The D* decay is important for the correct reconstruction of events in Belle-II. One possible product of this decay are pions, which can have varying transversal momenta, as shown in Fig. 1(a).
Most of the D* decays below 60 MeV/c include pions, making them even more important for reconstruction. These pions are additionally called slow, earning their name from their low momentum. Due to the low momentum, there is a chance that these pions do not reach the outer layers. To investigate this, Fig. 1(b) is presented. It shows the layers of the VXD that are reached by pions with a transversal momentum smaller than 80 MeV/c. For a momentum of 60 MeV/c or less, the third layer of the SVD is often not reached, and with decreasing momentum even fewer pions reach three layers. This is important since particles need to reach at least three SVD layers for the RoI mechanism. As a result, a slow pion rescue mechanism is needed to avoid loss of data.
Slow Pion Rescue in the PXD DAQ
Considering the PXD DAQ system [13], only two viable options exist for executing the slow pion rescue: the DHH and the ONSEN. Both provide FPGAs for hosting the rescue mechanism and are still close enough to the PXD. However, the ONSEN is already tasked with matching pixel clusters to the calculated RoIs; an implementation there could lead to difficult integration, since most of the resources are already in use. The DHH, on the other hand, has around 50% of its CLBs and 30% of its DSPs available. For this reason it was chosen to host the slow pion rescue. An overview of the resulting system is depicted in Fig. 2.
Slow Pion Identification
Slow pions can be identified by using data from the pixel clusters they pass through. The available data consists of the ADC values of all pixels in a cluster, the layer, and the positions of the active pixels in a half ladder. The most indicative characteristic is the digitized charge. Distributions of the charge deposited by particles in pixels of the PXD are depicted in Fig. 3 for different momenta. Here four classes of particles are shown, the most important ones being pions and electrons, the latter being seen as background. The occurrence of pions is indicated by the red arrow with the pion label. The area below the red horizontal line, at a cluster seed charge of about 50, represents the occurrence of electrons. It can be observed that pions typically deposit much more charge than electrons. Consequently, slow pions can be separated from other particles by introducing a cut-off on the charge read out from the pixels. This method was applied to simulation with the help of basf2; the result was that by introducing this threshold about 50% of the simulated pions could be separated correctly from the background.
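A minimal sketch of such a cut-based separation is shown below; the threshold value of 50 (in cluster seed-charge units) follows the red line mentioned above, while the representation of a cluster as a list of pixel ADC values is an assumption made for illustration.

```python
def is_slow_pion_candidate(cluster_adc, seed_threshold=50):
    """Cut-based separation: keep a cluster if its seed (maximum) pixel
    charge exceeds the threshold, since pions deposit more charge than
    the electrons that make up the background."""
    return max(cluster_adc) > seed_threshold

clusters = [[12, 48, 23], [80, 110, 64], [30, 41]]   # toy ADC values per cluster
kept = [c for c in clusters if is_slow_pion_candidate(c)]
print(f"kept {len(kept)} of {len(clusters)} clusters")
```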
NeuroBayes Machine Learning Algorithm
Motivated by the possibility of separating hits caused by slow pions from background, seen in Section 4.1, more advanced algorithms can be used to achieve higher signal efficiencies. One such algorithm is NeuroBayes [11], which is based on multivariate analysis and was developed for use as a scientific tool in high energy physics. The general flow of usage of NeuroBayes is shown in Fig. 4. It consists of two main parts: the Teacher and the Expert. The Teacher is used for generating a prediction model, called the expertise. This model is used for predicting a class for a given set of input data. The data used by the Teacher is typically historic, either taken from a real scenario or from simulation, and training is conducted in a supervised manner. The model is then used by the NeuroBayes Expert. The Expert's task is to predict the correct class for the data given at the input. Its output is a probability density function representing the algorithm's confidence in its classification decision; in our case it is the probability of an analyzed particle cluster being a slow pion. This probability is then typically mapped onto a binary value representing the classification decision.
In the slow pion rescue, only the Expert is going to be executed on FPGAs; the Teacher is used offline beforehand. Here the historic data corresponds to simulated clusters of hits in the PXD, generated with the help of the basf2 simulation framework [12], while the current data are the pixel clusters passed from the PXD to the Expert.
Architecture on FPGAs
The architecture of the slow pion rescue is depicted in Fig. 5. It consists of three major components: the protocol handling, the feature extraction and the NeuroBayes Expert algorithm. All parts are connected with each other in a pipelined way, which results from the required throughput of about one processed cluster per clock cycle.
Protocol Handling
The protocol handling's task is straightforward, as it decodes data packets of pixel data produced by the clustering on the DHH. Its main purpose is to decouple both mechanisms, as the same packets can be passed on to the ONSEN without necessarily being processed by the slow pion rescue. This allowed for easier integration into the DHH.
Feature Extraction
The feature extraction transforms data from the PXD into a more suitable representation before it can be used by the NeuroBayes Expert. For optimal performance of the algorithm, so-called features are defined. These are separate data streams, which are preprocessed independently. Each feature has a distinct impact on the prediction made by the algorithm; however, features can still be correlated with each other in some way. For the slow pion rescue, 8 features have been found to be reasonable. These are computed by the feature extraction before being passed on to the Expert. They are listed in the following, and a sketch of their computation is given after the list: • Sum of all pixel charges in a cluster • Maximum pixel charge of a cluster • Minimum pixel charge of a cluster • Layer of the PXD containing the cluster • Length of cluster in z-direction • Length of cluster in r-φ-direction • Total length of cluster • Number of pixels in a cluster
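The following sketch computes these eight features from a cluster, assuming a cluster is given as a list of (row, column, ADC) triples plus the PXD layer number; the mapping of rows/columns to the z and r-φ directions and the definition of the total length as the sum of the two projections are assumptions made for illustration.

```python
def extract_features(cluster, layer):
    """cluster: list of (row, col, adc) triples of one PXD cluster.
    Rows are assumed to run along z and columns along r-phi."""
    adcs = [adc for _, _, adc in cluster]
    rows = [r for r, _, _ in cluster]
    cols = [c for _, c, _ in cluster]
    size_z = max(rows) - min(rows) + 1
    size_rphi = max(cols) - min(cols) + 1
    return {
        "charge_sum": sum(adcs),
        "charge_max": max(adcs),
        "charge_min": min(adcs),
        "layer": layer,
        "size_z": size_z,
        "size_rphi": size_rphi,
        "size_total": size_z + size_rphi,   # assumption: sum of projections
        "n_pixels": len(cluster),
    }

print(extract_features([(10, 4, 90), (10, 5, 120), (11, 5, 35)], layer=1))
```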
NeuroBayes Expert
The NeuroBayes Expert operates on multiple parallel input data streams and can be partitioned into pipelined processing steps. Each input data stream corresponds to one of the predefined features. The first processing step of the Expert is called binning, and it is performed separately and in parallel for each input stream. The main component of this step is the bin, which is an interval in the range of possible values a single feature can assume. Bins do not overlap and are bounded by an upper and a lower limit. Binning assigns the current value of an input stream to a bin by checking whether it is within the bin's predefined interval. The assigned bin is then mapped onto a weight, which essentially represents the influence of the selected bin on the prediction. If the value of a feature falls between two bins, the weight is interpolated. After the preprocessing, all calculated weights are multiplied with a predefined vector. Each entry in the vector contains a value that represents the importance of one feature compared to the others; this accounts for some features having more significance for the prediction than others. The result of this multiplication is the probability density function. The last step in the processing of the Expert is the cut. Here the computed value of the probability density function is compared to a predefined threshold. If the value is above the threshold, a 1 is returned, indicating that this cluster was probably produced by a slow pion; otherwise a 0 is returned for background. The signal efficiency and background rejection rates vary with the selected threshold value. As this algorithm was originally written in FORTRAN and developed for use on PCs, all of the mentioned processing components were implemented by hand in VHDL. Fortunately, most of the processing steps can be broken down into simple arithmetic, i.e. additions and multiplications.
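A compact software model of this processing chain is sketched below. The bin edges, bin weights and importance vector would come from the expertise produced by the Teacher; the values used here are placeholders, and the linear interpolation between neighbouring bin weights is a simplifying assumption about the exact interpolation scheme.

```python
import numpy as np

def expert_score(features, bin_edges, bin_weights, importance, threshold=0.5):
    """Simplified NeuroBayes-Expert forward pass for one cluster.

    features    : 1D array, one value per feature (output of feature extraction)
    bin_edges   : list of 1D arrays, bin edges per feature (from the expertise)
    bin_weights : list of 1D arrays, one weight per bin per feature
    importance  : 1D array weighting the features against each other
    """
    weights = []
    for x, edges, w in zip(features, bin_edges, bin_weights):
        centers = 0.5 * (edges[:-1] + edges[1:])
        # interpolate the bin weight at the feature value (assumption: linear)
        weights.append(np.interp(x, centers, w))
    score = float(np.dot(weights, importance))        # probability-like output
    return score, int(score > threshold)              # 1 = slow pion, 0 = background

# placeholder expertise for two features (charge sum, cluster size)
edges = [np.array([0, 50, 100, 200, 400]), np.array([0, 2, 4, 8])]
w = [np.array([0.1, 0.3, 0.7, 0.9]), np.array([0.2, 0.5, 0.8])]
imp = np.array([0.7, 0.3])
print(expert_score(np.array([180.0, 3.0]), edges, w, imp))
```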
Performance and Resource Demand
The slow pion rescue was implemented on a Virtex-6 VLX75T, which is used at the DHH [14]. Due to the pipelined architecture, the throughput corresponds directly to the achievable clock frequency. Making use of the FPGA's capability to cascade inputs and outputs of DSPs, high clock frequencies can be achieved.
Identification Efficiency
The capability of the algorithm to identify the desired particles is measured with two metrics: signal efficiency and background rejection. In this case, the signal efficiency represents the algorithm's ability to correctly identify pions from a given cluster of hits, while the background rejection represents the algorithm's ability to correctly identify as background a cluster of hits that was not produced by a pion. Fig. 6 shows the signal efficiency on the x-axis against the achieved background rejection on the y-axis. Overall, 5 classes of pions with different momenta are depicted with different colored lines. The momentum range is from 15 to 65 MeV/c; these pions are expected not to reach the outer layers. It can be seen that an overall background rejection of at least 88% can be achieved when the highest signal efficiency is selected. The implementation behaves best for momenta between 25 and 55 MeV/c. For the targeted background rejection rate of 90%, a signal efficiency of at least 95% is achieved.
Conclusion and Outlook
In this paper we showed that using the NeuroBayes algorithm on FPGAs is a suitable solution for identifying certain particle types using the data from the Belle-II PXD. Using simulation data generated with the help of basf2, it was shown that the implementation can achieve a signal efficiency of at least 95% with a background rejection of 90%. Both are sufficient to match the data reduction requirements set by the Belle-II experiment. Additionally, the implementation is suitable for integration into Belle-II: the resource demand is small enough to allow for integration with the other components used on the DHH. To achieve the strict throughput requirements set by the DAQ, a pipelined architecture is used; it not only fulfils the requirements but can reach even higher throughputs than demanded. Future work is going to focus on further increasing the implementation's signal efficiency and on allowing easy adaptation in case of changes in the PXD. | 2018-11-13T10:31:01.090Z | 2015-12-23T00:00:00.000 | {
"year": 2015,
"sha1": "e57b50257a8801771d1c4021db17119c98172dfd",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/664/9/092001",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "7b9ea83b19476ecb2a106a8f8b2f850b6ddd7b2e",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
270424874 | pes2o/s2orc | v3-fos-license | The effect of Air Gap Width on the Focal Properties of Objective Snorkel Lens
The current research studied the effect of the air-gap width on the objective properties of the magnetic Snorkel lens used in the scanning electron microscope. The structure of the lens was designed, its properties were studied, and the axial magnetic flux density distribution (Bz) was calculated using the electron optical design program (EOD). The results indicated that the maximum value of the magnetic flux density (Bmax) increases with a decrease in the width of the air gap (S), accompanied by a decrease in the half-width and in the spherical and chromatic aberrations, and an increase in the resolution.
Introduction
Charged particle optics has become an important part of modern physical research, and calculations for such devices have become essential and contribute directly to the development of electron-optical instruments. Developments in the field of electronic computers have greatly facilitated the design of electron lenses, which previously encountered many problems and difficulties. The flux or flux density distribution must be known in order to explore the optical properties of any proposed electron lens [1]. As a result, computer simulation plays a key role in the design and improvement of charged particle optical systems, making it easier to estimate the performance of these systems before they are manufactured, which saves a lot of money and time [2]. One of the problems facing the search for geometric designs of electron lenses with very small aberration coefficients is the dependence of the lens properties on a large number of geometric and physical variables. The process of achieving the best lens design with the lowest aberration and the highest resolution is called optimization [3]. One of the optimization methods is the analysis method. It is a traditional method for designing electron lenses and is based on the principle of trial and error: the designer begins by developing an initial design for the required lens and then improves this design by changing the geometric parameters, such as the shape and size of the excitation coils, the electric and magnetic polepieces, and the iron circuit, until the optimal design is reached, which is the design that gives the lowest values of aberration [4]. The monopolar lens design is a development in the field of electromagnetic lenses. Mulvey in 1972 proposed the first monopolar lens by dividing the bipolar lens into two equal halves, producing two lenses each with one pole [5]. The monopolar lens called the Snorkel lens was also proposed for the first time by Mulvey. This objective lens has many advantages in low-voltage scanning electron microscopy; its pole protrudes outside its structure, and thanks to this feature the sample can be handled more freely than in bipolar lenses [6]. Extensive studies have been conducted to improve the geometry and dimensions of electron magnetic lenses and to choose the optimal design that gives the lowest values of aberration. Al-Shahat showed the extent of the effect of changing the geometric dimensions of the lens on the performance of the saturated unipolar magnetic lens by comparing three lenses that have different but proportional dimensions, and he concluded that the geometric dimensions have a very significant effect on increasing the maximum value of the magnetic field distribution and the field distribution bandwidth, as well as on reducing the coefficients of spherical and chromatic aberration and the focal length; the lens of small size was found to give the best performance [7]. Al-Shamma presented a study of two types of lenses, comparing the pinhole lens and the Snorkel lens by using the Electron Optical Design (EOD) simulation program, which works by the finite element method; after changing the air gap and the diameter of the axial aperture of the pole of the two lenses, the Snorkel lens achieved the best optical performance and also the lowest values of the spherical and chromatic aberration coefficients at certain operating distances [8].
In [9], the effect of the diameter of the axial aperture (D), the width of the air gap (S) between the two polepieces, the thickness of the polepieces (t), and the excitation (NI) of a designed magnetic lens was studied to obtain the best optical properties, represented by the focal length fo and the spherical (CS) and chromatic (CC) aberration coefficients. It was found that the properties improve significantly when the diameter of the axial aperture and the width of the air gap are reduced, and that the best lens design with the best optical properties was achieved at an air gap of S = 2 mm, an axial aperture diameter of D = 6 mm, and a polepiece face thickness of t = 3 mm [9]. In [10], the evaluation of each design included calculating the axial magnetic field, the lens magnetization, and the flux density using the finite element method (FEM) for three distinct current density values (2, 4 and 6 A/mm2); the clearest results and behavior are obtained at a current density of 2 A/mm2, and the maximum magnetization characteristics, maximum flux values, and minimum bandwidths of the axial magnetic field are obtained when the pole face length is 1 mm [10].
Objective Lens
The objective lens is one of the most important components of the electron microscope because of its great influence on the resolution of the microscope. In addition to reducing the size of the probe, it focuses the probe onto the surface of the sample to be examined. This focusing is carried out by means of the air gap present in the magnetic circuit near the optical axis and by means of the axial magnetic field generated in the region confined between the two iron polepieces when an electric current passes through the coil [11]. The ability of electromagnetic lenses to concentrate the beam into a precise, symmetric probe is mainly limited by defects called lens aberrations. Common aberrations include spherical aberration (CS) and chromatic aberration (CC); the aberration coefficients should be as small as possible to achieve high resolution [12]. The spherical aberration coefficient (CS) of a magnetic lens with axial flux density Bz can be calculated from the relationship given in [13-15], where Vr is the relativistically corrected accelerating voltage of the electrons, Rα(z) is a solution of the paraxial ray equation, and α is the aperture semi-angle. The chromatic aberration coefficient (CC) of a magnetic lens can be found from the formula given in [15]. The resolution was calculated from the relationship in [16], where λ is the electron wavelength, given by the de Broglie relation in [17].
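The equations themselves are not reproduced above. As a point of reference, the electron wavelength entering the resolution estimate can be evaluated from the standard de Broglie expression λ = h/√(2 m₀ e Vr) ≈ 1.226/√Vr nm (Vr in volts), which is the form usually quoted in electron-optics calculations. The short sketch below evaluates it for the accelerating voltage used later in this paper; it is an illustrative calculation, not a reproduction of the authors' formula [17].

```python
import math

def electron_wavelength_nm(Vr):
    """de Broglie wavelength of an electron, lambda = h / sqrt(2 m0 e Vr),
    with Vr the (relativistically corrected) accelerating voltage in volts."""
    h = 6.626e-34       # Planck constant (J s)
    m0 = 9.109e-31      # electron rest mass (kg)
    e = 1.602e-19       # elementary charge (C)
    return h / math.sqrt(2 * m0 * e * Vr) * 1e9

# Vr = 12 kV is the value used for the resolution plots in this paper
print(f"lambda ~ {electron_wavelength_nm(12e3) * 1000:.1f} pm")
```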
Design of the Magnetic Objective Lens
The research aims to find the best air-gap width for the magnetic Snorkel lens. In this work a new Snorkel lens was designed and studied using EOD, as shown in Fig. 1, with the following dimensions and geometric parameters: axial length 110 mm, radial width 144 mm, a rectangular coil with a cross-sectional area of 1000 mm2, air-gap width S = 6 mm, polepiece opening diameter DP = 4 mm, polepiece face thickness t = 5 mm, and an axial gap in the form of a cone of length 113 mm. In order to characterize the performance of the proposed objective lens, the axial flux-density distribution and the objective properties of the lens were calculated using the EOD program, version 3.069, developed by Lencová and Zlámal [18], which is based on the finite element method.
Effect of the Air Gap on the Snorkel Objective Lens Properties:
The corresponding results are given in Table (1). The results of the two figures show that the lens with a gap of S = 3 mm achieves the best resolution.
Conclusion
The width of the air gap (S) has a clear effect on the objective properties of a Snorkel lens. Through the analyses and calculations, performed with the EOD program, it was found that the spherical and chromatic aberration coefficients decrease and the resolution improves as the width of the air gap decreases; the lens with a gap width of S = 3 mm achieved the best objective properties among the values considered in this work. This lens can be used as an objective lens to focus the electron beam in a scanning electron microscope.
Fig. 1: Geometric dimensions of the Snorkel objective lens in two- and three-dimensional views.
In order to study the effect of the air-gap width (S) on the optical performance of the Snorkel lens, different values of the gap width [S = 3, 6, 9, 12, 15 mm] were chosen, while keeping the other geometric parameters of the lens constant. Figure 2 shows the distribution of the axial flux density (Bz) as a function of the axial distance (Z) for different values of the air-gap width under constant excitation (NI = 2000 A-t). It was found that the maximum value of the magnetic flux density (Bmax) increases with decreasing air-gap width and is accompanied by a decrease in the half-width (H.W.), as shown in Figure 3.
Fig. 2: Distribution of axial magnetic flux intensity (Bz) as a function of distance (Z) of the Snorkel lens for different values of air gap width (S) at constant excitation (NI=2000 A-t).
Fig. 3: The change of maximum magnetic flux density (Bmax) and half-width (H.W) as a function of the air gap width (S) of the lens.
Figure (4) shows the paths of the magnetic flux lines of the lens (SL) at constant excitation (NI = 2000 A-t). It was found that the smaller the width of the air gap (S), the more these lines converge and concentrate between the pole and the iron arm, and that the lens with a gap width of S = 3 mm has more uniform flux lines than the lenses with larger gaps. The effect of changing the gap width on the objective properties of the lens was also studied, where the spherical and chromatic aberration coefficients were calculated. Figures (5) and (6) show the change in the spherical (CS) and chromatic (CC) aberration coefficients as a function of the relativistically corrected acceleration voltage at constant excitation (NI = 2000 A-t). It was found that when the width of the air gap is reduced, the optical qualities of the designed lens improve, and the lens with a width of S = 3 mm exhibits the lowest aberration values.
Fig. 4: The trajectories of the lens magnetic field lines for different values of air gap width (S).
Fig. 5: Variation of the spherical aberration coefficient (CS) as a function of the relativistically corrected acceleration voltage (Vr). Fig. 6: Variation of the chromatic aberration coefficient (CC) as a function of the relativistically corrected acceleration voltage (Vr).
Fig. 7: Variation of the analysis resolution (δ) as a function of the air gap width (S) of the lens at fixed values for each of the excitation and acceleration voltages (Vr=12kV, NI=2000 A-t). | 2024-06-13T15:13:51.590Z | 2024-04-25T00:00:00.000 | {
"year": 2024,
"sha1": "5fbd543b3eeba531be8d978cb2e0c3d3395bd5a3",
"oa_license": null,
"oa_url": "https://doi.org/10.25130/tjps.v29i2.1505",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "ce830c4d914c04deceb5a880e18d7ac76dd2345d",
"s2fieldsofstudy": [
"Physics",
"Engineering"
],
"extfieldsofstudy": []
} |
207853080 | pes2o/s2orc | v3-fos-license | IMNet: A Learning Based Detector for Index Modulation Aided MIMO-OFDM Systems
Index modulation (IM) brings a reduction of the power consumption and complexity of the transmitter to classical multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) systems. However, due to the introduction of IM, the complexity of the detector at the receiver is greatly increased. Furthermore, the detector also requires the channel state information at the receiver, which leads to high system overhead. To tackle these challenges, in this paper we introduce deep learning (DL) in designing a non-iterative detector. Specifically, based on the structural sparsity of the transmitted signal in IM aided MIMO-OFDM systems, we first formulate the detection process as a sparse reconstruction problem. Then, a DL based detector called IMNet, which combines two subnets with the traditional least square method, is designed to recover the transmitted signal. To the best of our knowledge, this is the first attempt to design a DL based detector for IM aided systems. Finally, to verify the adaptability and robustness of IMNet, simulations are carried out with consideration of correlated MIMO channels. The simulation results demonstrate that the proposed IMNet outperforms existing algorithms in terms of bit error rate and computational complexity under various scenarios.
I. INTRODUCTION
To meet the high spectrum efficiency (SE) and energy efficiency requirements of next generation wireless communication systems, index modulation (IM) [1] has attracted extensive attention and research in recent years. IM is a general term for a series of innovative modulation technologies, which convey information bits not only on the modulated symbols but also on the on-off status of some resource blocks (such as antennas, subcarriers and time slots) [2]-[4]. Since some of the resource blocks are inactive (off), IM can reduce the energy consumption and complexity of the transmitter at the expense of an acceptable degradation of SE.
In view of the above characteristics, IM has been widely applied to existing wireless communication systems in different ways. Among them, generalized spatial modulation (GSM) [5] and orthogonal frequency division multiplexing with IM (OFDM-IM) [3] are two typical cases in the IM family, which transfer part of the information bits on the indices of the active transmit antennas (TAs) and subcarriers, respectively. To further improve SE, [6] combines OFDM-IM with multiple-input multiple-output (MIMO) systems, while [7] and [8] apply IM across multiple domains (such as the space, frequency and time domains). Compared with classical MIMO-OFDM systems, the above IM aided systems can improve the bit error rate (BER) performance. Meanwhile, they can reduce inter-channel interference, peak-to-average power ratio and power consumption and relax inter-antenna synchronization requirements.
It is worth noting that the traditional detectors in classical MIMO-OFDM systems cannot be directly applied to IM aided MIMO-OFDM (IM-MIMO-OFDM) systems. Since part of the information bits are conveyed on the indices of the active TAs and subcarriers, not only the modulated symbols but also the indices of the active TAs and subcarriers need to be detected, which greatly increases the complexity of the detector. The maximum likelihood based detector (MLD) has been analyzed in [9] and [10], but its complexity increases exponentially with the numbers of TAs and subcarriers and the signal constellation size. To further reduce the detection complexity, several low complexity detectors have been proposed, such as the simple minimum mean square error (MMSE) detector, the matched filtering (MF) detector, the log-likelihood ratio (LLR) detector and the signal vector based list (SVBL) detector. However, compared to the MLD, these low complexity detectors suffer from a significant error performance degradation. Particularly, all the aforementioned methods depend heavily on the channel state information at the receiver (CSIR), which requires high system overhead in practical communication scenarios.
To be practical, imperfect CSIR should be considered in the design of detectors. For carrying out the detection at the receiver with imperfect CSIR, deep learning (DL), which has powerful feature extraction and generalization abilities, can be introduced in the detector design. With these abilities, DL based detectors can realize robust detection in MIMO systems [11], [12]. Apart from this, compared with the traditional detectors, DL based detectors are non-iterative, which leads to a great reduction in computational complexity [13].
In this paper, we consider the design of a low complexity and robust detector for IM-MIMO-OFDM systems. The information bits in IM-MIMO-OFDM systems are conveyed on three parts: the indices of the active TAs, the indices of the active subcarriers and the modulated symbols, which are also the three parts that need to be detected at the receiver. In this case, the traditional detectors face high complexity. However, the transmitted signal in IM-MIMO-OFDM systems actually has the property of structural sparsity. Based on this property, the detection process at the receiver is formulated as a sparse reconstruction problem. The formulated sparse reconstruction problem can be solved through the following two steps. The first step is estimating the indices of the non-zero elements in the transmitted signal vector from the received signal vector. The second step is estimating the values of the non-zero elements with the prior knowledge of their indices. Considering the imperfect CSIR and the computational complexity, a DL based non-iterative detector called IMNet is proposed to realize the above sparse reconstruction process. The proposed IMNet consists of two subnets (i.e., the antenna detection (AD) subnet and the signal denoising (SD) subnet) as well as a least square (LS) detector. The task of the AD subnet is to realize the first step of the sparse reconstruction process, and the second step is accomplished by combining the AD subnet with the LS detector, which is called DL based matching pursuit (DLBMP). For further improving the performance of IMNet, the SD subnet is introduced to remove the noise effects.
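To make the two-step reconstruction concrete, the sketch below shows the second step in isolation: once a support (the indices of the active TAs) has been estimated for one subcarrier, the non-zero values are recovered with an LS fit restricted to that support. The AD subnet that would produce the support estimate is replaced here by a given index set, and the channel, noise level and symbol values are illustrative assumptions.

```python
import numpy as np

def ls_on_support(y, H, support):
    """Estimate the non-zero entries of a sparse transmit vector.

    y       : received vector (Nr,)
    H       : channel matrix (Nr, Nt)
    support : indices of the active TAs (e.g. from the AD subnet)
    """
    Hs = H[:, support]                              # columns of the active TAs
    x_active, *_ = np.linalg.lstsq(Hs, y, rcond=None)
    x_hat = np.zeros(H.shape[1], dtype=complex)
    x_hat[support] = x_active
    return x_hat

rng = np.random.default_rng(1)
Nr, Nt = 4, 4
H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)
x = np.zeros(Nt, dtype=complex)
x[[0, 2]] = [1 + 1j, -1 + 1j]                       # two active TAs (QPSK-like symbols)
y = H @ x + 0.05 * (rng.standard_normal(Nr) + 1j * rng.standard_normal(Nr))
print(np.round(ls_on_support(y, H, [0, 2]), 2))
```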
To verify the adaptability and robustness of the proposed IMNet, we perform simulations with consideration of two channel models (i.e., the Rayleigh fading MIMO channel and the correlated MIMO channel) and two CSIR conditions (i.e., the perfect CSIR and the imperfect CSIR). The simulation results demonstrate that the IMNet has a better BER performance and robustness than the traditional algorithms in different scenarios, and the computational complexity is much smaller than the traditional algorithms.
The rest of this paper is organized as follows. The system model and problem formulation are given in Section II. Section III details the architecture of the proposed IMNet. The simulation results are presented and discussed in Section IV. Finally, conclusions are drawn in Section V.
II. SYSTEM MODEL AND PROBLEM FORMULATION
In this section, we first demonstrate the system model of the IM-MIMO-OFDM systems in Section II-A. Then, the sparse reconstruction problem is formulated in Section II-B.
A. System Model
We consider an IM-MIMO-OFDM system equipped with Nt TAs, Nr receive antennas (RAs), and Nf subcarriers, whose block diagram is depicted in Fig. 1. At the transmitter, IM is applied to both the space and frequency domains, which means the information bits are conveyed not only over the modulated symbols but also over the indices of the active TAs and subcarriers. At the receiver, these transmitted information bits can be recovered through the proposed IMNet.
As shown in Fig. 1, each IM-MIMO-OFDM frame comprises a total of C + KD incoming data bits. The first C = ⌊log2(Nt choose K)⌋ bits are used to select K active antennas from the Nt TAs for transmitting.1 In this way, K signal links are generated while the remaining Nt − K TAs stay off. The remaining KD bits are equally split into K blocks and processed separately on the K signal links. Unlike classical OFDM systems, which map all data bits to constellation points for all subcarriers, the D bits on each signal link are divided into two parts. The first part, with d1 = ⌊log2(Nf choose F)⌋ bits, is used to select F active subcarriers from all Nf subcarriers to convey information, while the remaining Nf − F subcarriers are set to be idle.2 The second part, with d2 = F log2 M bits, is used to select F symbols from the signal constellation S with |S| = M for the F active subcarriers. An OFDM-IM block is then generated. After interleaving, the inverse fast Fourier transform (IFFT), cyclic prefix (CP) insertion, and parallel-to-serial conversion, the K OFDM-IM blocks are transmitted through the K active TAs.
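To make the frame structure concrete, the following minimal Python sketch computes the index-bit and symbol-bit budget defined above; the parameter values in the example are illustrative, not taken from the paper.

```python
# Bit budget of one IM-MIMO-OFDM frame: C antenna-index bits, and D = d1 + d2
# bits per signal link (d1 subcarrier-index bits, d2 constellation bits).
from math import comb, floor, log2

def bit_allocation(Nt, K, Nf, F, M):
    C = floor(log2(comb(Nt, K)))       # bits selecting K active TAs out of Nt
    d1 = floor(log2(comb(Nf, F)))      # bits selecting F active subcarriers out of Nf
    d2 = F * int(log2(M))              # bits mapped to M-ary symbols on F subcarriers
    return C, d1, d2, C + K * (d1 + d2)  # total bits per frame: C + K*D

# Illustrative example: 4 TAs (2 active), 16 subcarriers (12 active), QPSK
print(bit_allocation(Nt=4, K=2, Nf=16, F=12, M=4))  # -> (2, 10, 24, 70)
```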
After passing through the MIMO channel, the CP is removed, and the fast Fourier transform (FFT) and deinterleaving are performed at each RA to obtain the received signal in the frequency domain. The received signal at the i-th subcarrier over all RAs can be represented as

y_i = H_i x_i + w_i, (1)

where y_i ∈ C^(Nr×1) denotes the received signal vector at the i-th subcarrier, x_i ∈ C^(Nt×1) denotes the transmitted signal vector at the i-th subcarrier, H_i ∈ C^(Nr×Nt) denotes the channel frequency response (CFR) between the TAs and RAs at the i-th subcarrier, and w_i ∈ C^(Nr×1) represents the additive white Gaussian noise (AWGN) vector with zero mean and unit variance at the i-th subcarrier.
1 (N choose K) and ⌊·⌋ denote the binomial coefficient and the floor function, respectively. The mapping between the C bits and the TA combination patterns can be implemented by using a look-up table [5].
2 The mapping between the d1 bits and the subcarrier combination patterns can be implemented by using a look-up table or the combinatorial method [3].
Let φ_a, φ_f, and X_s respectively denote the index set of the active TAs, the index set of the active subcarriers, and the transmitted signal matrix, where X_s = [x_1, x_2, ⋯, x_Nf]. The MLD [9] based on Eq. (1) can be formulated as

(φ̂_a, φ̂_f, X̂_s) = arg min over (φ_a, φ_f, X_s) of Σ_{i=1}^{Nf} ||y_i − H_i x_i||², (2)

where the minimization runs over all legal combinations of active TAs, active subcarriers, and modulated symbols. It should be noted that the MLD entails an exhaustive search of the space defined by Eq. (2). Therefore, this method incurs prohibitive complexity for the large numbers of transmit antennas and subcarriers and the large signal constellations of practical MIMO-OFDM systems.
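A quick count makes the "prohibitive complexity" claim tangible. The sketch below, assuming look-up-table index mapping (so only 2^C antenna patterns and 2^d1 subcarrier patterns per link are legal), estimates the number of candidates the exhaustive search in Eq. (2) must visit:

```python
# Size of the exhaustive ML search space: 2^C antenna patterns, and for each
# of the K links 2^d1 subcarrier patterns times M^F symbol combinations.
from math import comb, floor, log2

def ml_candidates(Nt, K, Nf, F, M):
    C = floor(log2(comb(Nt, K)))
    d1 = floor(log2(comb(Nf, F)))
    return 2**C * (2**d1 * M**F) ** K

print(f"{ml_candidates(Nt=4, K=2, Nf=16, F=12, M=4):.2e}")  # ~1e21 candidates
```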
B. Problem Formulation
Since only K TAs and F subcarriers are active for transmission, the transmitted signal vector x_i is sparse for all subcarriers, as shown in Fig. 2. By utilizing the sparsity of x_i, the detection process can be considered as a sparse reconstruction problem. In Fig. 2, colorless squares denote the inactive resource blocks in the space domain, which means these antennas are inactive. The crosshatched squares in the active space domain represent the inactive resource blocks in the frequency domain, which means such subcarriers are inactive. The squares with pure color denote the active resource blocks that truly convey the modulated symbols.
In this sparse reconstruction problem, x_i is the sparse vector that needs to be recovered, y_i is the measurement vector, and H_i is the measurement matrix. Based on Eq. (1), the sparse reconstruction of x_i can be formulated as the following optimization problem under AWGN:

min ||x_i||_0  subject to  ||y_i − H_i x_i||_2 ≤ ε, (3)

where ε is a predetermined noise level of the system. Many algorithms can solve such a problem, but they need the measurement matrix H_i to realize the sparse reconstruction, which cannot be accurately known in practical wireless communication scenarios. In addition, by solving the problem in Eq. (3), only one transmitted signal vector can be reconstructed at a time, which is inefficient.
To further improve performance and reduce complexity, we assume that the inactive subcarriers on each active signal link convey a special symbol x*, shown as crosshatched squares in Fig. 2. After this operation, the transmitted signal vectors x_i (1 ≤ i ≤ Nf) satisfy

supp(x_1) = supp(x_2) = ⋯ = supp(x_Nf), (4)

i.e., all the transmitted signal vectors share the same support3. Meanwhile, since there are K active antennas, all the transmitted signal vectors are K-sparse, i.e., each transmitted signal vector contains K non-zero elements (including x*):

||x_i||_0 = K, 1 ≤ i ≤ Nf. (5)

Based on these two observations, the signal matrix X_s possesses the property of structural sparsity, and the detection process of IM-MIMO-OFDM systems can be further formulated as a sparse reconstruction from multiple measurement vectors (MMV). For X_s, the MMV refers to the multiple received signal vectors y_i (1 ≤ i ≤ Nf), collected as Y_s = [y_1, y_2, ⋯, y_Nf]. The sparse reconstruction of X_s can then be formulated as the following optimization problem under AWGN:

min ||X_s||_row-0  subject to  Σ_{i=1}^{Nf} ||y_i − H_i x_i||_2² ≤ ε, (6)

where ||X_s||_row-0 denotes the number of non-zero rows of X_s.
By solving the problem in Eq. (6), the transmitted signal vectors at all subcarriers can be jointly recovered efficiently, compared with solving the problem in Eq. (3) separately. The problem can be solved in two steps. The first step is estimating the indices of the non-zero rows in X_s from Y_s, which amounts to detecting the indices of the active antennas. The second step is estimating the values of the elements in the non-zero rows with the prior knowledge of the indices obtained in the first step; its purpose is to jointly estimate the indices of the active subcarriers and the modulated symbols.
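As a point of reference, the two steps can also be carried out by a classical, CSIR-dependent baseline: score the TAs by matched filtering accumulated over all subcarriers, then solve least squares on the selected columns. The sketch below illustrates this baseline (it is not the paper's detector, whose first step is instead performed by the AD subnet):

```python
import numpy as np

def two_step_mmv(Y, H, K):
    """Y: (Nr, Nf) received matrix; H: (Nf, Nr, Nt) per-subcarrier CFRs."""
    Nf, Nr, Nt = H.shape
    # Step 1: estimate the shared row support of X_s by accumulating
    # matched-filter energy over all Nf measurement vectors.
    scores = np.zeros(Nt)
    for i in range(Nf):
        scores += np.abs(H[i].conj().T @ Y[:, i]) ** 2
    support = np.sort(np.argsort(scores)[-K:])          # K most likely TAs
    # Step 2: per-subcarrier least squares restricted to the support.
    X = np.zeros((Nt, Nf), dtype=complex)
    for i in range(Nf):
        X[support, i] = np.linalg.pinv(H[i][:, support]) @ Y[:, i]
    return support, X
```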
Considering the practical situation that H_i cannot be accurately known at the receiver and that traditional algorithms are iterative with high complexity, we introduce DL to realize the two steps with imperfect CSIR and finally recover the transmitted signal, as detailed in the next section.
III. PROPOSED IMNET
In this section, a DL based detector called IMNet is designed to recover the transmitted signal of IM-MIMO-OFDM systems, which can significantly decrease the receiver complexity. The architecture of the IMNet is elaborated in Section III-A. The details of the AD subnet and its combination with the LS detector are given in Section III-B. The SD subnet is introduced in Section III-C. Section III-D gives the training specification.
A. IMNet Architecture
As shown in Fig. 1, the IMNet consists of two convolutional neural networks (CNNs), called the antenna detection (AD) subnet and the signal denoising (SD) subnet, respectively. The task of the AD subnet is to predict the activation probability of each TA, while the task of the SD subnet is to further improve the precision of the signal recovered by the LS detector. The working procedure of the IMNet can be divided into two stages. In the first stage, the AD subnet predicts the activation probability of each TA to get the indices of the activated TAs; then, based on the obtained indices and the CSIR, the LS detector is used to get the initial estimate of the transmitted signal. In the second stage, in order to refine the initial estimate, the SD subnet treats the initial estimate as an image and removes the noise effects.
After the above two stages, the indices of the activated TAs are obtained and the transmitted signal on each activated TA is recovered. The subcarriers are sorted in descending order according to the amplitudes of the recovered symbols, and the first F subcarriers are considered to be active. Finally, with the indices of the active antennas and subcarriers and the modulated symbols, the source binary bits can be recovered through inverse mapping.
B. AD Subnet
The AD subnet is a four-layer CNN, whose architecture is shown in Fig. 3. For the convolution layers, we use 3 × 3 convolution kernels and a 2 × 2 max-pooling operation. The only difference between the two convolution layers is the number of filters: there are 64 and 128 filters in the first and second convolution layers, respectively. To accelerate convergence, we apply the Rectified Linear Unit (ReLU) as the activation function in the two convolution layers, defined as

f(x_k) = max(0, x_k), (7)

where x_k is the input signal of the activation on the k-th channel.
The input complex-valued signal is separated into its real and imaginary parts, which are fed into the network, as shown in Fig. 3. The output of the AD subnet is the activation probability of each TA. Therefore, we use the Sigmoid function as the activation function of the output layer and the binary cross-entropy as the loss function, expressed as

Sigmoid(x) = 1 / (1 + e^(−x)), (8)

L(θ1) = −(1/Nt) Σ_{i=1}^{Nt} [ y_i log ŷ_i + (1 − y_i) log(1 − ŷ_i) ], (9)

where θ1 denotes the weights of the AD subnet, and ŷ_i and y_i are the predicted activation probability and the initial on-off label of the i-th TA, respectively. Compared with the traditional algorithms, the AD subnet can predict the activation probability of each TA and then obtain the indices of the active TAs without knowledge of the CSIR, which yields excellent performance under various channel conditions.
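A minimal Keras sketch of the AD subnet, following the description above, is given below. The input shape (Nr, Nf, 2), i.e., real and imaginary channels of the received signal matrix, is our assumption; the paper does not state it explicitly.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_ad_subnet(Nr, Nf, Nt):
    # Two 3x3 conv layers (64 and 128 filters) with ReLU and 2x2 max pooling,
    # then a sigmoid layer emitting one activation probability per TA.
    model = models.Sequential([
        layers.Input(shape=(Nr, Nf, 2)),          # real/imag parts of Y_s
        layers.Conv2D(64, (3, 3), padding="same", activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(128, (3, 3), padding="same", activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(Nt, activation="sigmoid"),   # Eq. (8): per-TA probability
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
                  loss="binary_crossentropy")     # Eq. (9)
    return model
```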
According to the indices of the activated TAs estimated by the AD subnet, we select the corresponding columns in H_i to form a new channel matrix Ĥ_i. Then, the LS detector can be applied to get the initial estimated signal x̂_i, which can be expressed as

x̂_i = (Ĥ_i)† y_i, (10)

where (Ĥ_i)† is the pseudo-inverse of Ĥ_i. The whole process of stage 1, which consists of the AD subnet and the LS detector, is summarized as Algorithm 1, where τ is a predefined threshold that decides whether a TA is active or inactive.
Algorithm 1: Stage 1 of IMNet (DLBMP)
1: Input: Y_s, {H_i}, threshold τ
2: Predict the activation probability of each TA with the AD subnet
3: φ_a ← indices of the TAs whose predicted probability exceeds τ
4: for i = 1 to N_f do
5:   According to φ_a, choose columns of H_i to generate Ĥ_i
6:   Estimate the signal vector x̂_i using Eq. (10)
7: end for
8: X̂_LS = [x̂_1, x̂_2, ⋯, x̂_Nf]
9: return φ_a, X̂_LS
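A compact NumPy sketch of this stage, assuming the AD-subnet probabilities are already available, is:

```python
import numpy as np

def dlbmp_stage1(ad_probs, Y, H, tau=0.5):
    """ad_probs: (Nt,) AD-subnet output; Y: (Nr, Nf); H: (Nf, Nr, Nt)."""
    phi_a = np.flatnonzero(ad_probs > tau)       # TAs whose probability > tau
    Nt, Nf = H.shape[2], Y.shape[1]
    X_ls = np.zeros((Nt, Nf), dtype=complex)
    for i in range(Nf):
        H_hat = H[i][:, phi_a]                   # columns of H_i chosen by phi_a
        X_ls[phi_a, i] = np.linalg.pinv(H_hat) @ Y[:, i]   # Eq. (10)
    return phi_a, X_ls                           # (phi_a, X_LS) of Algorithm 1
```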
C. SD Subnet
The initial estimate of the transmitted signal is obtained through Algorithm 1. Since the CSIR cannot be accurately known in practical wireless communication scenarios, this initial estimate is coarse. In order to refine it, a denoising network is introduced to mitigate the effects of noise. We choose the state-of-the-art denoising network [14] from the image processing field as our SD subnet. The input of the SD subnet is the signal matrix initially estimated through Algorithm 1, and the output is the denoised signal matrix. We adopt the Mean Square Error (MSE) as the loss function for the SD subnet:

L(θ2) = (1/T) Σ_{i=1}^{T} ||X̂_i − X_i||², (11)

where θ2 denotes the weights of the SD subnet, T is the number of training pairs, X̂_i is the output of the SD subnet, and X_i is the transmitted signal without noise.
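The sketch below outlines a DnCNN-style residual denoiser as one plausible realization of the SD subnet; the paper only cites a state-of-the-art image-denoising network [14], so the depth and width chosen here are illustrative.

```python
from tensorflow.keras import layers, models

def build_sd_subnet(Nt, Nf, depth=8):
    x_in = layers.Input(shape=(Nt, Nf, 2))       # real/imag parts of X_LS
    x = layers.Conv2D(64, (3, 3), padding="same", activation="relu")(x_in)
    for _ in range(depth - 2):
        x = layers.Conv2D(64, (3, 3), padding="same", use_bias=False)(x)
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)
    noise = layers.Conv2D(2, (3, 3), padding="same")(x)   # predicted noise map
    x_out = layers.Subtract()([x_in, noise])              # residual denoising
    model = models.Model(x_in, x_out)
    model.compile(optimizer="adam", loss="mse")           # Eq. (11)
    return model
```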
D. Training The IMNet
Let Θ = {θ1, θ2} denote the set of all weights of the IMNet. The whole process of the IMNet can be expressed as

X̂_s = F(Y_s) = F2( LS( F1(Y_s) ) ), (12)

where F, F1, F2, and LS are the functions realized by the IMNet, the AD subnet, the SD subnet, and the LS detector, respectively.
We use a two-stage training algorithm to get the optimal Θ. In the first stage, we train the AD subnet. In the second stage, we freeze the parameters of the AD subnet and train the SD subnet to get the final output. The entire training process is summarized as Algorithm 2.
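In outline, the two-stage procedure can be written as follows, reusing the sketched builders from above; the training arrays and the epoch count are placeholders, while the batch size of 50 and learning rate of 0.001 follow the implementation details in Section IV-A.

```python
# Stage 1: train the AD subnet on (received signal, TA on-off label) pairs.
ad_subnet = build_ad_subnet(Nr, Nf, Nt)
ad_subnet.fit(Y_train, ta_labels, batch_size=50, epochs=100)

# Stage 2: freeze theta_1, generate stage-1 estimates X_LS with the frozen
# AD subnet plus the LS detector (e.g., dlbmp_stage1), then train the SD subnet.
ad_subnet.trainable = False
sd_subnet = build_sd_subnet(Nt, Nf)
sd_subnet.fit(X_ls_train, X_clean, batch_size=50, epochs=100)
```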
IV. PERFORMANCE EVALUATION
In this section, we evaluate the performance of IMNet. Simulations are performed to show the superior performance of the proposed IMNet, compared with existing detection algorithms under various channel conditions. Implementation is first presented, then results are given and discussed.
A. Implementation Details
The two subnets in IMNet are developed in Keras. For the dataset, we obtain the training data through simulations. The two subnets are trained using stochastic gradient descent with the Adam optimizer on an Nvidia GTX 1080 Ti GPU. The learning rate and batch size are set to 0.001 and 50, respectively.
To verify the adaptability of IMNet, both the Rayleigh fading MIMO channel and the correlated MIMO channel are considered in our simulations:
• Rayleigh fading MIMO channel: the channel matrix H obeys a complex Gaussian distribution, i.e., H ∼ CN(0, (1/Nt) I).
• Correlated MIMO channel: the channel matrix H can be expressed as

H = Θ_Rx^(1/2) A_iid Θ_Tx^(1/2),

where A_iid is the independent identically distributed (i.i.d.) Rayleigh fading channel, and Θ_Rx and Θ_Tx are the spatial correlation matrices of the RAs and TAs, respectively. In our simulations, the correlation coefficient ρ is set to 0.5.
To demonstrate the robustness of the IMNet, imperfect CSIR is also considered in our simulations. Considering the ML channel estimation [15], [16], the imperfect CSIR can be expressed as

Ĥ = H + ΔH,

where Ĥ and ΔH denote the imperfect CSIR and the estimation error, respectively. ΔH obeys a zero-mean i.i.d. complex Gaussian distribution with E[|Δh_mn|²] = Nt σ_z² / (Np Ep) [15], where Np and Ep represent the number and the power of the pilot symbols, respectively.
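The channel conditions above can be generated as in the sketch below; the exponential correlation matrix used for Θ_Rx and Θ_Tx is our assumption, since the paper only states the correlation coefficient ρ = 0.5.

```python
import numpy as np
from scipy.linalg import sqrtm

def rayleigh_channel(Nr, Nt):
    # Entries ~ CN(0, 1/Nt)
    return (np.random.randn(Nr, Nt) + 1j * np.random.randn(Nr, Nt)) / np.sqrt(2 * Nt)

def exp_correlation(n, rho=0.5):
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

def correlated_channel(Nr, Nt, rho=0.5):
    A_iid = rayleigh_channel(Nr, Nt)
    return sqrtm(exp_correlation(Nr, rho)) @ A_iid @ sqrtm(exp_correlation(Nt, rho))

def imperfect_csir(H, Nt, sigma_z2, Np, Ep):
    var = Nt * sigma_z2 / (Np * Ep)   # E|dh_mn|^2 for the ML channel estimate
    dH = np.sqrt(var / 2) * (np.random.randn(*H.shape) + 1j * np.random.randn(*H.shape))
    return H + dH                     # H_hat = H + dH
```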
We compare IMNet with two other traditional algorithms: (1) ML [9], [10]: an exhaustive search algorithm that jointly estimates the active antennas, active subcarriers, and modulated symbols.
(2) Matched Filtering with Log-likelihood Ratio (MF-LLR): an algorithm that combines the MF in [17] with the LLR detector in [18]. In MF-LLR, the active TAs, the active subcarriers, and the modulated symbols are estimated sequentially.
1) Comparisons in Rayleigh fading MIMO channel: Fig. 4 illustrates the BER performance of IMNet and the two traditional algorithms (i.e., ML and MF-LLR) under the Rayleigh fading MIMO channel. As expected, the proposed IMNet yields the best performance for both perfect and imperfect CSIR. When the CSIR is perfect, the BER performance of IMNet shows about a 2 dB improvement over the traditional algorithms. When the CSIR varies from perfect to imperfect, the BER performance of ML and MF-LLR declines significantly, while IMNet achieves a BER performance gain of over 3.3 dB compared with the traditional algorithms at all SNR values. This gain comes from two aspects. One is that the AD subnet in IMNet can predict the activation probability of each TA without CSIR, which means the AD subnet is insensitive to channel variations. Therefore, when the CSIR is imperfect, the AD subnet is still able to provide an accurate prediction of the activation probability of each TA, which provides a solid foundation for the next step of signal detection. The other is that the SD subnet can improve the accuracy of the recovered signal, which further improves the BER performance.
2) Comparisons in correlated MIMO channel: To further verify the adaptability and robustness of IMNet, we evaluate the BER performance of IMNet and the two traditional algorithms (i.e., ML and MF-LLR) under the correlated MIMO channel with perfect and imperfect CSIR. It can be seen from Fig. 5 that IMNet always achieves the best performance whether the CSIR is perfect or imperfect. When the CSIR changes from perfect to imperfect, the traditional algorithms show a clear performance degradation of more than 10 dB at 25 dB SNR, while IMNet still maintains its BER at 10⁻⁴. The reason is that the traditional algorithms are much more dependent on the CSIR than the proposed IMNet, whereas IMNet can extract the characteristics of the channel from the received signal and further improve the accuracy of the recovered signal.
3) Performance in different IM-MIMO-OFDM scenarios:
To verify the applicability of IMNet, we evaluate the BER performance of IMNet in three different IM-MIMO-OFDM scenarios under different SNRs, and the results are shown in Fig. 6. The configurations of the three scenarios are detailed in TABLE I, which make the scenarios progressively more complex. We consider only perfect CSIR in each scenario. It can be recognized from Fig. 6 that IMNet performs fairly well in all three scenarios: the BER is smaller than 10⁻³ for all scenarios at 25 dB SNR. As the numbers of activated antennas and subcarriers increase, the proposed IMNet detector still provides a certain BER performance gain. Compared with the Rayleigh fading channel model, the BER performance degrades slightly under the correlated MIMO channel model in all three scenarios.
4) Comparisons in Computational Complexity:
To verify the low complexity of the proposed IMNet, the computational complexities of IMNet and the two traditional algorithms (i.e., ML and MF-LLR) are compared in this part. We choose the three different IM-MIMO-OFDM scenarios shown in Table I, and the results are given in Table II. We transmit 500 IM-MIMO-OFDM frames in each scenario. When the numbers of TAs and subcarriers grow, more antennas and subcarriers can be activated to convey information, but the computational complexity also increases, which is consistent with the results in Table II. We can also see that IMNet always achieves a lower computational complexity than the traditional algorithms in all three scenarios. In the first scenario, where the transmitted information is the easiest to recover, MF-LLR consumes about twice as much time as IMNet, and ML more than seven times as much. As the scenario gets more complex (i.e., scenarios 2 and 3), the time consumption of the two traditional algorithms increases significantly, especially for the ML algorithm. The reason for this phenomenon is that the two traditional algorithms need a large number of iterations to search for the optimum combination of the active TAs, the active subcarriers, and the modulated symbols, while IMNet is a non-iterative detector that can directly predict the active TAs and subcarriers and recover the signal.
V. CONCLUSION
In this paper, we formulate the detection process of IM-MIMO-OFDM systems as a sparse reconstruction problem. Based on the structural sparsity of the transmitted signal and sparse reconstruction theory, a DL based non-iterative detector called IMNet is proposed to realize the detection process while maintaining low complexity. The two most distinctive characteristics of IMNet are that its AD subnet can predict the activation probability of each TA without CSIR and its SD subnet can further mitigate the noise effects. These characteristics enable IMNet to achieve better performance with imperfect CSIR compared with the traditional algorithms. Besides, IMNet is a non-iterative detector, which makes its detection complexity far less than that of the traditional algorithms. Simulation results demonstrate that IMNet outperforms the two traditional algorithms in terms of BER and computational complexity in various scenarios, which verifies the better adaptability and robustness of IMNet. | 2019-11-12T02:01:00.296Z | 2019-11-11T00:00:00.000 | {
"year": 2019,
"sha1": "759f8d166bbe1fd3b47f15e8d39e2f52f47c8049",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1911.04133",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "759f8d166bbe1fd3b47f15e8d39e2f52f47c8049",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Engineering",
"Computer Science",
"Mathematics"
]
} |
256358392 | pes2o/s2orc | v3-fos-license | Magnetic, Optoelectronic, and Rietveld refined structural properties of Al3+ substituted nanocrystalline Ni-Cu spinel ferrites: An experimental and DFT based study
The nanocrystalline Ni0.7Cu0.3AlxFe2-xO4 (x = 0.00:0.02:0.10) is prepared through the sol-gel auto-combustion route. Both XRD and Rietveld analyses confirm the single-phase cubic spinel structure of the investigated materials. The other structural parameters refined by the Rietveld analysis corroborate the single-phase cubic spinel formation of the NPs. Leveraging a vibrating sample magnetometer (VSM), the consequence of Al3+ substitution on the magnetic parameters is studied. The saturation magnetization (MS) and the Bohr magneton are found to decrease with Al3+ substitution. The remanence ratio and coercivity (HC) are observed to be very low, suggesting the materials are soft ferromagnetic. First-principles calculations were carried out using density functional theory (DFT) to demonstrate the optoelectronic behavior of the materials. The electronic bandgap is found to be as low as Eg = 2.99 eV for the explored materials, with defect states observed at 0.62 eV. The optoelectronic properties of Al3+ substituted Ni-Cu ferrite NPs have been characterized through DFT simulation for the first time, demonstrating their potential for optoelectronic device applications. The materials' optical anisotropy is observed along the x-axis, which manifests their tunability through light-matter interaction.
Introduction
Ferrites are classified as magnetic materials with astounding electromagnetic, dielectric, and other functional properties. Notably, the size- and shape-dependent tunable properties of nanocrystalline ferrites (NCFs) make them promising for multi-purpose high-frequency applications. Their potential has been reflected through their tunable dielectric and electromagnetic properties over the last decade, which make them suitable for a range of electronic and biomedical applications such as multi-layer chip inductors, magnetic sensors, high-density magnetic storage devices, isolators, microwave devices, wireless power transfer, hyperthermia, drug delivery, magnetic resonance imaging, gene therapy and delivery, DNA and RNA separation, ferrofluids, and so on [1][2][3][4][5]. As reported in [6], spinel ferrites (metal oxide semiconductors) are proven to be a superior class of magnetic materials possessing high sensitivity, fast response, long-term stability, and low cost. Moreover, nanocrystalline spinel ferrite thin films have attracted attention in recent years for understanding the magneto-optical effects of annealing at comparatively low temperatures [7]. Furthermore, nanocrystalline ferrite materials are lately being used in the production of thinner EM wave absorbers with a broader absorption bandwidth, sustaining a safe and stable environment for both devices and lives by minimizing electromagnetic wave interference damage [8,9].
In [A][B]2[O]4 nano-spinel ferrites, the hetero-structure nature is highly essential for application purposes, as cation distributions over the tetrahedral [A] and octahedral [B] lattice sites, as well as their inter-site exchange interactions, eventually modify their magnetic characteristics [8,10]. Among various NCFs, nickel ferrites are magnetically soft materials, which offer a range of practical applications due to their high magnetic permeability, moderate saturation magnetization, low eddy current loss, high resistivity, low dielectric loss tangent, suitable optical band gap, and high mechanical strength [9,[11][12][13]. During the synthesis of ferrite nanoparticles, some distinct characteristics (i.e., a high surface-to-volume ratio, the small-size effect, and the quantum tunneling effect) are observed due to the bulk-to-nano-scale transition, which leads to the exalted physical, magnetic, and optoelectronic properties of the materials. Consequently, the structural and magnetic properties of ferrites are influenced by a range of factors such as the synthesis method, doping, processing time, sintering temperature, grain size, purity, and sintering resources [14][15][16][17][18][19].
It is noteworthy that selecting a suitable substituent when forming a compatible ferrite sample is vital for fine-tuning the material's physical properties and broadening its applications [20,21]. Here, Al3+ is chosen as the substituent since it induces a phase transition via a reduction in the symmetry of the ferrite crystal as a result of the Jahn-Teller effect, which in turn improves the material's electromagnetic properties [22,23]. Moreover, Al3+ incorporation in ferrites is known to improve the crystallinity while maintaining the homogeneity of magnetic nanoparticles (MNPs) [24][25][26][27][28]. To synthesize MNPs, the sol-gel approach is widely used among other approaches because of its improved control over powder morphology, homogeneity, and elemental composition, providing a narrow particle size distribution at relatively low temperatures. It also provides uniformly nano-sized metal clusters, which is crucial for improving the properties of nanoparticles for high-frequency (HF) device applications [29].
Several attempts have been made to examine the structural, electrical, morphological, magnetic, and dielectric characteristics of Ni-based ferrite NPs. Research in spinel ferrites is still advancing, with various atoms being substituted as dopants in the A- and B-sites to improve their physical, dielectric, and magnetic properties. V. A. Bharati et al. [30] investigated the effect of parallel Al3+ and Cr3+ doping on the structural, morphological, and magnetic properties of Ni ferrite NPs. In [31], K. Bashir et al. studied the electrical and dielectric properties of Cr3+ doped Ni-Cu ferrite NPs, demonstrating the materials' potential for HF applications and photocatalytic activity. Le-Zhong Li et al. [32] studied Al3+ substituted Ni-Zn-Co ferrites and found that the saturation magnetization dropped dramatically and the dc resistivity increased for Al3+ substitution with x > 0.10. The morphological and magneto-optical parameters of Ni ferrite NPs were investigated in [33], where the authors reported a bandgap of Eg = 1.5 eV and a decreasing trend in the variation of the saturation magnetization and Tc with Al3+ content. N.
Jahan et al. [34] assessed the consequences of diamagnetic aluminum (Al3+) substitution on the morphological and magnetic properties of Ni-Zn-Co spinel ferrites fabricated using the conventional ceramic technique. They reported a steady decrease in the lattice constant with increasing Al3+ content and noted a maximum saturation magnetization (Ms) of 93.06 emu/g at x = 0.12. The structural, optical, magnetic, and photocatalytic behavior of Al3+ substituted nickel ferrites was reported in [35], which revealed an excellent photocatalytic activity with the optical bandgap ranging between 1.60 and 1.89 eV. Q. Khan et al. [36] scrutinized the influence of inserting Al3+ on the structural and dielectric behavior of Ni-Cu spinel ferrites, divulging a maximum dielectric loss of 0.4 at 2.5 GHz. Various other groups have explored the impact of Al doping on the characteristics of spinel ferrite nanoparticles: Mn-Ni-Zn ferrite [37], Ni-Co ferrite [38], Mn-Zn ferrite [39], Co-Zn ferrites [40], Ni-Mn-Co ferrite [41], and Ni-Zn ferrite [42,43]. According to the DFT study [44], mixed spinel ferrites exhibited half-metallic properties, whereas pure compositions showed semiconducting behavior. Substitution of transition-atom content in spinel ferrite enhances the lattice parameter linearly, whereas a decrement in magnetization was observed with the weakening of the super-exchange effect in the A and B sites, as examined through the DFT study in [45]. Another study employed first-principles GGA+U energy calculations for NiFe2O4 to examine the sensitivity of the cation distribution to strain modulation [46]. Moreover, tunability of the optical properties by bandgap modulation within spinel ferrites, as studied through DFT calculations, can offer potential for storage and photovoltaics as well as multifunctional materials and device applications [47].
In [52], we reported the structural, dielectric, and electrical transport properties of nanocrystalline Ni0.7Cu0.3AlxFe2-xO4 for Al3+ substitution (x = 0.00 to 0.10, in steps of 0.02). However, the Rietveld-refined structural characteristics and magnetic properties of the synthesized nano spinel ferrites, as well as the DFT-based optoelectronic properties of such a mixed spinel ferrite structure, have not yet been reported. Therefore, this study aims to investigate how Al3+ incorporation affects the structural (Rietveld refinement) and magnetic properties of sol-gel produced Ni-Cu ferrite NPs. The optoelectronic performance is also analyzed using first-principles density functional theory (DFT) simulations for the Ni0.7Cu0.3AlxFe2-xO4 (x = 0.06) spinel ferrite structure.
Materials preparation
The nanocrystalline powder samples of Ni0.7Cu0.3AlxFe2-xO4, with x varying as 0:0.02:0.1, were synthesized via the sol-gel route. Analytical grade nickel (II) nitrate (Ni(NO3)2), copper (II) nitrate (Cu(NO3)2), iron (III) nitrate nonahydrate (Fe(NO3)3.9H2O), and aluminum (III) nitrate nonahydrate (Al(NO3)3.9H2O) were used as the raw materials. These metal nitrates were dissolved in de-ionized water with a few drops of ethanol in a 1:2 molar ratio to obtain the initial solution, keeping its pH value at 7. The dry gel was obtained by vigorously swirling the metal nitrates at 70 ºC in a thermostatic water bath, followed by drying for 5 hours in a 200 ºC electric oven. Following this process, the resultant compositions were burned and ground before sintering at specific temperatures, and the self-ignition process progressively transformed them into a fluffy, loose powder. The yielded powder was annealed at 700 ºC for an additional 5 hours to obtain highly crystalline materials without impurity. The powder was further homogenized by hand-milling in a mortar to assemble disk-shaped samples. Afterward, the nanocrystalline powder was pressed into disk-like forms using a 65 MPa hydraulic press for 2 minutes. The processed samples had a diameter of 12.02 mm and a thickness of 2.3 mm. The structure and magnetic properties were measured using the annealed powder samples.
Characterizations and properties measurements
Characterization of the synthesized spinel ferrite nanoparticles has been performed using a variety of techniques, such as X-ray diffraction (XRD), field emission scanning electron microscopy (FESEM), energy-dispersive X-ray analysis (EDX), transmission electron microscopy (TEM), vibrating sample magnetometry (VSM), and UV-Vis spectroscopy. Detailed analyses of the structure, electrical properties, and morphology using XRD, FESEM, and EDX are presented in our previous study [52]. The structural properties of the prepared ferrite powders (i.e., the lattice parameter (a), the crystallite size (D), the dislocation density, and so on) were investigated employing an X-ray diffractometer with Cu-Kα (λ = 1.5418 Å) radiation. The magnetic properties of the synthesized nanoparticles were measured using a vibrating sample magnetometer (VSM; MicroSense, EV9). The following relationship has been used to investigate the net magnetic moment [53]:

ηB = (M1 × MS) / 5585, (1)

where M1 indicates the molecular weight of the samples and MS denotes the saturation magnetization.
Magnetic coercive force (coercivity) characterizes a material's resistance to demagnetization and follows the relationship

HC = 0.96 K / MS, (2)

which is rearranged as K = HC × MS / 0.96 from Eq. (2) to determine the anisotropy constant.
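A minimal numerical sketch of Eqs. (1) and (2) as written above is given below; the input values are purely illustrative.

```python
def bohr_magneton(M1, Ms):
    return M1 * Ms / 5585.0        # Eq. (1): net moment in Bohr magnetons

def anisotropy_constant(Hc, Ms):
    return Hc * Ms / 0.96          # rearranged Eq. (2): K = Hc*Ms/0.96

# Illustrative inputs: molecular weight ~234 g/mol, Ms = 45 emu/g, Hc = 90 Oe
print(bohr_magneton(234.0, 45.0), anisotropy_constant(90.0, 45.0))
```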
Pseudopotentials from the VASP library were used to describe each atom; these potentials were built on plane-wave basis sets obtained using the projector augmented wave (PAW) approach and were parameterized for each formalism (LSDA and GGA) [62,63]. It is well established that these pseudopotentials are more accurate for magnetic systems than the classical ultra-soft pseudopotentials (USPPs) [60]. Taking into account on-site Coulomb interactions with the Dudarev method (LSDA+U) [63] yielded a band gap that is more precise and in good agreement with the experimental data. The electronic self-consistency force convergence threshold was set to 1×10⁻⁷ eV [64]. The Brillouin zone was integrated using a Γ-centered 4×4×2 k-point mesh generated with the Monkhorst-Pack scheme [65], and the kinetic energy cut-off was chosen as 400 eV. For each orbital of all atoms, partial occupancy was determined by Gaussian smearing with a smearing width of 0.05 eV to integrate the Brillouin zone. The band structure was obtained using Wannier90 [66] interpolation for higher accuracy.
The number of interpolated bands was equal to the number of bands obtained from the plane wave basis code (VASP).
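For reference, the quoted settings map onto a VASP input as sketched below through the ASE interface; this mapping is ours (the authors ran VASP directly), and any parameter not quoted in the text is omitted.

```python
from ase.calculators.vasp import Vasp

calc = Vasp(
    xc="PBE",              # GGA-PBE exchange-correlation functional
    ispin=2,               # spin-polarized calculation
    encut=400,             # plane-wave kinetic-energy cut-off (eV)
    ediff=1e-7,            # electronic self-consistency threshold (eV)
    ismear=0, sigma=0.05,  # Gaussian smearing with 0.05 eV width
    kpts=(4, 4, 2), gamma=True,  # Gamma-centered 4x4x2 Monkhorst-Pack mesh
)
```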
X-ray diffraction
The X-ray diffraction (XRD) patterns of the prepared Al3+ substituted Ni-Cu ferrite nanoparticles are portrayed in Fig. 1, where the vital peaks are reflected from the planes (1 1 1), (2 2 0), (3 1 1), (2 2 2), (4 0 0), (4 2 2), (5 1 1), and (4 4 0). The XRD peaks confirm the single-phase cubic spinel structure of the synthesized magnetic nanomaterials, with no other phases present. The crystallite size of the Ni-Cu ferrite nano samples was found to vary with Al3+ substitution, as estimated from the most intense (3 1 1) diffraction peak utilizing the Scherrer formula. In this study, the variations in the structural parameters observed for 2-10% doping of Al3+ in the Ni-Cu lattice turn out to be without significant alteration. The structural parameters evaluated from XRD are listed in Table 1 in [52].
Lattice Spacing
Bragg's law was employed to calculate the distance between atom centers (lattice spacing) d, which depends upon the direction in the lattice:

2d sinθ = nλ, (3)

where the order of diffraction n is taken as 1, and λ and θ are the X-ray wavelength and Bragg's angle, respectively.

Lattice constants

The Miller indices, i.e., (h k l) = (3 1 1), were used to calculate the lattice constants employing Eq. (4) as follows [67]:

a = d √(h² + k² + l²). (4)

Besides, the theoretical lattice constant (ath) was estimated employing the relation

ath = (8 / (3√3)) [(rA + R0) + √3 (rB + R0)], (5)

where R0 stands for the oxygen ion's radius (1.32 Å), and the radii of the A- and B-site cations, denoted by rA and rB, are evaluated utilizing the formulas [68]:

rA = (u − 1/4) a√3 − R0, (6)
rB = (5/8 − u) a − R0, (7)

where u is the oxygen positional parameter, taken as 3/8 for an ideal FCC crystal. Moreover, the computed lattice parameter for each crystal plane was displayed against the Nelson-Riley function

F(θ) = (1/2) [cos²θ / sinθ + cos²θ / θ], (8)

where a diffracted line is obtained for each sample at the diffraction angle θ.
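As a worked example of Eqs. (3) and (4), the lattice spacing and cubic lattice constant follow from the (3 1 1) peak position; the 2θ value below is illustrative.

```python
import numpy as np

lam = 1.5418                      # Cu-K alpha wavelength (angstrom)
two_theta = 35.6                  # illustrative (3 1 1) peak position (degrees)
theta = np.radians(two_theta / 2)
d = lam / (2 * np.sin(theta))             # Eq. (3) with n = 1
a = d * np.sqrt(3**2 + 1**2 + 1**2)       # Eq. (4) for (h k l) = (3 1 1)
print(f"d = {d:.3f} A, a = {a:.3f} A")    # a ~ 8.36 A ~ 0.836 nm
```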
Crystallite Size
The average crystallite size (D) of the prepared Ni-Cu nanoparticles was estimated following the Debye-Scherrer equation from the most intense peak plane (3 1 1) [69]:

D = kλ / (β cosθ), (9)

where the structural shape factor (Scherrer's constant) k = 0.94 is used for small cubic crystals, β denotes the full width at half maximum (FWHM), λ symbolizes the wavelength of the incident X-rays, and θ is the diffraction (Bragg's) angle.
Dislocation density
Dislocation density is a parameter used to analyze the strength and ductility of the crystal arrangement and is varied by sample annealing. In a crystal structure, the overall dislocation length per unit volume is projected by the number of etch pits per unit area on the etched surface, as determined by [70]:

δ = 1 / D², (10)

where D represents the crystallite size. Dislocation density and particle size follow an inverse relationship for the synthesized nanoparticles, as reported in [52].
Lattice Strain
The unit length deforms when an object is subjected to pressure, reflecting the sample's strain. Due to the formation of defects and flaws in the crystal structure, the atoms' typical lattice orientations exhibit slight changes. Lattice strain quantifies the distribution of lattice constants induced by crystal defects and imperfections, such as interstitial and/or impurity atoms and lattice dislocations [71]. The following relation was used to evaluate the lattice strain for the synthesized spinel ferrites:

ε = β / (4 tanθ), (11)

where β and θ represent the full width at half maximum (FWHM) of the diffraction peak and Bragg's angle, respectively.
Micro-strain
The most prevalent sources of deformation are dislocations, plastic deformation, point defects in the crystal structure, and abnormalities in domain boundaries, which occur in about one part per million (10⁻⁶) of the material [72]. Hence, peak broadening is a crucial aspect of micro-strain, as defined by the following equation:

ε' = (β cosθ) / 4. (12)
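The Scherrer size, dislocation density, and strain parameters can be evaluated in one pass from the (3 1 1) peak, as in the sketch below; the FWHM and 2θ inputs are illustrative.

```python
import numpy as np

lam, k = 1.5418e-10, 0.94         # Cu-K alpha wavelength (m), Scherrer constant
beta = np.radians(0.13)           # FWHM of the (3 1 1) peak (rad), illustrative
theta = np.radians(35.6 / 2)      # Bragg angle, illustrative

D = k * lam / (beta * np.cos(theta))       # Eq. (9): crystallite size (m)
delta = 1.0 / D**2                         # Eq. (10): dislocation density
eps_lat = beta / (4 * np.tan(theta))       # Eq. (11): lattice strain
eps_micro = beta * np.cos(theta) / 4       # Eq. (12): micro-strain
print(f"D = {D * 1e9:.1f} nm")             # ~67 nm, within the reported range
```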
Stacking Faults
During crystal formation, point defects can condense into stacking faults (SF), which are distortions from the normal lattice structure induced by the layer arrangement or faults that arise in the atomic planes of the crystal [73]. The stacking fault was calculated, and the SF values were reported to vary inversely only with the tangent of the diffraction angle [52]; the evaluated values are listed in Table 1 in [52].
The change in lattice constants and the associated cell volume with increasing Al3+ content is depicted in Fig. 2. The lattice parameters show a decreasing trend with Al3+ content, which is justified by the replacement of Fe3+ (0.63 Å) with the smaller Al3+ (0.53 Å), following Vegard's law [74]. The literature reveals similar trends in the variation of lattice parameters [17,36,74]. The downward shift of crystallite size with Al3+ content after x = 0.04, depicted in Fig. 3, can be related to cation redistribution involving the A-site after a certain amount of Al incorporation. As a consequence, it is observed from Fig. 3 that the average grain sizes of the samples increase up to x = 0.04 Al3+ incorporation, after which the values decrease. Strain parameters help to understand the correlation between strain-induced magnetism and the nonmagnetic Al3+ concentration in the ferrite nanoparticles. The variations in both the lattice and micro-strain values are portrayed in Fig. 4 to give a clear picture of the lattice distortion behavior with Al3+ concentration. As depicted in Fig. 4, both parameters vary in the same manner, except at x = 0.10, where a slight splitting appears. The FESEM-extracted surface morphology of the investigated materials exhibited grains that were mostly spherical in shape and distributed uniformly and evenly, owing to the separating grain boundaries, as described in [52]. The FESEM micrographs, on the other hand, revealed multi-grain phenomena composed of grains and grain boundaries, with some agglomerations appearing due to the dipole-dipole interaction within magnetic nanoparticles and the high surface-to-volume ratio. The mean particle size of the samples was found to be in the nano-size range (59-65 nm). The intensity peaks in the energy-dispersive X-ray spectroscopy (EDX) spectra arise from the energy gap between two electronic states generated by the bombardment of the composites by the electron beams of the SEM. As reported in [52], no impurity peaks are observed, reconfirming the materials' single-phase structure after ascertaining the proper compositional proportions of the elements contained in the synthesized nanocrystalline ferrite samples.
Rietveld refinement
The refinement of the XRD-extracted data is carried out through Rietveld refinement analysis using the FullProf software, as illustrated in Fig. 5. In this refining process, a nonlinear least-squares fitting method with a pseudo-Voigt profile was utilized. Throughout the fitting procedure, the instrumental and background parameters were carefully considered to refine the structural parameters, and the process was repeated until a minimal residual (difference between calculated and observed intensities) was reached along with a good value of the fitting parameter χ².
The Rietveld fitting parameters are presented in Table 1, along with the R-factors and χ² values. For each sample, Fig. 5 displays the observed intensity (Yobs), the calculated intensity (Ycal), and the residual obtained from the refinement process. Table 1 also lists the refined crystal parameters (i.e., the lattice parameter, unit cell volume, and average crystallite size).
Hopping Lengths
The hopping length refers to the average distance traveled by an ion from one lattice site to a neighboring one. We used the following relations to determine the hopping distances between A-sites, B-sites, and shared A-B sites, respectively [75]:

LA = (a/4) √3, (17)
LB = (a/4) √2, (18)
LA-B = (a/8) √11. (19)

To study the hopping mechanism and the further cation distribution over the lattice sites, the hopping lengths for both the tetrahedral and octahedral sites are evaluated following Eqs. (17)-(19). The estimated hopping lengths for the A-sites (LA), B-sites (LB), and shared A-B sites (LA-B) are tabulated in Table 2. The variations in both LA and LB with Al3+ substitution follow a similar trend, as shown in Fig. 6.
As the grain size shifts, the distance between magnetic ions changes, which in turn affects the LA and LB values [76]. Fig. 6 clearly reveals that LA > LB, suggesting that the probability of electron hopping between ions in the tetrahedral A and octahedral B sites is lower than that between octahedral B-B sites.
Moreover, the hopping lengths also give insight into the electrical conduction characteristics, since the effectiveness of the scattering process of hopping electrons directly affects the conductivity [77]. The other structural parameters listed in Table 2, namely the bond lengths (dAX and dBX), the tetrahedral edge (dAXE), and the shared (dBXE) and unshared (dBXEu) octahedral edges of the synthesized Ni0.7Cu0.3AlxFe2-xO4 samples, are evaluated using the lattice parameter a and the oxygen positional parameter u = 0.381, utilizing the following equations [78]:

dAX = a√3 (u − 1/4), (20)
dBX = a [3u² − (11/4)u + 43/64]^(1/2), (21)
dAXE = a√2 (2u − 1/2), (22)
dBXE = a√2 (1 − 2u), (23)
dBXEu = a [4u² − 3u + 11/16]^(1/2). (24)

From the TEM micrographs, the uniformity and nano-spherical shape of the particles are evident. When the planes of atoms are in the same direction and linear, high crystallinity is achieved; the average d-spacing of 0.458 nm is exemplified through the HRTEM image in Fig. 7(C). In Fig. 7(D), the selected area electron diffraction (SAED) pattern is shown to match well with the X-ray diffractogram results. The SAED pattern confirms that the spotty diffraction rings correspond to planes such as (1 1 1).
Magnetic Properties
The magnetic properties of ferrites are affected by the composition of metal ions and their distribution in the spinel lattice. The variation in cation distribution over the tetrahedral (A-site) and octahedral (B-site) lattice sites causes the variation in magnetic properties. At room temperature, the magnetic parameters of the Al3+ incorporated Ni-Cu ferrite nanoparticles are determined using the VSM method with a varying applied magnetic field (H). The M-H curves of the synthesized materials are represented in Fig. 9.
From the VSM loops, the magnetic parameters (i.e., Ms, Mr, Hc, K, and so on) are calculated and listed in Table 3. Earlier reports [15,17] suggest that with further Al3+ incorporation in the ferrites, the exchange interaction is modified by the reduction of crystal symmetry in the lattice; therefore, Cu2+ partially migrates from the A- to the B-site, Fe3+ shifts from the B- to the A-site, and Al3+ is alleviated to Al2+, relocating to site A in a manner that can be described as Cu2+ ⟷ Cu3+ + e−, Fe3+ + e− ⟷ Fe2+, and Al3+ + e− ⟷ Al2+. The cation distribution according to Neel's lattice model and the exchange interactions among the lattice sites (JAA, JAB, and JBB), each influenced by oxygen ions, make it possible to comprehend the trend seen in Fig. 9(A) in the variation of the saturation magnetization values and magnetic moment with Al3+ concentration [79].
M-H Hysteresis loop
Furthermore, Al2+ contains one unpaired electron with a magnetic moment of √3 μB, whereas Al3+ has no unpaired valence electrons. Oxygen has two unpaired electrons in its valence state, which compensate to neutralize the charge of the samples. Hence, the change in the lattice constant influences the JAB interaction, altering the magnetic performance due to the cationic redistribution between the A and B sites and the lattice distortion in the crystal symmetry of the ferrites [80,81].
The saturation magnetization (MS = MB − MA) and consequently the experimental magnetic moment (ηexp) are found to decrease with increasing Al3+ content, as depicted in Fig. 9(A), owing to the destabilization of the A-B inter-site interaction. Both parameters are maximal for the pristine sample with no Al3+ content. Similar variations in such magnetic behaviors of ferrite nanoparticles have been reported in the literature [17,28,82]. The values of Mr and HC are listed in Table 3, and the variation of both parameters is presented in Fig. 9(B). As shown in Fig. 9(B), both Mr and HC demonstrate similar behavior with the incorporation of Al3+. The resultant low coercivity values of the examined ferrite nanomaterials classify them as soft magnetic materials. Fig. 10 portrays the changes in Mr/Ms (remanence ratio) and K (magnetic anisotropy constant) for the Ni0.7Cu0.3AlxFe2-xO4 samples with Al3+ substitution. Several factors contribute to the variation of the HC and K values with Al3+ concentration, among which the magnetic domain walls and the corresponding magnetic moments are significant [53]. The K values are found to increase with Al3+ concentration, which is attributed to the inter-site exchange interactions among the magnetic nanoparticles. The remanence ratio of the synthesized ferrites is found to be very low, in the range 0.000-0.094 [Table 3], indicating the presence of magnetic nanoparticles with a multi-domain nature in the samples, as also reported earlier [53,83,84].
UV-Vis Analysis:
The optical absorption (UV-Vis) spectrum study is one of the suitable methods for understanding the optical properties of the materials. Fig. 11 shows the UV-Vis absorption spectra of the prepared Ni-Cu nano spinel ferrites, which exhibit absorption spanning a wide wavelength range of 200-800 nm. It is seen from Fig. 11(A) that the spectra tend to show different slopes at different wavelengths, which can be ascribed to electron transitions between the oxygen ions and the cations. To estimate the direct bandgap energies (Eg), (αhν)² was plotted against the photon energy (E = hν), using the relation Eg (eV) = 1240/λ (nm) [85,86], as represented in Fig. 11(B). It can be seen that the UV-Vis absorption spectra of the Al3+ substituted samples exhibit a slight spline shape, indicating localized electronic levels (arising from structural defects) above the valence band. Additionally, it has been found that in wide-bandgap semiconductors, shallow defect levels near the conduction or valence band do not act as effective recombination sites; the most effective recombination takes place at the deepest levels within the forbidden band [87]. It appears that the defects at the octahedral site involving Al and Fe in the prepared Ni-Cu spinel ferrites could initiate recombination.
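The direct-bandgap extraction described above amounts to a Tauc-plot fit; a sketch is given below, where the absorbance is used as a proxy for α and the arrays are assumed to come from the measured spectrum.

```python
import numpy as np

def tauc_direct_bandgap(wavelength_nm, absorbance, fit_window):
    E = 1240.0 / wavelength_nm          # photon energy E = h*nu (eV)
    y = (absorbance * E) ** 2           # (alpha*h*nu)^2 for a direct transition
    lo, hi = fit_window                 # energy limits of the linear region
    m = (E > lo) & (E < hi)
    slope, intercept = np.polyfit(E[m], y[m], 1)
    return -intercept / slope           # extrapolated x-intercept -> Eg (eV)
```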
However, since the optical properties are associated with the crystal formation, the lattice and micro strain, along with the hopping lengths, effectively play a role in altering these properties.
Density Functional Theory analysis
To mimic the Ni0.7Cu0.3Fe1.94Al0.06O4 sample, i.e., Al (x = 0.06) doping at the B-site, a standard DFT simulation was performed and the electronic and optical properties were investigated. Spin-polarized calculations with the GGA-PBE functional were performed to relax the structure. Table 4 shows the structural parameters from the DFT calculations; the lattice parameter (a), cell volume (V), and inter-atomic bond lengths are well consistent with the experimental data.
Since the material is a magnetic structure, the magnetic nature of Fe, Ni, O, Cu, and Al was introduced, and the equilibrium magnetic moments were found to be 4.257 μB, −1.796 μB, 0.259 μB, and −0.608 μB. As the material is magnetic in nature, spin-polarized calculations were performed, yielding the Spin-Up bands (represented in Fig. 13(a)) and the Spin-Down bands (represented in Fig. 13(b)). The Wannier90-interpolated band structure is presented in Fig. 13. From the figure, it is clearly evident that the doped material has a direct band gap at the Γ-point in both cases. Therefore, there can be a direct transition of electrons from the valence band to the conduction band without phonon interaction. The Spin-Up case has a band gap of 2.99 eV, and the Spin-Down bands have a band gap of 1.66 eV. The nature and value of the DFT-calculated optical band gap are found to be in tune with the experimental data presented in Fig. 11. The direct band gap makes the material a potential candidate for optoelectronic device applications. However, a defect band state is evident in the conduction band region of the Spin-Up band structure; this defect state is located 0.62 eV above the valence band maximum.
To investigate the defect region, we performed Density of States (DOS) and Partial Density of States (PDOS) calculations, presented in Fig. 14. The DOS shows that the material has a defect state near the Fermi region in the conduction band. This arises from the strong hybridization between the Cu-3d and O-2p orbitals; a small contribution comes from the Fe-3d orbital, but no significant contribution of the Al-3(s,p) orbitals can be observed in the defect region. Therefore, it can be concluded that this material has the potential to perform as a photocatalyst in color degradation. Moreover, the above experiments report that the material is magnetic in nature, which can also be observed in the non-symmetric density of states represented in Fig. 14. From the PDOS it can easily be inferred that the magnetism is mostly driven by the Fe-3d orbital electrons, along with the 3d orbitals of Ni and Cu.
Optical Properties
As previously mentioned, the above material has potential for optoelectronic device applications; therefore, its optical profile is crucial. We investigated its optical properties with the GGA-PBE+U formalism. The complex dielectric function plays the critical role here, since the absorption coefficient α, reflectivity R, dielectric energy-loss function L, refractive index n, and optical conductivity are all derived from it. To perceive these optical properties, we first extracted the dielectric function and its real and imaginary parts from the Kramers-Kronig relation ε(ω) = ε₁(ω) + iε₂(ω) [92]. Since light-matter interaction opens tremendous opportunities in magnetic matter, we considered polarization along the three orthogonal axes x, y, z, which reveals the optical anisotropy of the concerned material along the x-direction. The optical anisotropy of the doped material is shown in Fig. 15(a, b), which evidences the anisotropy along the x-axis. In the static limit ω → 0, ε₁^xx(0) = 6.05 while ε₁^yy(0) = ε₁^zz(0) = 5.84, where ε₁^(xx, yy, zz) is the real part of the dielectric function along the (x, y, z)-directions. The imaginary part of the dielectric function vanishes at the static limit in all three cases, i.e., ε₂^xx = ε₂^yy = ε₂^zz = 0. The average static value of ε₁ is therefore 5.91, and the maximum of ε₂ lies at 2.86 eV for polarization along the x-direction and at 2.92 eV for polarization along the y- and z-directions. The absorption spectra presented in Fig. 15(c) show that there is no absorption below the band-gap energy, nor does any band transition appear there. The refractive index is obtained from the dielectric function in a similar fashion. Fig. 15(e, f) shows that the material has a homogeneous refractive index along the y- and z-polarization directions but differs along the x-direction; therefore, the material shows two different indices, 2.37 (along x) and 2.31 (along y, z), and hence exhibits optical birefringence. In the limit ω → 0, the average refractive index of the material is 2.34. The extinction coefficient K, from Fig. 15(f), vanishes below the band gap, shows a small peak near the band-gap region, then rises around the 5 eV region, decreases towards 20 eV with a single peak, and finally diminishes. Maximum attenuation occurs at 5.16 eV (x-direction) and 4.96 eV (y, z-directions).
The reflectivity, extracted from the refractive index and the extinction coefficient, is represented in Fig. 16. In the static limit ω → 0, the reflectivity is 16% for x-direction polarization and about 15% for the other cases. Maximum reflectivity is achieved at 5.14 eV for the x-orientation, amounting to about 32%, while the other two cases reach their maximum reflectivity, of about 31%, at the same energy.
Conclusion
Using the sol-gel auto-combustion method, a series of nanocrystalline Ni0.7Cu0.3AlxFe2-xO4 has been produced with varying Al3+ concentration (0.00 ≤ x ≤ 0.10, in steps of 0.02) and annealed at 700 ºC. X-ray diffraction patterns confirm that all investigated nanoparticles possess the same cubic single-phase structure with no impurities. The Rietveld-refined lattice parameters of the annealed nanoparticles fall in the range 0.833-0.835 nm, showing a steady decrease in the lattice parameter with increasing Al3+ content, except for x = 0.02 and 0.08. The average crystallite size of the investigated nanomaterials has been measured using Scherrer's formula considering the most prominent XRD peak (3 1 1), and the values are found in the range 61-71 nm after refinement. The VSM technique is employed to determine the magnetic parameters. The Ni-Cu ferrite nanoparticles under this study are shown to be soft ferrimagnetic under an applied magnetic field. A decreasing trend is observed in the saturation magnetization and Bohr magneton with Al3+ incorporation; the saturation magnetization is maximal for the sample with no Al3+ content (x = 0.00). Moreover, the remanence ratio (0.00-0.094), coercivity (0.00-93.57 Oe), and magnetic anisotropy constant (0.00-3612.19 erg/Oe) vary with Al3+ content. The high crystallinity and soft magnetic nature of the studied Al3+ substituted Ni-Cu ferrite nanoparticles synthesized via the sol-gel method make them potential candidates for multifunctional device applications. Moreover, the direct bandgap and the optical properties obtained through the DFT study suggest the investigated materials for optoelectronic device applications, especially for their anisotropic light-matter response. | 2023-01-30T06:42:07.560Z | 2023-01-26T00:00:00.000 | {
"year": 2023,
"sha1": "fd783fbde98043baf36d22f805974163757ed053",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "fd783fbde98043baf36d22f805974163757ed053",
"s2fieldsofstudy": [
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
219786849 | pes2o/s2orc | v3-fos-license | DETERMINATION OF STATE VARIABLES IN TEXTILE COMPOSITE WITH MEMBRANE DURING COMPLEX HEAT AND MOISTURE TRANSPORT
The cotton-based composite is equipped with a semipermeable membrane made of polyurethane (PU) (100%), which blocks liquid transport to the surrounding environment. The complex problem analyzed involves the coupled transport of water vapor within the textile material, the transport of liquid water in capillaries, and the transport of heat with vapor and liquid water. The problem can be described using the mass transport equation for water vapor, the heat transport equation, and the mass transport equation for liquid moisture, accompanied by the set of corresponding boundary and initial conditions. The state variables are determined using a complex multistage solution procedure at selected points in each layer. The distributions of the state variables are determined for different configurations of membranes.
Introduction
Composites made of natural fibers are frequently subjected to a complex process of heat and moisture transport. The problem is significant because of (i) the thermal comfort of the user in normal conditions and (ii) the adequacy of working conditions in clothing exposed to a heat flux of prescribed density. The semipermeable membrane made of polyurethane (PU) (100%) ensures the diffusion of water vapor to the surroundings, while simultaneously blocking the transport of liquid moisture outward (the sweat from the skin) and inward (the precipitation). A second membrane between the textile layers helps to secure the prescribed conditions in the cotton-based composite.
The effect of coupled transport on the protective performance of a garment has been investigated previously [1], with the model developed using the balance concept and an enthalpy formulation. The energy approach accompanied by adequate boundary conditions, together with Pennes's approach to heat conduction in human skin, has been presented by Song et al. [2]. The problem can also be analyzed by means of a single state variable, viz., the temperature [3] or the water vapor concentration [4].
The model of coupled heat and water vapor transport and the interaction with human comfort have been investigated by Li [5]. Additional air layers are introduced inside the garment, which are determined by the active elements. A mathematical model that takes into account the water vapor sorption of fibers has been developed [6] to describe and predict the coupled heat and moisture transport in wool fabrics. The two-stage moisture sorption has been analyzed using Fickian diffusion and the David-Nordon model. A mathematical simulation of the perception of thermal and moisture sensations has also been developed [7] using the physical mechanisms of heat and vapor transport, the neurophysiological responses, and the psychoneurophysiological relationships from experiments. On the basis of a mathematical model describing the coupled heat and moisture transfer in wool fabric, Haghi [8] has investigated the moisture sorption mechanisms in fabrics made from fibers with different degrees of hygroscopicity. For weakly hygroscopic fibers, such as polypropylene fiber, the moisture sorption can be described by a single Fickian diffusion with a constant diffusion coefficient. Li et al. [9] have investigated the coupling mechanism of heat transfer and liquid moisture diffusion in porous textiles. An equation describing liquid diffusion is incorporated into the energy conservation equation and the mass conservation equations of water vapor and liquid moisture transfer, which include vapor diffusion, evaporation, and sorption of moisture by fibers. A dynamic model of liquid water transfer, coupled with moisture sorption, condensation, and heat transfer, in porous textiles has been developed by incorporating the physical mechanism of liquid diffusion in porous textiles [10]. The same mathematical model of heat and mass transfer is coupled with the phase change of materials in porous textiles [11]. Water vapor permeability of wet fabrics and the effect of air gaps between the skin and the fabric on the total relative cooling heat flow are analyzed by Hes and Araujo [12].
Puszkarz and Krucińska [13] investigate the comfort of double-layered knitted fabrics with applications addressed to specific users. Textiles with a comparable structure are tested for air permeability, and the results are compared with numerical calculations. Different aspects of textile materials in the microscale are analyzed by Grabowska and Ciesielska-Wróbel [14]. The uncovered head causes significant heat and moisture loss from a newborn's skin; the thickness of a composite textile bonnet that retains optimal skin parameters has also been analyzed [15]. The study [16] presents tests of the heat insulation obtained for a newly developed garment compared with that of commercially available garments for babies, under different climatic conditions. This paper is a continuation of previous investigations concerning the determination of state variables in complex composites [15,[17][18][19][20][21]. The main goal is to determine the distribution of state variables in the cotton-based composite subjected to complex heat, water vapor, and liquid water transport. The structure is equipped with a single membrane made of 100% PU (on the external boundary) and two membranes made of 100% PU (on the external boundary and between the textile layers), which can improve the user's working conditions. The numerical simulation is always cheaper and gives more general results than practical tests, with measurable numerical and economic benefits.
The novel elements are the following. (i) Introduction of two cotton layers characterized by different material characteristics (cotton + Kevlar; acryl + cotton). (ii) Determination of state variables for a cotton-based composite with single and double membranes to improve the thermal comfort and working conditions of the user.
The process analyzed is the transport of moisture within the textile material, combined with the heat transfer. The transport is unidirectional and is reduced to an arbitrary cross section of the textile material determined within the 2D Cartesian coordinate system. The coordinate x = 0 denotes the lower part, whereas x = L represents the upper part of the material (Figure 1). The fibrous material is interleaved by capillaries, which transport the water vapor and liquid moisture (e.g., the sweat) from the skin to the surroundings. These capillaries are directed at a dominant angle β.
The assumptions characterizing the physical model are the following [5,6,9,10]: (i) Water vapor diffuses within the fibers and the interfiber void spaces. Liquid is transported by surface tension to the regions of lesser concentrations. Heat is transported by conduction within fibers and by convection from the outer surfaces to the void spaces as well as within these spaces. (ii) The volume increase caused by mass diffusion and liquid transport can be neglected. (iii) Orientation of the fibers can be significant, although the diameters are small, and water vapor is transported through the interfiber spaces faster than within the fibers. The internal structure is different in woven fabrics, knitted fabrics, and nonwovens, and it should always be analyzed in depth. (iv) Though the combined transport is a disequilibrium process, instantaneous thermodynamic equilibrium can be assumed between the textile material and the fluid in the free spaces and capillaries irrespective of time characteristics, since textile fibers have small diameters and a large surface/volume ratio. (v) The inertial force is neglected considering the low velocities. (vi) The air/vapor mixture is at saturation point in the presence of the liquid phase. (vii) Capillaries are assumed to be of the same diameter and to ensure continuous flow, i.e., the liquid transport is assumed to follow a linear cumulative frequency distribution.
Introducing the balance concept, we can determine the following: (i) the mass transport equation for water vapor - Eq. (1); (ii) the heat transport equation - Eq. (2); (iii) the mass transport equation for liquid moisture - Eq. (3). The mass transfer equation, i.e., Eq. (1), determines the transport of water vapor as follows: (i) within the void spaces between the fibers in time - the first term on the left-hand side; (ii) in the fibers from the skin to the surroundings - the second term; (iii) from the current concentration to the saturation conditions that cause vapor condensation - the third term. The heat transport equation, i.e., Eq. (2), describes the following: (i) the heat transported with liquid water - the first term on the left-hand side; (ii) the heat transported during sorption/desorption of water vapor and liquid moisture in fibers - the second and third terms; (iii) the latent heat of condensation - the fourth term. The mass transport equation for liquid, i.e., Eq. (3), defines the transport of moisture as follows: (i) along the capillaries in time - the first term on the left-hand side; (ii) in the fibers with respect to time - the second term; (iii) the current transport of liquid water to attain the saturation condensation - the third term.
The sum of the volume fractions is represented as follows.
The sorption/desorption of water vapor between the fibers and the interfiber spaces is a complex process [5,6]. The first stage of sorption is described by Fick's law and the constant diffusion coefficient of the material. The sorption/desorption process within some materials possessing strongly hygroscopic properties (cf., wool) is determined by Fick's law during the entire process [5,6,9,10]. Assuming a short duration of time, the problem is described by the first phase of sorption, and the discrete relations take the corresponding form. The equilibrium time, t_eq, is determined experimentally; it depends on the textile material and is in the range of 540-600 s [5,6,8]. For weakly hygroscopic fibers such as polypropylene, the moisture sorption is described by a single Fickian diffusion with a constant diffusion coefficient. The sorption rate during the first stage, R_1, is defined by the water vapor transport in dry cylindrical fibers [9,10] according to Fick's diffusion.
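The first-stage sorption described above amounts to Fickian diffusion into a dry cylindrical fiber with a fixed surface concentration. A minimal numerical sketch of that sub-problem, using an explicit finite-difference scheme; the diffusion coefficient, fiber radius and surface concentration are placeholder values, not the material data of the paper.

```python
import numpy as np

def fick_first_stage(Df=1.0e-12, radius=1.0e-5, w_surface=1.0, t_end=540.0, nr=30):
    """First-stage Fickian sorption into a dry cylindrical fiber, solved with
    an explicit finite-difference scheme for dw/dt = Df*(w'' + w'/r), with a
    prescribed surface concentration. Returns the area-averaged uptake."""
    dr = radius / (nr - 1)
    dt = 0.2 * dr ** 2 / Df                       # explicit stability limit
    r = np.linspace(0.0, radius, nr)
    w = np.zeros(nr)
    w[-1] = w_surface                             # fixed surface concentration
    t = 0.0
    while t < t_end:
        lap = np.zeros(nr)
        lap[1:-1] = (w[2:] - 2 * w[1:-1] + w[:-2]) / dr ** 2 \
                    + (w[2:] - w[:-2]) / (2 * dr * r[1:-1])
        lap[0] = 4.0 * (w[1] - w[0]) / dr ** 2    # symmetry condition on the axis
        w[:-1] += dt * Df * lap[:-1]              # interior update; surface fixed
        t += dt
    return float(2.0 * np.sum(w * r) * dr / radius ** 2)  # mean over cross section

print(f"mean fiber uptake after 540 s: {fick_first_stage():.3f}")
```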
The boundary conditions at the skin (x = 0) determine the concentration of saturated water vapor, the specified temperature, and the constant volume fraction of fibrous material. Let us assume that the external boundary x = L is unprotected against disadvantageous working conditions. It is subjected to water vapor convection from the void spaces, as shown in Eq. (9). Heat is transferred by radiation, convection, and latent heat of evaporation, as in Eq. (10). Liquid moisture is transferred by convection, as shown in Eq. (11).
The external boundary can be alternatively protected by the semipermeable membrane.
Let us next introduce the two-layered textile structure. The internal layer is made of cotton (80%) and Kevlar (20%) of thickness 7 × 10⁻³ m. The external layer is made of acrylic fiber (80%) and cotton (20%) of thickness 8 × 10⁻³ m. The surface mass is equal to 0.500 kg/m² for Kevlar, 0.300 kg/m² for cotton, and 0.350 kg/m² for acrylic fiber. The working time in the test clothing is limited and does not exceed t_eq = 540 s; the diffusion in the fibers is determined by the first stage of sorption.
The material shows unidirectional transport of moisture and heat. The diffusion coefficients within the fibers are assumed according to previous reports [8].
Of course, the global value of the diffusion coefficient within the fibers is determined using any homogenization method, e.g., the simplest "rule of mixture".
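A minimal sketch of such a rule-of-mixture homogenization; the constituent coefficients below are placeholders, not the values taken from [8].

```python
def rule_of_mixture(values, fractions):
    """Homogenize a material property (e.g., a fiber diffusion coefficient)
    as a volume-fraction-weighted average of the constituents."""
    assert abs(sum(fractions) - 1.0) < 1e-9, "fractions must sum to 1"
    return sum(v * f for v, f in zip(values, fractions))

# Hypothetical coefficients for the cotton (80%) + Kevlar (20%) layer;
# the numbers are illustrative placeholders, not data from [8].
D_layer = rule_of_mixture([6.0e-12, 1.5e-12], [0.8, 0.2])
print(D_layer)
```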
The heat transport coefficients in textile fibers take the values given in [8]. The heat of sorption of water vapor in the fibers, λ_v, determines the part of heat transported with moisture during sorption on the external surface of the fibers. The volumetric heat capacity c is the heat required to increase the temperature of a unit volume.
Numerical Solution
The problem is solved numerically using an iterative procedure with the following operational sequence (a minimal computational sketch is given after the list).
1. Water vapor concentration within the fibers, w_f, is determined from Eq. (4), i.e., Fick's diffusion within the fibrous material.
2. The two-factor formula (ε_a w_a) is calculated from the mass transfer equation for water vapor, i.e., Eq. (1).
3. Volume fraction of the liquid phase ε_l can be determined from the mass transport equation for liquid moisture, i.e., Eq. (3).
4. Volume fraction of water vapor ε_a is computed from the sum of the volume fractions, viz., Eq. (4).
5. Water vapor concentration in the air filling the interfiber void space, w_a, is consequently defined by a simple division: (ε_a w_a)/ε_a.
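A minimal computational sketch of the sequence above. The rate functions stand in for the discretized right-hand sides of Eqs. (1)-(3), whose actual forms depend on the material data of the previous section, so all numbers are purely illustrative.

```python
# Placeholder rate terms standing in for the discretized right-hand sides of
# Eqs. (1)-(3); real implementations would use the material data given above.
def fick_sorption_rate(s):
    return 0.01 * (s["w_a"] - s["w_f"])           # vapor uptake by the fibers

def vapor_mass_balance(s):
    return -0.01 * (s["w_a"] - s["w_f"])          # vapor lost to the fibers

def liquid_mass_balance(s):
    return 0.001 * s["w_a"]                       # condensation into liquid

def advance_one_step(state, dt):
    """One pass of the five-step operational sequence described above."""
    state["w_f"] += dt * fick_sorption_rate(state)           # step 1
    state["eps_a_w_a"] += dt * vapor_mass_balance(state)     # step 2
    state["eps_l"] += dt * liquid_mass_balance(state)        # step 3
    state["eps_a"] = 1.0 - state["eps_f"] - state["eps_l"]   # step 4: closure
    state["w_a"] = state["eps_a_w_a"] / state["eps_a"]       # step 5: division
    return state

state = {"w_f": 0.0, "w_a": 0.5, "eps_a_w_a": 0.175,
         "eps_l": 0.05, "eps_f": 0.6, "eps_a": 0.35}
for _ in range(int(400 / 25)):                    # t = 0 ... 400 s, step 25 s
    state = advance_one_step(state, dt=25.0)
print(state)
```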
The textile structure with material characteristics as in Eq. (18) is exposed to coupled transport for 400 s; the problem can be solved numerically from t = 0 to the final value t = 400 s with a time step equal to 25 s.
Let us first analyze the textile structure made of two textile layers without any membrane. The location of the calculation points is shown in Figure 2a. The course of temperature versus time is shown in Figure 4. The temperature at the calculation points situated closer to the skin is higher, and the courses differ depending on the location. The points located closer to the skin are characterized by a rapidly decreasing course at the beginning of the calculation time. In contrast, the temperature at the points located further from the skin decreases gently at the beginning of the test period; next, the temperature reduction is substantial. The maximal difference in temperature between the calculation points is reached at the end of the time range. The water vapor concentrations within the fibers, w_f, are depicted in Figure 6. The membrane on the external surface of the cotton-based composite protects against vapor loss to the surroundings. Therefore, the values of vapor concentrations in the fibers increase continually over time towards the saturation value at the maximal time. The difference in the values at different calculation points is caused by the diverse nature of the materials in the layers. The course of temperature versus time is similar to that shown in Figure 4.
The next composite is equipped with two membranes, the first between the two cotton layers and the second on the external boundary. The structure can be implemented into the personal protective equipment for firemen subjected to thermal radiation from the fire source. The vapor transport to the surroundings is blocked, which results in an increased level of water within the internal layer and a constant skin temperature due to contact with the coolant. The water vapor concentration within the interfiber space, w_a, versus time is shown in Figure 7. The initial values of vapor concentrations increase rapidly for x = 0.0035 m and x = 0.007 m; the courses are similar to the corresponding curves in Figure 3. The values at the other points increase gently, which is caused by the internal membrane between the textile layers. The external membrane yields the maxima of concentrations at the final time point, and these values are comparable within all calculation points.
Conclusions
The combined transport of water vapor, liquid moisture, and heat in a cotton-based composite is a complex phenomenon. The physical model introduces different simplifications, e.g., only the dominant angle of the pore axes, only Fick's diffusion, and so on. However, the model proposed is complicated, although the approximated solution is obtained in a few consecutive steps. The obtained distribution of state variables for a simple structure without membranes is consistent with that published by other authors [9,10].
The light protective clothing for firefighters does not contact the flame and withstands a short exposure time to heat flux of prescribed density. The membrane is the additional barrier protecting against complex heat, moisture, and liquid transport, as well as retaining an adequate temperature on the skin. The phenomenon is combined because heat is transported with liquid water, by sorption/desorption of water vapor and liquid moisture in the fibers, and by the phase change (the latent heat of condensation). The textile layers are made of cotton with Kevlar/acryl, which improves both the moisture transport from the skin and the personal comfort of the user. The single- and double-membrane structures cause an increase in water vapor concentrations within the void spaces w_a over time. Additionally, the courses of the maximal values for samples with a membrane are more convergent than the courses of samples without a membrane.
The solution of the complex transport process is determinable by significant simplifications of the model. The problem is affected by the optimization balance, i.e., the most accurate results versus the most beneficial computational effort. Thus, the analysis can be developed to find the simplest possible solution, for instance, by some additional simplifications of the model. The problem can also be applied to optimize the shape and material characteristics of textile layers within the composite. The finishing procedure and its influence on the material characteristics can be additionally discussed [22].
[17] Dems, K., Korycki, R. (2005). Sensitivity analysis and optimal design for steady conduction problem with radiative heat transfer. Journal of Thermal Stresses, 28, 213-232.
[18] Korycki, R. (2009). | 2020-05-28T09:13:37.343Z | 2020-04-29T00:00:00.000 | {
"year": 2020,
"sha1": "cefd74437aae1103c61cecd571d65a830f57c6d1",
"oa_license": "CCBY",
"oa_url": "https://www.sciendo.com/pdf/10.2478/aut-2020-0011",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "cf179c9d2152feb9ebced4d9f59c42c9f2f9de4b",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
265199423 | pes2o/s2orc | v3-fos-license | Maximizing biodiesel yield of a non-edible chinaberry seed oil via microwave assisted transesterification process using response surface methodology and artificial neural network techniques
In this study, the non-edible Chinaberry Seed Oil (CBO) is converted into biodiesel using microwave-assisted transesterification. The objective of this effort is to maximize the biodiesel yield by optimizing the operating parameters, such as catalyst concentration, methanol-oil ratio, reaction speed, and reaction time. The designed setup provides a controlled and effective approach for turning CBO into biodiesel, resulting in encouraging yields and reduced reaction times. The experimental findings reveal that the optimal parameters for the highest biodiesel yield (95 %) are a catalyst concentration of 1.5 w/w, a methanol-oil ratio of 6:1 v/v, a reaction speed of 400 RPM, and a reaction period of 3 min. The interaction of the several operating parameters on biodiesel yield has been investigated using two methodologies: Response Surface Methodology (RSM) and Artificial Neural Network (ANN). RSM provides better modeling of parameter interaction, while ANN exhibits lower comparative error when predicting biodiesel yield from the reaction parameters. The percentage improvement in prediction of biodiesel yield by ANN is found to be 12 % as compared to RSM. This study emphasizes the merits of both approaches for biodiesel yield optimization. Furthermore, the scaling-up of this microwave-assisted transesterification system for industrial biodiesel production has been proposed, with focus on its economic viability and environmental effects.
Introduction
The world is currently experiencing a historic energy crisis because of the diminishing supply of fossil fuels, over-reliance on them, and their accelerating depletion [1]. As a result, there have been more greenhouse gas emissions, hotter weather, and numerous other environmental problems [2]. Environmental health concerns are greatly impacted by the growing carbon footprint, notably that of the automotive industry and other associated industries [3]. Efforts are being made to diversify the renewable fuels used in automobiles. Numerous scientific studies have shown that these global problems are primarily the result of human activity [4]. Indeed, the rapid expansion of polluting industries, the rapid expansion of the transportation sector, and excessive energy consumption have all contributed significantly to the depletion of natural resources and to environmental degradation through the release of greenhouse gases, particularly CO2, which is the primary cause of climate change and leads to global warming [5]. Therefore, it is crucial to develop substitutes based on sustainable energy sources. One such choice is biodiesel, a sustainable, clean-burning fuel that can be used in diesel engines and is generated from vegetable or animal-based sources. Biodiesel has drawn a lot of interest as a potential substitute for fossil fuels, as it is advantageous to the environment and the economy [6]. Compared to normal petroleum-based diesel fuel, it generates fewer greenhouse gases, is non-toxic, and is biodegradable [7].
The adaptability of biodiesel's feedstocks is one of its main advantages [8]. Both edible and non-edible oils can be used to make biodiesel, making it a desirable option for nations with an abundance of non-edible oilseeds [9]. One such underutilized resource for making biodiesel is the non-edible chinaberry seed. The seeds of the Asiatic and African chinaberry tree are used to make Chinaberry Seed Oil (CBO). Non-edible oils are frequently more challenging to turn into biodiesel because of their higher acid value, higher moisture content, and higher level of impurities [10]. Therefore, it is crucial to maximize the amount of biodiesel that can be produced from CBO.
Vegetable oils can be transformed into biodiesel using the inventive and effective process of microwave-assisted transesterification [11]. The most popular method for manufacturing biodiesel in laboratory and commercial settings with affordable and environmentally responsible catalysts is transesterification [12]. In this method, the reaction mixture is heated using microwaves, accelerating the reaction rate, cutting the reaction time, and increasing the production of biodiesel. For transesterification, microwaves have a number of advantages over conventional methods, including higher yields, lower energy requirements, and less waste [13].
RSM was utilized for the statistical analysis to determine the significance of the supplied data [14]. Response Surface Methodology (RSM) is a statistical technique for enhancing the process parameters of chemical reactions [15]. To create the best possible operating circumstances, RSM creates an appropriate experimental design model [16]. RSM provides a number of benefits, including the ability to simultaneously analyze many components and their interactions, minimize the number of tests required, and model intricate correlations between variables [17]. RSM is a crucial tool for R&D, as it facilitates efficient and effective optimization of products and processes, which reduces costs and produces better results [18]. The yield optimization of biodiesel from a range of feedstocks, including non-edible oils, has been accomplished with the help of RSM.
RSM has proven effective in modeling parameters, but in recent times, machine learning approaches, particularly Artificial Neural Networks (ANNs), have gained popularity for predicting product outcomes based on given parameters [19]. ANNs excel in modeling complex functions with higher accuracy compared to RSM, as RSM is limited to quadratic behavior of the parameters [20]. This advantage positions ANNs as a more competent candidate for modeling and optimizing the given phenomenon.
The process of transesterification with microwave assistance was studied by the researchers [21,22]. They used microwave ovens as the reactor and used cooking oil as the raw material for the transesterification process. According to the study, microwave-assisted transesterification can produce up to 96 % more biodiesel than traditional methods. The process was also shown to be quicker, more energy-efficient, and less wasteful. The team's results have significant ramifications for the creation of more economical and environmentally friendly biodiesel manufacturing systems. A recent study by Luqman et al. [23] was released, employing palm and cotton seed oil as the fuel and microwave-assisted transesterification to produce biodiesel. They found that their approach reduced reaction time and energy usage while producing more biodiesel than conventional approaches. The present study is an effort to convert CBO into biodiesel with optimized yield. The microwave-assisted transesterification process has been used for the conversion of CBO into biodiesel; it is a tangible improvement over the conventional transesterification process in terms of energy consumption. In the setup for microwave-assisted transesterification, a reactor vessel, a condenser, and a stirrer are frequently employed. The reactor vessel's mixture of CBO, methanol, and a catalyst is heated using microwaves. Regular measurements of the acid value and biodiesel output are used to track the reaction's progress. The resulting biodiesel is then cleaned, dried, and refined to remove any impurities. Variations in biodiesel yield are observed owing to the operating parameters of the transesterification process. An interaction among these parameters is developed by RSM, and biodiesel yield predictions are made. Another technique, ANN, has been used in this study to predict biodiesel yield. It has been found that the predicted results of ANN are more precise and very near to the experimental results.
Materials
A microwave oven, a reaction vessel, a mechanical overhead stirrer, and a condenser comprised the microwave-assisted transesterification system. A 250 mL round-bottom flask with a reflux condenser was used as the reaction vessel. CBO was purchased from a local market in Pakistan. The other chemicals, namely the alcohol and the catalyst, were purchased from Sigma Aldrich. The purity of the methanol and KOH was 99.9 % and 85 %, respectively.
Biodiesel production
The free fatty acid (FFA) content is the determining factor in biodiesel production from any feedstock. Before converting CBO into biodiesel, the acid value (AV) should be reduced using mineral acids. CBO's AV was 3.78, which was higher than the average AV. Therefore, the free fatty acid (FFA) content of the raw CBO was reduced by esterifying it with a mineral acid (H2SO4) and CH3OH, which in turn lowered the AV.
The amount of methanol used was the most crucial element in the esterification process; FFA reduction would be more efficient if there was more methanol in the mixture. The other conditions were a reaction speed of 600 RPM, a temperature of 60 °C, and a reaction time of 3 h. Equation (1) was used to calculate the catalyst dosage for transesterification.
Catalyst amount = (Catalyst concentration × Amount of CBO used) / 100
Microwave-assisted transesterification has been used to transform CBO into biodiesel, with KOH as the catalyst in the presence of methanol. Methanol was added to CBO in the presence of KOH at reaction speeds ranging from 100 to 400 RPM for durations of 1-3 min before settling overnight. Glycerin, being heavier, settled in the bottom layer, while the biodiesel collected in the top layer; the latter was separated using a separating funnel. To remove contaminants such as catalyst residues and unused methanol, the trans-esterified biodiesel was repeatedly washed in hot water. The washing required distilled water, and the process was repeated until the used distilled water was transparent. The whole process flow diagram is shown in Fig. 1. A rotary evaporator was used to remove the remaining methanol and water from the biodiesel. Biodiesel yield was calculated using Equation (2) [24]:
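A minimal sketch of the two calculations referenced above. Eq. (1) is reproduced as written; since the body of Eq. (2) is not legible in this copy, the standard mass-ratio yield is assumed here, which is an assumption rather than the paper's stated formula.

```python
def catalyst_amount(catalyst_conc_wt_pct, oil_mass_g):
    """Eq. (1): catalyst dose = catalyst concentration x oil mass / 100."""
    return catalyst_conc_wt_pct * oil_mass_g / 100.0

def biodiesel_yield_pct(biodiesel_mass_g, oil_mass_g):
    """Assumed form of Eq. (2) [24]: the conventional mass-ratio yield.
    This is an assumption; the equation body was lost in extraction."""
    return 100.0 * biodiesel_mass_g / oil_mass_g

# Example: 1.5 w/w catalyst for 100 g of CBO needs 1.5 g of KOH;
# 95 g of biodiesel from 100 g of oil corresponds to a 95 % yield.
print(catalyst_amount(1.5, 100.0), biodiesel_yield_pct(95.0, 100.0))
```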
Biodiesel characteristics analysis
Using a bomb calorimeter, the calorific value of biodiesel was determined. The Cleveland open cup apparatus (Koehler, New York, NY, USA) was used to measure the flashpoint of biodiesel. The GCMS-QP2010 Plus was used to determine the FAME composition, with helium as the carrier gas. The determination of acid value required titration of CBO with a combination of 0.5 N KOH and 50 mL of distilled water. As an indicator, 0.25 g of phenolphthalein and 25 mL of ethyl alcohol were combined. A 50 mL solution (95 % ethyl alcohol and 5 % distilled water) was made, and 1 mL of the indicator was then added to the CBO solution. The AV of the CBO was calculated using Equations (3) and (4) [24], where N is the normality of KOH, V is the volume of KOH and distilled water used for titration, and W is the weight of CBO used.
Method for the biodiesel yield optimization
The catalyst concentration, the methanol to oil ratio, the reaction speed, and the reaction time were the four main operating factors that influence biodiesel yield. For the purpose of optimizing biodiesel yield, experimental conditions were created using JMP Pro 16 software. The operating parameters with their corresponding ranges are shown in Table 1.
The data gathered from the experiments were analyzed and interpreted in JMP Pro 16. Regression analysis, response surface mapping, and analysis of variance (ANOVA) are the three primary analytical stages needed to establish the optimal conditions. An Artificial Neural Network (ANN) was also applied to predict the biodiesel yield. The experimentally optimized biodiesel yield was compared for RSM and ANN to find the merits of the two techniques for yield prediction and optimization. The flow of the different stages is shown in Fig. 2.
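A minimal sketch of the regression-analysis stage, fitting the standard second-order (quadratic plus interactions) RSM model by least squares; the few runs below are hypothetical stand-ins, since the actual 26-run JMP design is not reproduced here.

```python
import itertools
import numpy as np

def quadratic_design_matrix(X):
    """Second-order RSM model matrix: intercept, linear, squared and
    pairwise-interaction columns."""
    n, k = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(k)]
    cols += [X[:, i] ** 2 for i in range(k)]
    cols += [X[:, i] * X[:, j] for i, j in itertools.combinations(range(k), 2)]
    return np.column_stack(cols)

# Hypothetical mini-dataset (catalyst w/w, methanol:oil, RPM, min -> yield %).
X = np.array([[0.2, 9.0, 250, 1], [0.9, 9.0, 100, 2], [1.5, 6.0, 400, 3],
              [0.9, 6.0, 250, 2], [1.5, 9.0, 400, 1], [0.2, 6.0, 100, 3]],
             dtype=float)
y = np.array([72.0, 81.0, 95.0, 80.0, 84.0, 60.0])

A = quadratic_design_matrix(X)
beta, *_ = np.linalg.lstsq(A, y, rcond=None)   # least-squares coefficients
# With fewer runs than coefficients this interpolates the toy data exactly;
# the real 26-run design yields a proper over-determined fit and an ANOVA.
print(np.round(A @ beta, 2))
```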
Table 1
Process parameters for yield optimization.
Characterization of biodiesel
Table 2 describes the physical and chemical characteristics of CBO-sourced biodiesel. These characteristics have been contrasted with typical biodiesel thermophysical characteristics according to ASTM standards. The components of FAME have been identified by GCMS; Table 3 shows the percentage composition of the various long carbon chain constituents.
Biodiesel yield optimization
With the design of experiments, a total of 26 experiments were performed. Fig. 3 shows the biodiesel yield against the different reaction parameters. The size of each marker corresponds to the reaction time; the colors red, gray, and blue represent the reaction speeds; and the markers circle, plus, and diamond signify the methanol to oil ratio. The solid, short-dash, and long-dash lines are trend fits to the data. The plot relates catalyst concentration to biodiesel yield.
From Table 4, the biodiesel yield increases with an increase in catalyst concentration. However, at low catalyst concentration values, the higher yield is obtained at 250 RPM, a 1 min reaction time, and a 9 v/v methanol-oil ratio. As the catalyst concentration is increased, the maximum yield behavior changes: the values for time, reaction speed, and methanol to oil ratio become 2 min, 100 RPM, and 9 v/v, respectively. The yield for the methanol to oil ratio of 6 v/v first decreases and then increases to the maximum value of 95 % for catalyst concentration = 1.5 w/w, reaction time = 3 min, and reaction speed = 400 RPM.
Validation of optimized techniques
As the biodiesel yield mostly depends on these operating parameters, RSM develops an interaction between them in the transesterification process [25]. Therefore, at the optimal operating parameters, the biodiesel yield would be optimal. Consider the following four input reaction variables: catalyst concentration (C), reaction time (A), methanol to oil ratio (B), and reaction speed (D). The yield of CBO biodiesel was obtained for 26 experiments. Fig. 4 shows the relation between the RSM-predicted and actual yields.
The impressive consistency between the experimental and predicted biodiesel output is shown by the linear regression of fit. The obtained biodiesel yields varied from 60 % to 95 %. Additionally, the relative importance of each term in the model was established. Every factor, including those with linear, quadratic, and interaction effects, has a significant impact on the production of biodiesel. As can be seen, catalyst concentration had the biggest linear impact on biodiesel yield among the four factors considered; it has a more significant effect than the other factors, which include reaction time, the methanol-oil ratio, and stirring speed. However, the quadratic components for catalyst concentration, methanol-oil ratio, and stirring speed have a greater effect than their linear equivalents. Most of the literature claims that the methanol-oil ratio and catalyst concentration have the highest effects on biodiesel yield. Table 5 shows the analysis of variance.
Effect of operating parameters
This section describes how the biodiesel yield is affected by the different process parameters. To examine the impact of catalyst concentration, methanol-oil ratio, stirring speed, and reaction time, Fig. 5a shows an experimentally obtained RSM plot. A range of catalyst concentrations, from 0.2 w/w to 1.5 w/w, was used. As seen in Fig. 5a, the production of biodiesel increased as the catalyst concentration climbed from 0.2 w/w to 0.9 w/w. Raising the methanol to oil ratio from 6:1 to 9:1 reduced the yield. A higher methanol to oil ratio means that there is more methanol available to react with the oil, which leads to a higher yield of biodiesel [26]. However, if the methanol to oil ratio is too high, then the excess methanol will not react and will be wasted. The optimum methanol to oil ratio is typically between 6:1 and 9:1. At a constant stirring speed of 400 RPM and a constant 3 min reaction duration, the plot demonstrates the relationship between catalyst concentration, methanol to oil ratio, and percentage yield of biodiesel; at a 6:1 ratio, the highest yield of 95 % was attained. Fig. 5b depicts the connection between catalyst concentration, stirring rate, and yield. A higher catalyst concentration means that there are more catalyst molecules available to speed up the reaction, which leads to a higher yield of biodiesel [27]. However, if the catalyst concentration is too high, then the catalyst can become deactivated, and the reaction will slow down. The optimum catalyst concentration is typically between 1 % and 2 %. The biodiesel yield peaked at stirring speeds between 180 RPM and 250 RPM and decreased at higher stirring speeds. In biodiesel production, the forward reaction is the transesterification reaction, where the oil molecules react with alcohol to form esters [28]. When the reaction speed is increased, the forward reaction is sped up, and the equilibrium point is shifted to the right. This means that more esters are produced, and the biodiesel yield increases. However, if the reaction speed is increased too much, the backward reaction can also be sped up. This can cause the equilibrium point to shift back to the left, and the biodiesel yield can decrease. Fig. 5c depicts the behavior of % yield with respect to catalyst concentration and reaction time at a constant methanol to oil ratio of 6:1 and a constant stirring speed of 400 RPM. A small increase in yield was observed when the reaction time was increased from 1 min to 3 min. The biodiesel yield was enhanced by increasing the catalyst concentration to 0.85 w/w. The trends are consistent with research results that have already been documented in the literature. Fig. 6a displays response surface plots as a function of methanol to oil ratio, stirring speed, and percentage yield of biodiesel at a fixed catalyst concentration and reaction duration of 1.5 w/w and 3 min, respectively.
The percentage yield increased from 70 % to 90 % upon swiftly increasing the stirring speed from 250 RPM to 400 RPM at a methanol to oil ratio of 6:1. The maximum yield of biodiesel was obtained with a methanol to oil ratio of 9:1 to 10:1, as shown in Fig. 6b. Reaction time has an impact on yield as well. Fig. 7 shows the surface response as a function of stirring speed, reaction time, and % yield of biodiesel at a constant catalyst concentration of 1.5 w/w and a methanol to oil ratio of 6:1. The graph demonstrates a consistent rise in yield as rotating speed and reaction time are increased, as discussed earlier. The backward reaction is the hydrolysis reaction, where the esters react with water to form the original oil molecules. When the reaction time and reaction speed increase, the concentration of water molecules increases, which speeds up the backward reaction [29].
The equilibrium point is the point at which the forward and backward reactions occur at the same rate. When the reaction time and reaction speed increase, the equilibrium point is shifted to the left, meaning that more of the original oil molecules are produced and less biodiesel is produced. When the reaction time and reaction speed increase, the esters can be converted to soap. Soap is a byproduct of the transesterification reaction, and it can reduce the yield of biodiesel [30].
Artificial neural network (ANN)
Artificial Neural Network (ANN) is a function modeling technique. With enough data and the right parameter selection, any function may be modeled with this technique [31]. In an ANN, artificial neurons form layers based on the number of variables [32]. The important parameter of a neuron is its activation function; a number of activation functions are available [33], including the linear activation function, the Gaussian function, and the tangent hyperbolic function. More functions exist, but only these three were analyzed for ANN modeling of the yield. Several variations were tested. The dataset was split into a training set and a validation set: seventeen data points were used for training and six for validation. The transform covariate function was used to create higher-dimensional data. The best results were obtained with a two-layer feedforward network, with the first layer consisting of the tangent hyperbolic activation function and the second layer of a simple linear function. The suitability of the tangent hyperbolic function is consistent with the quadratic nature of the parameters predicted by the RSM. Fig. 8 illustrates the ANN model used for the prediction of the biodiesel yield.
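A minimal sketch of the architecture described above, using scikit-learn's MLPRegressor (whose regression output layer is linear, matching the described second layer); the data are synthetic stand-ins generated here, since the experimental runs are not reproduced, and only the 17/6 train/validation split mirrors the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in data: 23 rows of (catalyst w/w, methanol:oil, RPM,
# minutes) -> yield %; purely illustrative, not the study's measurements.
rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(0.2, 1.5, 23), rng.uniform(6, 9, 23),
                     rng.uniform(100, 400, 23), rng.uniform(1, 3, 23)])
y = 60 + 20 * X[:, 0] + 0.02 * X[:, 2] - 2 * (X[:, 1] - 7) ** 2 \
    + rng.normal(0, 1, 23)

# Two-layer feedforward net: one tanh hidden layer, then the linear output
# MLPRegressor uses for regression.
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,), activation="tanh",
                                   max_iter=5000, random_state=0))
model.fit(X[:17], y[:17])                               # 17 training points
print("validation R^2:", model.score(X[17:], y[17:]))   # 6 validation points
```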
Table 6 shows the R² and standard deviation values for the training and test sets. The R² value is close to one for training and is 0.944 for validation, confirming a well-defined model based on the training and validation datasets.
Comparison of RSM and ANN models
The RSM and ANN techniques were applied for biodiesel yield estimation. It is found that RSM is better at describing the interactions of the different parameters with yield, and RSM can also predict the yield. The interaction relations given by RSM resemble the original behavior found in the experiments. However, upon comparing RSM and ANN, the predictions of ANN are closer to the actual results. ANN is very sensitive to data, as a supervised learning method of training and validation has been used; outliers in the data have to be removed for ANN to work properly. Secondly, if the model is overtrained, it may predict yield values greater than 100 %, which is a symptom of overfitting. Therefore, for the application of ANN, a subject matter expert is required. With a well-trained model, the yield can be predicted with higher accuracy as compared to RSM. Therefore, both are used, as their merits are complementary: RSM better describes the interactions of individual parameters, and ANN works better for overall yield prediction.
Conclusion
The yield of biodiesel was optimized at the best operating circumstances. CBO FFAs were decreased by acid treatment; H2SO4 was shown to be the most efficient mineral acid, and the FFA value decreased by 90.4 %. It was discovered that using methanol to transesterify CBO was particularly successful. With a catalyst concentration of 1.5 %, a methanol to oil ratio of 6:1, a stirring speed of 400 RPM, and a reaction period of 3 min, a 95 % biodiesel yield was achieved. A comparison of the two statistical methods was made: RSM predicts the interactions, whereas ANN predicts the yield in the vicinity of the experimental results. Thus, the merits of RSM and ANN for biodiesel production are highlighted, and the production parameters may be optimized using these techniques.
Fig. 5. Experimentally obtained RSM plots to investigate the effect of (a) methanol to oil ratio, (b) stirring speed, and (c) reaction time at a constant catalyst concentration.
Fig. 9a, b, and c reveal the comparison between the actual yield and the predicted yield for the training and validation stages of the RSM and ANN models. All the points in the training section lie on the direct relation line of slope 1, making a good training prediction. Optimization was applied to the ANN to predict the maximum yield value of 95 % with parameter values of 1.5 w/w, 400 RPM, 6 v/v, and 3 min for catalyst concentration, reaction speed, methanol:oil ratio, and time, respectively. Moreover, this is the actual value from the experiment, as compared to the 88 % predicted by RSM.
Fig. 6. Experimentally obtained RSM plots to investigate the effect of (a) stirring speed and (b) reaction time, at a constant range of methanol to oil ratio.
Fig. 7. Experimentally obtained RSM plot to investigate the effect of reaction time at a constant range of stirring speed.
Fig. 9. Biodiesel yield predicted validation models: (a) RSM model, (b) ANN training and test, (c) combined RSM and ANN models.
Table 2
Physical and chemical characteristics of biodiesel.
Table 3
GCMS analysis of biodiesel sourced from the CBO.
Table 4
Interaction among the operating parameters and actual and predicted yield from experimental and RSM.
Table 5
Analysis of variance.
Table 6
Statistical analysis. | 2023-11-15T17:28:24.791Z | 2023-11-01T00:00:00.000 | {
"year": 2023,
"sha1": "e3a63afe650a4c02f0ce701be6b9dab65495d1c6",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.heliyon.2023.e22031",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3c90080ba4e446714df28b2ad7f4580d781cd633",
"s2fieldsofstudy": [
"Chemistry",
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": []
} |
237629183 | pes2o/s2orc | v3-fos-license | Smart carnivores think twice: Red fox delays scavenging on conspecific carcasses to reduce parasite risk
The recent SARS-CoV-2 epidemic has highlighted the need to prevent emerging and re-emerging diseases, which means that we must approach the study of diseases from a One Health perspective. The study of pathogen transmission in wildlife is challenging, but it is unquestionably key to understand how epidemiological interactions occur at the wildlife-domestic-human interface. In this context, studying parasite avoidance behaviours may provide essential insights into parasite transmission, host-parasite coevolution, and energy flow through food-webs. However, the strategies of avoiding trophically transmitted parasites in mammalian carnivores have received little scientific attention. Here, we explore the behaviour of red foxes (Vulpes vulpes) and other mammalian carnivores at conspecific and heterospecific carnivore carcasses using videos recorded by camera traps. We aim to determine 1) the factors influencing the probability of foxes to practice cannibalism, and 2) whether the scavenging behaviour of foxes differs when facing conspecific vs. heterospecific carcasses. We found that red foxes were generally reluctant to consume mesocarnivore carrion, especially of conspecifics. When recorded, consumption by foxes was delayed several days (heterospecific carcasses) or weeks (conspecific carcasses) after carcass detection. Other mammalian scavengers showed a similar pattern. Also, meat-borne parasite transmission from wild carnivore carcasses to domestic dogs and cats was highly unlikely. Our findings challenge the widespread assumption that cannibalistic or intra-specific scavenging is a major transmission route for Trichinella spp. and other meat-borne parasites, especially for the red fox. Overall, our results suggest that the feeding decisions of scavengers are probably shaped by two main contrasting forces, namely the nutritional reward provided by carrion of phylogenetically similar species and the risk of acquiring meat-borne parasites shared with these species. This study illustrates how the detailed monitoring of carnivore behaviour is essential to assess the epidemiological role of these hosts in the maintenance and dispersion of parasites of public and animal health relevance.
Host species exhibit a wide array of strategies to avoid, remove and control parasites (i.e., macro- and microparasites, the latter including protists, fungi, bacteria and viruses; Behringer et al., 2018), including immunological and behavioural responses (Blumstein et al., 2017). Among them, behaviour may be regarded as the animals' first line of defence against infection (Hart, 1990, 2011). Given that detecting parasites is challenging, usually due to their small size, there has been selection for animals to respond to indirect signs associated with the risk of parasite transmission, regardless of actual parasite presence (Curtis, 2014; Moleón et al., 2017; Weinstein et al., 2018). In response to trophically transmitted parasites, infection risk can therefore be minimized by avoiding risky foods or feeding sites, i.e., parasite-rich environments (Curtis, 2014; Hart and Hart, 2018; Weinstein et al., 2018). For instance, herbivores usually avoid grazing close to faeces (Ezenwa, 2004). At a landscape scale, animals are thus forced to modify their use of space and time to reduce exposure to parasites. Hosts may perceive parasite infection risk on a "landscape of disgust", with high-risk patches that are avoided and low-risk patches that are safe (Weinstein et al., 2018), whose distribution and magnitude may change with time (Fritzsche and Allan, 2012). In turn, parasite avoidance behaviours may alter energy flow through food-webs (Wood and Johnson, 2015).
Despite the important ecological, evolutionary and epidemiological implications of host behaviour (Ezenwa et al., 2016; Sarabian et al., 2018; Weinstein et al., 2018), little is known about the strategies, mechanisms and consequences of trophically transmitted parasite avoidance in carnivore species. In general, carnivores seem to avoid feeding upon conspecific prey (Caro and Stoner, 2003; Fox, 1975; Palomares and Caro, 1999), especially if prey is found dead rather than killed by the consumer, as dead animals may have succumbed to a disease (Hart, 2011; Moleón et al., 2017). Thus, carrion may play a prominent role in the carnivores' landscape of disgust (Moleón and Sánchez-Zapata, 2021). Given that phylogenetically related carnivores harbour similar parasite assemblages (Huang et al., 2014), the carnivore is more prone to be infected by parasites present in the carcass if both the consumer and the carcass belong to the same species or to a phylogenetically related group of species (Hart, 2011; Moleón et al., 2017). In this case, scavengers must face a trade-off between the changing nutritive value of the carcass, which is maximum for conspecific flesh (as it supplies nutrients in proportions that are easier to assimilate than heterospecific tissues; Mayntz and Toft, 2006; Meffe and Crump, 1987), and its associated parasite risk (Pfennig, 2000; Pfennig et al., 1998; Rudolf and Antonovics, 2007). Both the nutritive value and the parasite risk decrease with time (Parmenter and MacMahon, 2009; Rossi et al., 2019), but probably at different rates, which could lead carnivores to also change their foraging decisions over time. However, whether and when a scavenger decides to feed on a risky carcass while obtaining sufficient nutritional revenue are largely unresolved questions in scavenging and disease ecology.
For instance, it is widely accepted within the scientific community that scavenging, including intraspecific consumption (i.e., cannibalism), plays an important role in the transmission of meat-borne parasites in wild carnivores, especially Trichinella spp. (phylum Nematoda), one of the most relevant zoonoses occurring at the wildlife-domestic-human interface (Badagliacca et al., 2016; Campbell, 1988; Pozio, 2000; Pozio and Murrell, 2006). This nematode and other species such as the zoonotic protozoan Toxoplasma gondii (phylum Apicomplexa) are among the paradigmatic parasites that are transmitted by meat consumption. These multi-host parasites are globally distributed (Dubey, 1991; Pozio and Murrell, 2006) and have been described in numerous mammalian carnivores, including the red fox (Vulpes vulpes) and several mustelids and viverrids (Kirjušina et al., 2016; Lukášová et al., 2018; Oivanen et al., 2002a; Pérez-Martín et al., 2000; Sobrino et al., 2007). Intra-specific and intra-family consumption of somatic larvae in muscle could also potentially be a possible transmission route for more specific parasites, such as Toxocara canis in red fox and other canids (Saeed and Kapel, 2006). However, recent empirical (Muñoz-Lozano et al., 2019; Olson et al., 2016; Selva et al., 2005) and modelling findings have shown that mammalian carnivores tend to avoid feeding on carrion of other carnivores, especially of conspecifics, possibly as a strategy to reduce the risk of acquiring parasites. Thus, further research on carnivore scavenging behaviour in relation to carcass identity is needed to adequately interpret, based on scientific evidence, the epidemiological factors that characterize the transmission of meat-borne parasites in the wild (Polley and Thompson, 2015; Moleón and Sánchez-Zapata, 2021). This is particularly important in the current context of emerging and re-emerging diseases of global distribution, among which there are many zoonoses that should be studied from an integrated One Health perspective (Bueno-Marí et al., 2015; Evans et al., 2020; Wong et al., 2020).
The general objective of this study is to explore meat-borne parasite avoidance strategies of carnivores, especially the red fox, at carnivore carcasses. The red fox, a ubiquitous and typically generalist carnivore (Wilson and Mittermeier, 2009), is one of the most important reservoirs involved in the sylvatic cycle of many parasites with potential zoonotic and veterinary significance (Karamon et al., 2018). Moreover, foxes are major scavengers (Mateo-Tomás et al., 2015). All of these features make the red fox a good candidate for detailed research on trophic behaviour in relation to the risk of parasite transmission (Díaz-Ruiz et al., 2013; Vercammen et al., 2002).
Specifically, we aim to answer the following main questions: 1) does the probability of foxes to practice cannibalism change with time since the conspecific carcass is available, and on which factors does this depend?; and 2) does the scavenging behaviour of foxes differ between conspecific carcasses and carcasses of other mesocarnivore species? For this purpose, we assessed the consumptive patterns of mammalian carnivore carcasses over time, including the final stages of carcass depletion, in areas with different scavenging communities and degree of anthropization. The latter will allow to control to which extent the propensity to cannibalism is influenced by environmental factors. Our general hypothesis is that the perceived risk of acquiring trophically transmitted parasites through scavenging behaviour is dependent on carcass type (conspecific vs. heterospecific to the consumer), and that carnivores will show behavioural responses to reduce exposure to parasites, including consumption avoidance and delay (Moleón and Sánchez-Zapata, 2021). Based on the results of this and previous studies on scavenging patterns of herbivore carcasses in the same study areas (see "Study areas and scavenging context"), we elaborate a conceptual model that synthesizes how the main forces that carnivores face at carrion resources, namely their nutritional value and the risk of acquiring meat-borne parasites, change over time.
Study areas and scavenging context
Fieldwork was conducted in three mountainous, Mediterranean areas of southeastern Spain: Sierras de Cazorla, Segura y Las Villas Natural Park, Sierra Espuña Regional Park, and periurban areas of Murcia city (hereafter Cazorla, Espuña and Murcia, respectively). For more information on the orography, climate and environmental characteristics of these areas, see Gonzálvez et al. (2021). In Cazorla, there is a rich representation of both obligate (i.e., vultures) and facultative vertebrate scavengers. Espuña holds a similar scavenging community, though vultures are less abundant. In Murcia, vultures are rare, and the presence of domestic carnivores (dogs Canis lupus familiaris and cats Felis silvestris catus) is more frequent than in the other study areas. The red fox is the commonest wild mammalian carnivore in the three study areas, and it is more abundant in Espuña than in Cazorla (there are no data for Murcia; see Moleón et al. (2017), Morales-Reyes et al. (2017) for more details on the study areas of Cazorla and Espuña).
The highly efficient consumption patterns of herbivore carcasses by the scavenging communities of Cazorla and Espuña have been well-documented (e.g., Arrondo et al., 2019; Moleón et al., 2017; Morales-Reyes et al., 2017). On average, wild ungulate carcass detection time by scavengers is less than one day in Cazorla and less than three days in Espuña, while carcasses are totally consumed in three days in Cazorla and eight days in Espuña, mainly by vultures (especially in Cazorla), foxes, wild boars and dogs (Moleón et al., 2017; Morales-Reyes et al., 2017). In Cazorla, livestock carcasses in open areas are consumed even more quickly, normally within one day. These figures are within the general patterns found worldwide for herbivore carcasses (Sebastián-González et al., 2020). In contrast, mesocarnivore carcasses are rarely scavenged and may last for months (Muñoz-Lozano et al., 2019), though detailed data on scavenger foraging behaviour at these carcasses, and how this may change over time, are lacking.
Data collection
We deployed 66 carcasses of red fox ("fox carcasses") and other mesocarnivore species ("other carcasses") from November 2016 to March 2018 in Cazorla (n = 27 foxes), Murcia (n = 19 foxes) and Espuña (n = 10 foxes, 4 stone martens Martes foina, 3 Eurasian badgers Meles meles, 2 common genets Genetta genetta, and 1 wild cat Felis silvestris silvestris). Carcasses of other mesocarnivores are much more difficult to obtain than fox carcasses, given that these species are scarcer than foxes; also, they are protected, so their hunting is prohibited. Thus, we focused the searching effort for other carcasses around the best-known area, namely Espuña (e.g., see Moleón et al., 2017 and references therein). All carcasses came from animals that were run over and, in the case of some foxes, shot in approved hunts. Before deployment in the study areas, carcasses were carefully eviscerated and examined in order to rule out the presence of macroscopic alterations indicating infection; in addition, all specimens were subject to diagnostic procedures to ensure that they were free from Trichinella spp. (artificial digestion of muscles from base of tongue, forearms and diaphragm; Gamble et al., 2000; Kapel et al., 1994), Sarcoptes scabiei (skin scraping) and the most common viral diseases affecting wild and domestic carnivores (assays for antibody detection of canine distemper virus, feline coronavirus, canine and feline parvovirus, feline leukemia virus and feline immunodeficiency virus). In this study, only pathogen-free carcasses were used, and the tissue around the shot point was removed to avoid lead residues (see Gonzálvez et al., 2021).
Carcasses were frozen in plastic bags (−20 °C) and defrosted at laboratory temperature for 12-24 h before being placed in the field. Carcasses were regularly distributed throughout the study areas, with a minimum distance between neighboring carcasses of 1.5 km (Gonzálvez et al., 2021). The altitude of carcass sites ranged 772-1676 m in Cazorla, 433-1432 m in Espuña and 125-448 m in Murcia. Each site was classified as "closed area" or "open area", depending on whether tree and shrub cover in a 10 m radius around the carcass exceeded 50% of the surface area or not, respectively (Gonzálvez et al., 2021).
To obtain information about the presence of scavengers and their trophic behaviour at carcass sites, we fixed automatic cameras (Bushnell Trophy Cam and Bushnell Aggressor) to a tree or shrub trunk (50-100 cm height) at 3-4 m from the carcasses. Cameras were programmed to take a 15-second video after detection of movement (one-minute interval between consecutive videos). Batteries and memory cards were checked weekly, and cameras were removed when no carrion was left or after 10 weeks. We focused on vertebrate species that have been found to scavenge in our study areas (Sebastián-González et al., 2019). These species were grouped in three categories: red fox, other mammals and birds. For each carcass, we defined independent events as: a) consecutive videos of unequivocally different individuals of the same species or individuals of different species; b) if individual identification was not possible, consecutive videos of individuals of the same species taken more than 30 min apart; or c) non-consecutive videos of individuals of the same species (O'Brien et al., 2003; Ridout and Linkie, 2009; Gonzálvez et al., 2021). We then made a distinction between "consumption events", when we observed unequivocal carrion biting and feeding behaviour, and "non-consumption events" otherwise.
Data analyses: weekly scavenging patterns
First, we explored the general patterns of mesocarnivore carcass use by the studied scavenging communities. For each carcass type (fox and others) and study area, we used the images provided by the cameras to calculate, on a weekly basis, the proportion of carcasses that were consumed (i.e., with at least one consumption event) and visited but not consumed (i.e., no consumption events recorded), for all scavengers together and separately for each scavenger category. We did the same for the number of consumption and non-consumption events.
We then explored the changing probability of red foxes scavenging fox and other mesocarnivore carcasses by calculating these ratios per week: a) consumed:non-consumed carcasses and b) consumption:non-consumption events. In addition, we determined the accumulated number of carcasses that were a) detected and b) consumed (i.e., at least one consumption event) each week by red foxes. For each carcass, we estimated carcass "detection time" as the time elapsed between carcass placement and the arrival of the first fox.
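A minimal sketch of how such weekly ratios could be computed from a camera-trap event log; the toy records below are hypothetical, not the study's data, and the zero-denominator guard is a sketch-level simplification.

```python
import pandas as pd

# Hypothetical event log: one row per camera-trap event at a carcass.
events = pd.DataFrame({
    "carcass_id": [1, 1, 1, 2, 2, 3],
    "week":       [1, 1, 2, 1, 3, 2],
    "consumed":   [False, True, True, False, False, True],  # consumption event?
})

# a) weekly ratio of consumed to non-consumed carcasses
weekly = events.groupby(["week", "carcass_id"])["consumed"].any().reset_index()
carcass_ratio = weekly.groupby("week")["consumed"].apply(
    lambda s: s.sum() / max((~s).sum(), 1))

# b) weekly ratio of consumption to non-consumption events
event_ratio = events.groupby("week")["consumed"].apply(
    lambda s: s.sum() / max((~s).sum(), 1))
print(carcass_ratio, event_ratio, sep="\n")
```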
Data analyses: determinants of carrion consumption by fox
We used generalized linear models (GLMs) to analyse the factors influencing "time of first consumption" (only carcasses with at least one consumption event by foxes were used; n = 27) and the "ratio consumption:non-consumption events" (all carcasses detected by foxes; n = 62). For each response variable, we carried out two separate analyses, according to these two different datasets: 1) all fox carcasses in the three study areas; and 2) both fox and other carcasses in Espuña only. The first analysis mainly explores the cannibalistic behaviour of foxes, while the second aims to determine whether fox scavenging behaviour is influenced by carcass type (see the particular goals of this study in the Introduction). Time of first consumption was estimated as the time elapsed from carcass detection by foxes until the first consumption event by foxes. The carcass was the sample unit for these analyses. The explanatory variables were study "area" (Cazorla, Espuña, Murcia; used only for the analysis of fox carcasses in the three study areas), "carcass type" (fox, other; used only for the analysis of fox and other carcasses in Espuña), "habitat" (closed, open), "year", "season" (winter: November-February; spring: March-April), "hour" of carcass placement (morning: from dawn to 12:00 h; afternoon: from 12:00 h to dusk), and carcass "detection time" by foxes (in days). Habitat, season and hour may influence scavenger foraging patterns and interspecific interactions among scavengers (e.g., Arrondo et al., 2019). For the ratio consumption:non-consumption events, we also included "scavenger presence" (presence of scavengers other than foxes) and "scavenger consumption" (at least one consumption event by a scavenger other than fox).
We then proceeded with model construction, using Gaussian error distributions and identity link functions for time of first consumption, and binomial error distributions and logit link functions for the ratio consumption:non-consumption events; in the latter case, we used the function cbind() in R to combine the vectors "consumption events" and "non-consumption events" into a single response variable, which avoided losing the information on the number of events, i.e., the sample size from which the ratio is estimated (Crawley, 2007). We ran univariate models with all the possible explanatory variables for each case. We did not run multivariate models due to limitations imposed by the low sample size (i.e., number of monitored carcasses). We based model selection on Akaike's Information Criterion (AIC), which allows the identification of the most parsimonious model (lowest AIC) and ranks the remaining models. We corrected the AIC value for small sample sizes (AICc). Then, we calculated delta AICc (ΔAICc) as the difference in AICc between each model and the best model in the evaluated set, considering models with ΔAICc < 2 to have similar support (Burnham and Anderson, 2002). Finally, we calculated the deviance (D²) explained by each candidate model according to this formula: D² = (null deviance − residual deviance) / null deviance × 100 (Burnham and Anderson, 2002). Analyses were done in RStudio software v1.0.143 (R Core Team, 2018).
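The following R sketch illustrates this workflow for one explanatory variable per model; the data frame "carc" and its column names are illustrative assumptions, and AICc() is taken from the MuMIn package as one possible implementation.

```r
# Univariate GLMs, AICc-based selection and explained deviance (D2).
# Data frame "carc" and its columns are hypothetical placeholders.
library(MuMIn)  # provides AICc()

# Gaussian GLM (identity link) for time of first consumption
m_time <- glm(first_consumption_days ~ detection_time,
              family = gaussian(link = "identity"), data = carc)

# Binomial GLM (logit link); cbind() combines consumption and
# non-consumption event counts, preserving the number of events
m_ratio <- glm(cbind(cons_events, non_cons_events) ~ carcass_type,
               family = binomial(link = "logit"), data = carc)

AICc(m_time)   # AIC corrected for small sample size
AICc(m_ratio)

# Explained deviance: D2 = (null deviance - residual deviance) / null deviance * 100
D2 <- function(m) (m$null.deviance - m$deviance) / m$null.deviance * 100
D2(m_ratio)
```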
General results: the scavenging community
A total of 1617 events of scavenger species were recorded in the three study areas (Cazorla: 68%; Murcia: 13%; Espuña: 19%; Table S1). We detected 14 scavenger species (eight mammals and six birds). Species richness was highest in Cazorla (13 spp.) and lowest in Espuña at fox carcasses (5 spp.). Differences in species richness were mainly due to birds, with six species recorded in Cazorla and only one species in Murcia and Espuña. The red fox was the most frequently recorded scavenger species in the three study areas (59.4% of total events). Consumption events represented 15.7% of the total events recorded. Taking all study areas together, foxes were responsible for most consumption events (53.4% of events). Carcasses were consumed by nine species (five birds and four mammals) in Cazorla, two species in Murcia (one bird and one mammal), two species in Espuña at fox carcasses (two mammals), and two species in Espuña at other carcasses (one bird and one mammal). When focusing on the avian scavenger species that scavenged most frequently, consumption events were more frequent than non-consumption events, while the opposite was true for all mammalian scavengers (Table S1). Cannibalism represented 16.9% of the total events recorded for the red fox at fox carcasses. We did not record any consumption event by domestic carnivores (dogs and cats). General patterns of carcass use by the three scavenger categories in each study area are shown in Table 1.
Weekly scavenging patterns
For a given week, there were more carcasses visited but not consumed by mammalian scavengers than carcasses visited and consumed, for all areas and carcass types. This pattern was not observed for scavenging birds, especially in Cazorla, where visited carcasses were more frequently consumed than not consumed. Mammalian scavengers other than foxes only consumed fox carcasses. The number of carcasses visited and consumed was highest in Cazorla and Espuña (foxes at other carcasses), and lowest in Murcia (Fig. 1a, Fig. S1). In relation to events per studied carcass, we observed a similar general pattern, with far more non-consumption events than consumption events, except for foxes at other carcasses in Espuña (Fig. 1b; Fig. S2).
The ratio between consumed and non-consumed carcasses by foxes (Fig. 2) showed a bell-shaped distribution, with maximum values (i.e., more carcasses consumed than non-consumed) from the third (in Cazorla) to the fifth (in Murcia) week in the case of fox carcasses. For the carcasses of other species, the maximum took place in the second week, i.e., two weeks earlier than the maximum recorded for fox carcasses in the same study area (Espuña). Even during the peaks, fox carcasses were more frequently left unconsumed than consumed, and only for other carcasses in Espuña did the number of consumed carcasses exceed the number left unconsumed. We observed a similar general pattern for events, with peaks occurring from the third week on in the case of fox carcasses and in the second week in the case of other carcasses, i.e., several weeks earlier than the peak for fox carcasses in the same study area. While fox carcasses in Cazorla and other carcasses in Espuña began to be consumed during the first week after their deployment, the first consumption events at fox carcasses in Espuña and Murcia were recorded from the second and third week, respectively. The lowest number of consumption events in relation to non-consumption events at fox carcasses was found in Espuña, an area where, in contrast, consumption events at other carcasses exceeded non-consumption events during the peak (Fig. 2). Red foxes detected 94% of studied carcasses, but consumption events were recorded only in one-third to two-thirds of them (Cazorla: 63%; Murcia: 38%; Espuña, fox carcasses: 44%; Espuña, other carcasses: 50%). No other carnivore species consumed carcasses of carnivores other than fox. Foxes detected most carcasses within the first three weeks after carcass deployment. However, the stabilization of the number of carcasses consumed took longer. Within carcasses visited by foxes, the difference in the accumulated number of carcasses consumed and not consumed during the first two weeks was higher for fox carcasses compared to those of other carnivores (Fig. S3).
Table 1
Scavenging patterns at carcasses of red fox and other mesocarnivores in the three study areas of southeastern Spain, according to different scavenger groups (red fox, other mammals, birds and total scavengers). The number of monitored carcasses is indicated for each study area and carcass type. Mean ± SD (min.-max.) is shown for carcass detection time, time of first consumption, total events and consumption events for each scavenger group. The number of carcasses visited and consumed by each scavenger group is shown together with the percentage relative to the total carcasses monitored per area and carcass type (in parentheses). Time is rounded to the nearest hour. We considered as consumed those carcasses with at least one consumption event by a given scavenger group.
Determinants of carrion consumption by fox
Regarding fox carcasses, the time from carcass detection by foxes to the first record of consumption was mainly related to detection time by foxes in the three study areas, according to the GLM with the highest D² (Table 2). In particular, foxes started to consume sooner those carcasses that were detected later (Table 3). The ratio consumption:non-consumption events of foxes was mainly related to consumption by other scavenger species (Table 2), with the ratio more biased towards consumption events at carcasses also consumed by other scavengers (Table 3).
In relation to carcasses of fox and other carnivores in Espuña, both the time of first consumption by foxes and the ratio consumption:non-consumption events of foxes were mainly dependent on carcass type (Table 2). Foxes started to consume heterospecific carrion c. 10 days earlier on average than conspecific carcasses (Tables 1, 3; Fig. 1), and showed relatively more consumption events at other carcasses compared to conspecific ones (Table 3; Fig. 2). Specifically, consumption events by foxes were, on average, c. seven times more frequent at heterospecific carcasses than at conspecific ones (Table 1). In general, according to deviance values, the models for this dataset (fox and other carcasses in Espuña) had a higher explanatory capacity than the models for the dataset of fox carcasses only (Table 2).
Discussion
Despite being a key defensive barrier against trophically transmitted parasites (Ezenwa et al., 2016; Hart, 1990, 2011; Sarabian et al., 2018; Weinstein et al., 2018), parasite avoidance behaviours in carnivore species have received little scientific attention, especially in the context of carrion use (Moleón and Sánchez-Zapata, 2021). Here, we found that red foxes were very efficient in detecting mesocarnivore carrion, as they visited nearly all monitored carcasses. However, as expected, foxes were generally reluctant to consume them, especially those of conspecifics. In addition, consumption by foxes, when recorded, was delayed several days (heterospecific carcasses) or weeks (conspecific carcasses) after carrion detection, and the time elapsed between fox carcass detection and consumption by foxes was shorter for carcasses discovered later. Other mammalian scavengers showed a similar pattern to foxes: they detected most carcasses during the first week after their deployment, but we observed very few consumption events (no cannibalistic events recorded), with all consumption taking place from the second week on. The use of videos instead of photos and the longer monitoring period in this study may explain why we found more cannibalistic events here than in a previous study in two of the three study areas (Cazorla and Espuña; Moleón et al., 2017). For comparison, in these two study areas, ungulate carcasses are normally consumed within the first week (Moleón et al., 2017; Morales-Reyes et al., 2017; see "Study areas and scavenging context" for more details). These differences cannot be explained by the smaller size of mesocarnivore carcasses in relation to the larger ungulate carcasses, as smaller carcasses are normally consumed earlier. Overall, our results are in agreement with diet studies on the red fox (Fairley, 1970; Remonti et al., 2005) and other mammalian carnivores (Caro and Stoner, 2003; Fox, 1975; Palomares and Caro, 1999) indicating that cannibalism is very uncommon in these species, and support the hypothesis that avoidance of carrion from phylogenetically related prey is a widespread behaviour in carnivores to prevent meat-borne parasite risk (though see Van Allen et al., 2017 for other taxa).
Fig. 1. Weekly variation in consumption patterns of mesocarnivore carcasses by red fox and other mammalian scavengers in three areas of southeastern Spain. A) Weekly percentage of consumed ("cons."; i.e., with at least one consumption event) and non-consumed ("non-cons."; i.e., visited, but no consumption events recorded) carcasses by red fox and other mammalian scavengers per study area and carcass type. B) Weekly number of consumption ("cons.") and non-consumption ("non-cons.") events by red fox and other mammalian scavengers per study area and carcass type. For a given week, the number of events is divided by the grand total number of carcasses studied in each study area. The number of carcasses available each week to scavengers is given in parentheses. Panels for carcasses of carnivores other than foxes are in boxes.
Why do foxes and other mesocarnivores not feed on carnivore carcasses, especially conspecific carrion, upon detection? Our results suggest that the foraging decisions of scavengers are probably shaped by two major contrasting forces (Fig. 3), namely the nutritional reward provided by carrion of phylogenetically similar species (Mayntz and Toft, 2006; Meffe and Crump, 1987) and the risk of acquiring meat-borne parasites shared with these species (Huang et al., 2014; Moleón et al., 2017; Pfennig, 2000; Pfennig et al., 1998; Rudolf and Antonovics, 2007). On the one hand, the nutritional quality of carrion decreases with time (Parmenter and MacMahon, 2009). Thus, the most advantageous strategy for foxes would be to feed before carrion is too degraded. On the other hand, the risk of acquiring viable trophically transmitted parasites is also highest when the carcass is fresh (Fan et al., 1998; Pozio, 2016). This may force foxes to wait until the carcass reaches a "safe" parasite load threshold, which is probably more restrictive for conspecific carrion because the number of parasite species that can affect the consumer is then maximum (Fig. 3). At this point, it is important to remark that the risk of parasite infection is a perceived risk related to potential rather than actual parasite presence (Curtis, 2014; Moleón et al., 2017; Weinstein et al., 2018). In this sense, many meat-borne parasites, such as Trichinella spp., do not provoke any external lesion or sign of disease after the establishment of the infective larvae in the musculature (Gottstein et al., 2009), and all carnivore carcasses of our study belonged to healthy animals without any macroscopic lesions. Future investigations could assess whether the presence of macroscopic lesions on carnivore carcasses may condition the trophic behaviour of scavenger species, considering, nevertheless, that external signs of infection are usually more difficult to identify for meat-borne parasites than for non-trophically transmitted parasites. Finally, within a carnivore diet context, all animal flesh can be considered of relatively high quality (Swift et al., 1979). Thus, the risk of acquiring meat-borne parasites is probably much more determinant than the nutritive value of the carcass when guiding foraging decisions (see Fig. 3).
At which stage of carcass decomposition this nutritional value-parasite risk trade-off favours feeding on conspecific and phylogenetically related carcasses may depend on several factors extrinsic and intrinsic to the scavenger. Regarding extrinsic factors, the infectivity of Trichinella spp. and other meat-borne parasites is known to be highly related to environmental conditions and the changes that occur during carrion decay (Bengis, 1997; Pozio, 2000). For instance, high humidity and low temperature favour the survival and transmission of Trichinella larvae (Fariña et al., 2017; Oivanen et al., 2002b; Pozio, 2016; Riva et al., 2012; Rossi et al., 2019). In cold environments, at constant low temperatures such as those reached beneath the snow, the infective capacity of T. britovi larvae in red fox carcasses does not show important reductions during the first four months. However, above the snow, with more oscillating temperatures, the parasite's reproductive capacity sharply decreases after two months, and almost no viable larvae are present after three months (Rossi et al., 2019). At higher temperatures (average: 23 °C), the number of infective T. spiralis larvae in rat carcasses decreases severely after the first week (Oivanen et al., 2002b). In the case of decaying fox meat, the number of infective larvae of several Trichinella genotypes has been found to decrease rapidly during the first two weeks at 22-27 °C and 100% relative humidity (Von Köller et al., 2001). In our study areas, characterized by mild to warm temperatures and with carcasses rarely covered by snow during winter, meat-borne parasites are expected to survive only a few weeks even in the coldest season. Moreover, in these climatic conditions, flesh decomposes faster than at colder latitudes (Selva et al., 2005), with most non-scavenged carrion disappearing within the first two months due to necrophagous invertebrates, decomposers and dehydration (Muñoz-Lozano et al., 2019). In this regard, indirect infection from eating carrion insects could also affect scavenging carnivores. However, the survival period of meat-borne parasites inside insect bodies seems to be very limited. For instance, Trichinella larvae may survive and be infective after being ingested by maggots, though maximum survival under the most favourable environmental conditions is five days (Maroli and Pozio, 2000). Given that climate may play an important role in determining parasite survival around carcasses, further research is needed in colder areas, especially in light of the ongoing global climate change (Cizauskas et al., 2017). All of this is consistent with our findings of low rates and delayed consumption of carnivore carrion, especially of conspecifics, and could explain why foxes practiced cannibalism earlier when they discovered the carcass at advanced stages of decomposition. The fact that the ratio between consumption and non-consumption events of foxes was higher at carcasses that were also consumed by other scavengers suggests some interspecific facilitation process, as is typical in scavenging assemblages (Moleón et al., 2014). In particular, carrion consumption by other scavenger species could be interpreted as a signal that the carcass is safe, so foxes may have partly relied on these indirect cues to guide their foraging decisions. Alternatively, it may indicate that all scavengers rely on similar cues.
Fig. 2. Weekly variation in the ratios consumed:non-consumed carcasses and consumption:non-consumption events by the red fox per study area and carcass type. Values above and below the dashed horizontal grey line indicate, respectively, ratios biased towards consumption and non-consumption. For a given week, the number of carcasses available to scavengers is given in parentheses. Panel for carcasses of carnivores other than foxes is in the box.
Table 2
AICc-based model selection to assess the factors influencing "time of first consumption" by foxes and the "ratio consumption:non-consumption events" by foxes on conspecific carcasses in three study areas of southeastern Spain ("among areas" comparisons) and on conspecific and heterospecific carcasses in one of these study areas ("fox vs. other carcasses" comparisons). Explanatory variables include study "area", "habitat", "year", "season", "hour", "carcass type", presence of scavengers other than fox ("scav. pres."), consumption by scavengers other than fox ("scav. cons."), and carcass "detection time" by foxes (see text for details on the variables). The number of estimated parameters (k), AICc values, AICc differences (ΔAICc) with the model with the lowest AICc, and the variability of the response variable explained by the predictor (deviance, D²) are shown. Selected models are in bold.
In relation to intrinsic factors, our study design (with carcasses normally separated from each other by several kilometers) and the occasional individual recognition of foxes (thanks to external, identifiable features observed in the images) revealed that some foxes practiced cannibalism while others rejected conspecific carcasses, which could indicate some individual variation in the way foxes confront the trade-off between the nutritional gains and the risk of acquiring parasites associated with carrion. According to state-dependent foraging theory (McNamara and Houston, 1987), hungry, young, senescent and sick individuals could be more prone to feeding on low-quality food and assuming the risk of a dangerous meal (Fodrie et al., 2012; Mukherjee and Heithaus, 2013), which needs to be confirmed in future investigations.
Table 3
Generalized linear models (GLMs) showing the relationship between "time of first consumption" by foxes and the "ratio consumption:non-consumption events" by foxes with the explanatory variables included in the selected models ("detection time": carcass detection time by foxes; "hour" of carcass placement: morning, afternoon; "carcass" type: fox, other; "scav. cons.": consumption by scavengers other than fox). The estimate of the parameters (including the sign), the standard error of the parameters (SE) and the degrees of freedom of the models (df) are shown.
Fig. 3. A) [...] (Swift et al., 1979). B) On the other hand, the probability of a carcass having fewer infective stages of meat-borne parasites increases with time. In fresh carcasses, the risk for a consumer of acquiring meat-borne parasites, at least for direct life cycle parasites, is maximum when it ingests conspecific carrion, and minimum for carcasses belonging to weakly related species, with which the number of shared parasite species is lowest. Non-linearity is probably a fundamental property of all of these functions. C) These contrasting forces probably shape the observed patterns of carcass consumption (for our study areas, see this study, Arrondo et [...]).
Epidemiological implications
The results of this and previous studies (Muñoz-Lozano et al., 2019; Olson et al., 2016; Selva et al., 2005) show that cannibalistic scavenging is a rare feeding strategy in mammalian mesocarnivores. In the case of the red fox, all mesocarnivore carcasses are risky carcasses in epidemiological terms, but the risk associated with fox carcasses is highest because the probability of sharing parasite species is highest. Here, we also showed that cannibalistic scavenging, when it does occur, generally takes place after the period of maximum survival of infective stages of potential meat-borne parasites, i.e., several weeks after the carcass becomes available. Overall, this suggests that cannibalistic scavenging is an infrequent transmission route of meat-borne parasites among foxes, and possibly other wild carnivores. This challenges the widespread assumption that multi-host parasites such as Trichinella spp. are closely linked to intra-specific consumption, including both predation and scavenging (Badagliacca et al., 2016; Campbell, 1988; Pozio, 2000). This assumption may be partially based on the frequent presence of fox hairs in the faeces of this canid, which has traditionally been interpreted as evidence of cannibalism. However, Remonti et al. (2005) argued that undigested fox hairs found in faeces are mainly related to coat-cleaning rather than cannibalism. Thus, the transmission and maintenance of the sylvatic cycle of multi-host parasites transmitted by meat is likely to depend, more than previously thought, on transmission routes other than cannibalistic consumption of infected carrion.
Similar scavenger behavioural patterns have recently been described at carnivore carcasses regarding non-trophically transmitted parasites in the same study areas (Gonzálvez et al., 2021). However, the fact that contact with carnivore carcasses occurs much more frequently (Gonzálvez et al., 2021) than carrion consumption (this study) suggests that mammalian scavenger behaviour is primarily constrained by the perceived risk of acquiring meat-borne parasites.
Importantly, our findings indicate that the risk of meat-borne parasite transmission from carcasses of wild carnivore species to domestic carnivores (dogs and cats) is negligible, at least in our study areas. This was true even in the periurban study area, where the probability of dogs and cats finding a carcass is higher compared to more natural landscapes. Thus, our study suggests that carrion removal from the field, a usual management method against the spread of meat-borne parasites (e.g., Donázar et al., 2009; Probst et al., 2017), is not a justified strategy in the case of carnivore carcasses. Overall, we provide an example of how the detailed study of scavenging animals using images (especially videos) provided by camera traps at carcass sites can help to identify which behaviours and host species may represent an epidemiological risk at the wildlife-domestic-human interface, especially regarding mammalian carnivores, which are often elusive and cryptic species that are difficult to survey (Barea-Azcón et al., 2007; Balme et al., 2009). In this sense, our study provides scientific evidence towards precisely assessing the risk associated with mesocarnivore carcasses and the role that wild carnivore species may play as spreaders or reservoirs of meat-borne parasites, which has important implications from a One Health perspective.
Conclusions
Carnivore carcasses are fundamental components in the landscape of disgust for carnivores (Moleón and Sánchez-Zapata, 2021; Weinstein et al., 2018), and offer many emerging epidemiological, ecological, and evolutionary research opportunities (Gonzálvez et al., 2021; Moleón et al., 2017, 2020; Moleón and Sánchez-Zapata, 2021). Our findings support the view that the indirect, non-consumptive effects of parasites may strongly influence host behaviour, with potential effects that propagate through food webs (Moleón and Sánchez-Zapata, 2021; Sarabian et al., 2018). From an epidemiological perspective, the role of carnivore carrion in the transmission of meat-borne pathogens at the wildlife-domestic-human interface, many of which have relevant zoonotic implications (e.g., Trichinella spp.), seems questionable. We have also shown the advantages of detailed behavioural studies that use camera-trapping and combine different metrics to test, and challenge, widely accepted assumptions on meat-borne parasite transmission. Future research may benefit from our conceptual model, which allows making predictions on the decisions of carnivores foraging at carcasses of different nature (including different parts of carcasses, which may differ in both nutritional quality and parasite presence and abundance) and in different ecological contexts (e.g., different scavenger communities, which may influence risk perception). This conceptual model may be further expanded by adding the predation risks associated with carcasses, especially in areas with top predators that may prey upon subordinate carnivores (Allen et al., 2015; Moleón and Sánchez-Zapata, 2021). Exploring how animal species and individuals recognize and respond to cues associated with parasite risk may help in our understanding of the ecological and evolutionary relationships between carnivore hosts and their parasites, and is fundamental to the efficient management of zoonotic diseases under global change scenarios.
Funding
M.M. was supported by a Ramón y Cajal research contract from the MINECO (RYC-2015-19231). This study was partly funded by the Spanish Ministry of Economy, Industry and Competitiveness and EU ERDF funds through the project CGL2017-89905-R.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
"year": 2021,
"sha1": "1b3da8de20c54eb2ad255a85d913c57b8773700a",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.applanim.2021.105462",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "5b2eef3d6e8754ed9aa59bf08aaaf3f763ff6128",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Endothelial Dysfunction in Diabetes
Diabetes is a worldwide health issue closely associated with cardiovascular events. Given the pandemic of obesity, the identification of the basic underpinnings of vascular disease is strongly needed. Emerging evidence has suggested that endothelial dysfunction is a critical step in the progression of atherosclerosis. However, how diabetes affects the endothelium is poorly understood. Experimental and clinical studies have illuminated the tight link between insulin resistance and endothelial dysfunction. In addition, macrophage polarization from M2 towards M1 contributes to the process of endothelial damage. The possibility that novel classes of anti-hyperglycemic agents exert beneficial effects on the endothelial function and macrophage polarization has been raised. In this review, we discuss the current status of knowledge regarding the pathological significance of insulin signaling in endothelium. Finally, we summarize recent therapeutic strategies against endothelial dysfunction with an emphasis on macrophage polarity.
Introduction
Diabetes is a global health problem, characterized by defective insulin secretion and resistance to insulin. According to the International Diabetes Federation (IDF), the number of people with diabetes is estimated to rise from 425 million at present to more than 600 million by 2045 [1]. Diabetes carries a significant risk of microvascular pathologies, such as retinopathy, nephropathy, and neuropathy, as well as of macrovascular atherosclerotic diseases. Indeed, the relative risk of cardiovascular disease increases by two- to four-fold in patients with diabetes compared to non-diabetes patients [2]. Endothelial dysfunction is an early marker for atherosclerosis, preceding angiographic or ultrasonic evidence of atherosclerotic plaque [3,4]. In addition, accumulating evidence implicates endothelial dysfunction as an event seen even in patients with prediabetic conditions, such as impaired fasting glucose (IFG) and impaired glucose tolerance (IGT) [5].
The vascular endothelium functions as a structural barrier between the lumen and the vessel wall. Studies over the past decade have also shown that the endothelium secretes numerous growth factors and cytokines that regulate multiple vascular functions (e.g., vascular tone, proliferation of vascular smooth muscle, platelet aggregation, coagulation, and fibrinolysis). Furthermore, the endothelium mediates vasoconstriction by secreting mediators such as endothelin-1 (ET-1) and thromboxane A2. In contrast, substances such as nitric oxide (NO), prostacyclin, and endothelium-derived hyperpolarizing factor (EDHF) regulate vasodilation [2]. NO, the primary contributor, is synthesized from L-arginine by endothelial NO synthase (eNOS) and regulates the endothelium-dependent relaxation of arteries [6]. Endothelial dysfunction is characterized by a loss of molecular functions in endothelial cells. Factors promoting this event include metabolic disorders (e.g., diabetes [7,8], obesity [9], dyslipidemia [10]), smoking [11], a high salt intake [12], lack of exercise [13], and menopause [14]. The release of reactive oxygen species (ROS) and the generation of oxidative stress are considered critical factors for the pathogenesis of diabetic vascular complications. While endothelial dysfunction is associated with various pathological aspects, including local inflammation [15,16] and oxidative stress [17,18], the pivotal mechanisms are the decrease in NO production and the inactivation of NO [19]. The inactivation of NO results from oxidative stress caused by uncoupling of eNOS [20] and an increase in ROS-generating enzymes, including nicotinamide adenine dinucleotide phosphate (NADPH) oxidase (NOX), cyclooxygenases (COX), and xanthine oxidase (XO) [21][22][23].
A variety of clinical methods are used for assessing the endothelial function. Previous studies assessing the endothelial function in humans have often evaluated NO-dependent vasodilation. Measuring the changes in the diameter and blood flow of the coronary artery in response to intra-coronary infusion of acetylcholine is considered the standard method [24]. Non-invasive methods for measuring the endothelial function have been evaluated in previous studies [25]. One of the most commonly applied techniques is flow-mediated dilation (FMD), which is evaluated by brachial artery ultrasound [26]. This method is well-trusted and relevant to cardiovascular risk factors [27] but is highly dependent on the experience level of the operators, who need special training [28]. The analysis of pulse amplitude tonometry (PAT) in the index finger after reactive hyperemia has been considered as another non-invasive method for assessing the endothelial function [29]. Elevation of plasma concentrations of biomarkers of hemostasis, inflammation, and oxidative stress is also used as an index suggesting endothelial dysfunction [24]. Circulating levels of markers such as P- and E-selectin, ICAM-1, VCAM-1, plasminogen activator inhibitor-1 (PAI-1), oxidized low-density lipoprotein (oxLDL), and asymmetrical dimethylarginine (ADMA) have been used as markers of endothelial dysfunction [24].
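Although not spelled out in the text above, FMD is conventionally expressed as the percentage increase in brachial artery diameter from baseline to peak hyperemic dilation. A minimal R sketch of this standard calculation, with illustrative diameter values:

```r
# Standard FMD definition: percentage change in arterial diameter
# from baseline to peak hyperemia (input values are hypothetical).
fmd_percent <- function(d_baseline, d_peak) {
  (d_peak - d_baseline) / d_baseline * 100
}

fmd_percent(d_baseline = 4.0, d_peak = 4.3)  # 7.5% FMD
```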
We herein review the underlying mechanisms of endothelial dysfunction in diabetes and discuss how endothelial metabolism is targeted by the clinical agents.
Insulin Resistance and Endothelial Dysfunction
Insulin plays a vital role in the maintenance of vascular homeostasis. Insulin resistance is defined as an impaired biologic sensitivity and/or responsiveness to insulin stimulation in target tissues, including the muscle, adipose tissue, and liver. Substantial evidence supports insulin resistance as the essential pathophysiologic impairment responsible for metabolic and cardiovascular disorders, collectively known as metabolic syndrome. Disturbance of insulin signaling eventually leads to glucose intolerance, diabetes, dyslipidemia, and coronary artery disease. Over the past two decades, many studies have focused on mechanisms provoking endothelial dysfunction, including ROS-mediated eNOS uncoupling, loss of NO bioavailability, and hyperglycemia-induced apoptosis of vascular endothelium, all of which ultimately lead to impaired vascular relaxation, a common biomarker of endothelial dysfunction. Understanding the endothelial control of metabolism in detail may aid in the development of novel approaches for intervention in obesity and obesity-related diseases.
Insulin Signaling in Endothelium
Insulin binds to the cell surface receptor known as the insulin receptor (IR). Activated IR phosphorylates intracellular substrates, such as insulin receptor substrate (IRS) family members, Shc proteins, and Gap-1 [30]. In humans, three isoforms (IRS-1, -2, and -4) have been shown to play important roles that vary depending on the cell type and metabolic conditions. For example, IRS-1 regulates insulin action in skeletal muscle, as evidenced by findings that genetic ablation of IRS-1 results in insulin resistance and hypertriglyceridemia. IRS-2 functions as a regulator of insulin action in the liver and pancreatic β cells. Intriguingly, IRS-2-deficient mice are more susceptible to diabetes than IRS-1 knockout mice because of the impairment of insulin secretion [31], indicating that IRS-2 contributes to the molecular basis for diabetes. Tyrosine-phosphorylated IRS activates phosphoinositide 3-kinase (PI3-K), which then converts phosphatidylinositol (4,5)-bisphosphate (PIP2) to phosphatidylinositol (3,4,5)-trisphosphate (PIP3). PIP3 initiates a cascade of serine kinases, resulting in the recruitment of phosphoinositide-dependent kinase-1 (PDK-1) and Akt to the membrane, where they are activated [32]. Activation of Akt greatly influences cellular functions by regulating NO production, angiogenesis, and glucose metabolism [33].
Both IRS-1 and -2 are expressed in the endothelium [34]. Akt activation promotes the cell survival and proliferation of tumor vasculature [35]. Under pathophysiological conditions including obesity and insulin resistance, selective endothelial insulin resistance is promoted by proteasomal degradation of IRS-2 [34]. In the setting of insulin resistance, the reduction of endothelial proliferation results in atherosclerosis, diminished collateral angiogenesis in occluded coronary arteries, and reduced reendothelialization [2]. Furthermore, emerging evidence has shown that the proangiogenic role of Akt is induced by the generation of hypoxia-inducible factor α (HIFα). HIFα activation leads to the expression and subsequent production of angiogenic factors, such as vascular endothelial growth factor (VEGF). Akt's ability to enhance the rate of glycolysis is dependent on HIFα and the subsequent expression of glycolytic enzymes [36].
Another insulin signaling pathway proceeds from Shc, which causes activation of the small GTP-binding protein Ras and then initiates a phosphorylation cascade involving mitogen-activated protein kinase (MAPK). In endothelial cells, the MAPK pathway mediates the secretion of ET-1 [37]. Insulin signaling pathways form an extremely complicated network with multiple feedback loops. Broadly, while MAPK pathways are only weakly associated with regulating metabolic functions, PI3-kinase-dependent pathways function as the pivotal branches mediating the metabolic actions of insulin.
Insulin Resistance in the Endothelium
Insulin resistance is characterized by a deficiency in the metabolic actions of insulin. Impairment of the PI3-K/Akt pathway results in a lack of insulin sensitivity in peripheral tissues. When the PI3-K/Akt axis is downregulated, the MAPK pathway is strongly activated by compensatory hyperinsulinemia to produce inflammatory mediators (e.g., ICAM-1, VCAM-1, and E-selectin) [38]. The imbalance between these two signals leads to endothelial dysfunction, characterized by a decreased production of NO and an increased generation of ET-1 in endothelial cells [39,40].
NOX is a key molecule in the development of endothelial dysfunction and a major source of ROS production in endothelial cells. Type 2 diabetes is characterized by impaired control of the redox environment with overproduction of ROS [41]. The main factors playing a protective role are eNOS and NO. The biological balance at the endothelium is maintained by vasodilatory substances (e.g., prostaglandins, NO) and vasoconstricting factors (e.g., ET-1, angiotensin II). The activated PI3-K/Akt pathway induces the phosphorylation of eNOS, the transformation of L-arginine to L-citrulline, and the production of NO. NO exerts a vasoprotective role by inhibiting the proliferation of vascular smooth muscle cells, the expression of inflammatory cytokines, and platelet aggregation. In contrast, the lack of NO generation leads to the enhanced production of inflammatory and thrombotic cytokines [42]. Taken together, these findings indicate that the involvement of endothelial dysfunction and insulin resistance in pathological disorders contributes to the impairment of cellular glucose uptake and NO-dependent vasodilation, the enrichment of oxidative stress, and inflammation.
Elevation of circulating cytokine levels is strongly associated with insulin resistance and contributes to endothelial dysfunction. Increased levels of inflammatory mediators, including C-reactive protein (CRP), tumor necrosis factor-α (TNF-α), and interleukin-6 (IL-6), inhibit insulin-stimulated NO production by decreasing the eNOS expression, leading to the inhibition of the PI3K/Akt/eNOS pathway [43,44]. Obesity and type 2 diabetes are associated with elevated levels of leptin and resistin, which induce increases in TNF-α and IL-6 [45]. In addition, leptin enhances the serine phosphorylation of IRS-1, thereby disturbing insulin signaling through the PI3-K/Akt pathway [46]. In contrast, resistin reduces the expression of eNOS [47]. Although adiponectin and ghrelin stimulate NO production through the PI3-K/Akt signaling pathways and enhance the NO bioavailability, both are known to be reduced in patients with obesity or type 2 diabetes [48,49].
Crosstalk between Macrophage Polarization and Endothelial Cells
Macrophages and endothelial cells are closely related to each other. Endothelial cells produce cytokines pivotal for the differentiation and growth of macrophages. Macrophages constitute an important line of defense against infection and are essential for tissue repair as well as wound healing [50,51]. These broad actions are mediated through macrophage conversion induced by environmental signals, such as lower temperatures and the secretion of colony-stimulating factor 1 (CSF-1) and interleukin (IL)-4. There are two types of macrophages: the proinflammatory M1 phenotype (classic activation) and the anti-inflammatory M2 phenotype (alternative activation). Adipose tissue macrophages (ATMs) from obese mice and humans are polarized toward an M1 phenotype, with the upregulation of TNF and inducible NO synthase (iNOS). In contrast, "lean" ATMs express high levels of M2 genes, including IL-10, Ym1, and Arginase 1 [52]. Emerging evidence indicates that proinflammatory M1 polarization induces adipose inflammation [53,54]. Consistently, a lack of M1 macrophages improves insulin sensitivity in obese mice [55,56]. In contrast, deletion of M2 macrophages has been shown to contribute to insulin resistance in wild-type mice [57]. These findings imply that macrophage polarization is implicated in metabolic disturbance.
NO exerts anti-inflammatory and antithrombotic effects. These actions are mediated by the activation of soluble guanylate cyclase, which in turn activates cyclic guanosine monophosphate (cGMP)-dependent protein kinase (PKG) through increased levels of cytoplasmic cGMP [58]. Vasodilator-stimulated phosphoprotein (VASP), a downstream target of PKG, has been identified as a regulator controlling cytoskeletal remodeling and cell migration [59]. Previous studies focusing on insulin resistance have revealed the endothelial NO/VASP-mediated suppression of inflammation in adipose tissue and liver [60,61]. Of note, activation of NO/VASP signaling promotes a phenotypic change into an M2 macrophage state. Conversely, a high-fat diet (HFD) attenuates M2 polarization and induces M1 activation in Kupffer cells, which leads to insulin resistance in the liver [58]. Taken together, these findings suggest that a therapeutic approach targeting the NO/VASP pathway would promote anti-inflammatory actions and may thus be effective for managing metabolic disorders, including obesity and diabetes (Figure 1).
Targeting Endothelial Dysfunction
The ultimate goal of the treatment of diabetes is to prevent microvascular and macrovascular complications. The endothelium lining the inner wall of the vasculature modulates basic hemostatic functions, including the circulation of blood cells, vascular tone, platelet activity, and inflammation. Endothelial dysfunction is considered an early predictor of future cardiovascular events and atherosclerosis. Growing knowledge concerning the diverse functions of the endothelium has focused attention on therapeutic strategies that may improve the endothelial function. From a clinical standpoint, a large amount of experimental evidence supports the notion that therapies targeting endothelial dysfunction reduce cardiovascular mortality and morbidity. It is important to consider whether or not drugs used in the clinical management of type 2 diabetes exert positive and pleiotropic effects on the endothelium independent of the glucose-lowering action. While statins have been reported to exert vascular protective effects that are independent of lowering the LDL-cholesterol level, some anti-diabetic agents have recently been suggested to exert beneficial effects against endothelial dysfunction. In addition to traditional drugs, clinical and experimental data support the possibility that novel classes of anti-hyperglycemic agents have beneficial effects on the endothelial function and macrophage polarization.
SGLT2 Inhibitors
SGLT2 inhibitors block the glucose uptake in the renal proximal tubule of the nephron, resulting in the induction of glycosuria and decreased blood glucose levels. Recent trials, such as EMPA-REG OUTCOME and the CANVAS Program, have revealed that the SGLT2 inhibitors empagliflozin and canagliflozin attenuate cardiovascular events and reduce the death rate compared with placebo [62,63]. SGLT2 inhibitors have been suggested to exert beneficial actions on the endothelial function. For example, Shigiyama et al. clearly demonstrated that dapagliflozin added to metformin improved the endothelial function by reducing oxidative stress in patients with inadequate glycemic control [64]. Furthermore, dapagliflozin has been shown to improve systemic endothelial dysfunction and arterial stiffness, independent of the blood pressure and blood glucose levels [65]. Lee et al. also reported that dapagliflozin improves vascular smooth muscle dysfunction, with alterations of the gut microbiota, in type 2 diabetic mice [66]. Uthman et al. demonstrated the anti-inflammatory action of SGLT2 inhibitors by showing that empagliflozin rescued the TNF-α-induced reduction of the eNOS expression in human coronary arterial endothelial cells [67].
From the perspective of inflammation, empagliflozin has been suggested to promote the browning of white adipose tissue (WAT) by polarizing ATMs toward the M2 phenotype [68]. Furthermore, dapagliflozin attenuates cardiac fibrosis by promoting M2 macrophage polarization after myocardial infarction in rodents [69]. As such, the inhibition of SGLT2 may shift the macrophage polarity toward an M2 status and thus prevent metabolic disorders causing endothelial dysfunction.
GLP-1 Receptor Agonists
The GLP-1 receptor (GLP-1R) is expressed not only in pancreatic β-cells but also in various tissues and organs, including endothelial cells, fat, brain, heart, liver, and muscle, and both GLP-1 and GLP-1R agonists exert pleiotropic effects [70,71]. From the standpoint of vascular protection, the usefulness of GLP-1R agonists has been reported in basic research. For example, GLP-1R agonists reduce the production of inflammatory cytokines [72,73] and the apoptosis of endothelial cells [74] and induce eNOS production [75]. The PI3K/Akt-eNOS activation pathway has been suggested as an underlying mechanism [76]. Cai et al. reported that GLP-1R agonist treatment protects endothelial cells in a GLP-1R/ERK1/2-dependent manner [77]. Furthermore, recent trials have shown that exenatide, a commonly used GLP-1R agonist, improves the endothelial function in patients with type 2 diabetes and pre-diabetes [78][79][80].
As is the case with SGLT2 inhibitors, GLP-1R agonists are suggested to modulate macrophage polarity. Reprogramming of macrophages towards the M2 phenotype has been shown in apoE- and IRS2-deficient mice treated with lixisenatide. This was associated with a reduction in the atheroma plaque size [81] and regression of the early stage of atherogenesis [82]. However, these studies were only performed in mouse models. Further mechanistic investigations will be required in order to elucidate the precise role of GLP-1R agonists in macrophage polarity.
DPP-4 Inhibitors
Of note, DPP-4 has been shown to regulate inflammation and insulin resistance in the setting of obesity by modulating the macrophage polarity. For instance, linagliptin promotes the shift of polarity toward the anti-inflammatory M2 macrophage phenotype in the liver and adipose tissue, thereby improving local inflammation and insulin resistance [87].
Furthermore, clinical data indicate that DPP-4 inhibitors, including sitagliptin [88], vildagliptin [89], linagliptin [90], and saxagliptin [91], improve endothelial dysfunction. Treatment with sitagliptin for 12 weeks significantly improved the change in FMD and increased the circulating levels of CD34, a marker of endothelial progenitor cells [92]. In addition, Kajikawa showed that saxagliptin markedly increased FMD and substantially decreased stromal cell-derived factor-1α (SDF-1α), a DPP-4 substrate that participates in the recovery from vascular injury by recruiting endothelial progenitor cells [91]. Further clinical trials and mechanistic investigations will be required in order to validate the role of DPP-4 inhibitors in the pathogenesis of vascular events.
Biguanides
Metformin, the most commonly used anti-diabetic agent, increases blood flow in adipose tissue and skeletal muscle [93]. Metformin-induced production of eNOS and inhibition of leukocyte adhesion, vascular aging, and endothelial cell apoptosis have also been reported, with the activation of AMP-activated protein kinase (AMPK) as the underlying mechanism [94,95].
Metformin also exerts an anti-inflammatory function in the endothelium and adipose tissue through multiple pathways. For example, a clinical trial of long-term metformin treatment in patients with type 2 diabetes reported its efficacy in reducing the plasma levels of markers such as VCAM-1 and ICAM-1, independent of changes in HbA1c [96]. In addition, metformin can mediate macrophage polarization toward the M2 phenotype, with subsequent inhibition of the c-Jun N-terminal kinase (JNK) pathway [97].
From the viewpoint of clinical trials, many prospective studies in patients with type 2 diabetes have shown that metformin treatment improves the cardiovascular prognosis independently of glycemic control. In a study of patients with type 2 diabetes, metformin improved both the insulin resistance and the acetylcholine-stimulated flow, with a strong statistical relationship between these parameters [98]. In another study, long-term metformin treatment improved the plasma levels of markers of the endothelial function independent of other variables, including the weight, blood glucose level, and insulin dose [96].
Thiazolidinediones
Thiazolidinediones (TZDs) are antidiabetic agents that bind and activate peroxisome proliferator-activated receptor γ (PPARγ), a member of the nuclear receptor superfamily, thereby improving insulin sensitivity. In addition, TZDs have attracted growing interest because of their other biological activities, such as anti-inflammatory, antitumor, and anti-atherosclerotic actions [99].
PPARγ is expressed not only in adipose tissue but also in endothelial cells. Endothelial PPARγ decreases the production of chemokines and adhesion molecules, such as ICAM-1 and VCAM-1, and suppresses the production of the NOX components NOX1, NOX2, and NOX4, leading to the inhibition of ROS generation [100]. Furthermore, PPARγ promotes NO production in the endothelium and abrogates endothelin expression [101].
Sulfonylureas
Sulfonylureas (SUs) have been widely used for the treatment of type 2 diabetes. The effects of SUs on vascular and endothelial cells are inconsistent. Studies have shown that glibenclamide, a second-generation SU, has a pro-arrhythmic effect on reperfusion after an ischemic event in vivo [102] and that SUs may be associated with an increased risk of congestive heart failure [103]. Meanwhile, there are reports suggesting positive effects of SUs on the endothelium. For instance, it has been reported that gliclazide, another second-generation SU, improves the endothelial function in diabetic rabbits [104] and decreases the progression of atherosclerosis in humans [105]. From the standpoint of the molecular mechanism, gliclazide has been suggested to protect endothelial cells from apoptosis by decreasing oxidative stress [106], and glimepiride has been shown to stimulate NO production via PI3-K-dependent pathways in the endothelium, leading to a reduction of NF-κB activation [107].
Medical Nutrition Therapy and Physical Activity
The aim of treatment in diabetes is to maintain optimal levels of blood glucose, lipids, and blood pressure in order to delay or prevent chronic diabetic complications [108]. Patients with diabetes should achieve good control of their blood glucose by following a nutritious meal plan and exercise program, losing excess weight, implementing necessary self-care behaviors, and taking oral medications or insulin therapy. Weight loss through restriction of the daily diet and physical exercise is essential for managing diabetes and preventing vascular complications. When medications are used to control diabetes, they should primarily augment lifestyle improvements.
Calorie restriction and physical exercise are known to improve not only the insulin sensitivity but also endothelial dysfunction. Calorie restriction promotes NO-dependent vasodilation and coincidentally reduces circulating ET-1 levels in patients with insulin resistance [109,110]. In addition, regular physical exercise increases the expression of vascular eNOS via PI3K/Akt-dependent phosphorylation in humans [111]. Taken together, the favorable effects of these lifestyle modifications include increased insulin signaling, enhanced eNOS activity, and reduced inflammatory and oxidative stress, leading to a proper balance between the vasodilator and vasoconstrictor actions of insulin.
Indeed, a meta-analysis performed by Montero pointed out that, in patients with type 2 diabetes, physical exercise greatly increased FMD [112]. Furthermore, another recent meta-analysis revealed that aerobic exercise, as well as combined aerobic and resistance exercise, notably improved the endothelial function in patients with type 2 diabetes, as reflected by an elevated FMD. This observation was independent of changes in cardiometabolic markers, such as the blood pressure, body mass index, and glycemic control [113].
ROCK Inhibitors as Preclinical Agents
The small GTP-binding protein Rho and its downstream Rho-associated coiled-coil-containing protein kinase (Rho-kinase, ROCK) mediate a variety of cellular processes, such as cell contraction, proliferation, and migration. ROCK signaling is activated by many factors, including angiotensin II, glucose, and cytokines, all of which are upregulated under diabetic conditions [114][115][116]. Previous studies have identified ROCK as a key molecule in endothelial dysfunction. For instance, statins have been reported to inhibit the RhoA/ROCK pathway indirectly, acting by reducing the synthesis of isoprenoids. The intravenous administration of pravastatin prevented impaired NO-dependent vasodilation by blocking the activation of RhoA and Rac and the inactivation of Akt/eNOS pathways in vivo [117]. Fasudil, the first ROCK inhibitor approved for clinical use, suppresses the migration of human pulmonary microvascular endothelial cells and the proliferation of pulmonary artery smooth muscle cells caused by ET-1 [118]. Moreover, fasudil has the potential to improve endothelial dysfunction by restoring NO bioavailability in humans with atherosclerosis [119].
ROCK initiates endothelial dysfunction via NF-κB activation. In response to activating signals, IκB kinase (IKK) phosphorylates IκBα, which induces the degradation of IκBα via the ubiquitin system, leads to NF-κB RelA/p65 translocation to the nucleus, and activates the transcription of target genes. ROCK mediates NF-κB signaling through various pathways. Our laboratory showed that ROCK regulates thrombin-mediated p65 phosphorylation and IκBα phosphorylation in endothelial cells [120]. Moreover, we reported that ROCK regulates the nuclear translocation of RelA/p65 in mesangial cells [121]. Recent research from Antoniellis et al. showed that RhoA, an upstream factor of ROCK, controls the translocation of NF-κB (p50) in neutrophils [122]. These studies suggest that ROCK controls the nuclear translocation of multiple NF-κB components and that the mode of NF-κB regulation varies depending on the stimulus and cell type. Taken together, ROCK is a principal determinant of endothelial dysfunction.
ROCK has two isoforms: ROCK1 and ROCK2. While the two isoforms share 65% sequence homology, each plays different roles and has unique pathways of activation. ROCK1 and ROCK2 exert different roles in endothelial dysfunction. For example, ROCK2-deficient endothelium shows reduced expression of chemokines and adhesion molecules through NF-κB [116]. ROCK1 is required in oxidized LDL-induced cell adhesion, while ROCK2 is involved in both endothelial adhesion and apoptosis by regulating adhesion molecules [123]. ROCK2 has been shown to be a pivotal regulator of endothelial inflammation and functions as an essential factor in the development of atherosclerosis. Because ROCK1 and ROCK2 cannot completely compensate for each other's loss, distinct roles have been proposed for each isoform [124]. ROCK2 is expressed in human vascular endothelial cells. Shimada et al. suggested that ROCK2 mediates the production of VCAM-1 and ICAM-1 and induces endothelial inflammation [125]. Furthermore, Shimokawa et al. reported that ROCK2 in vascular smooth muscle cells (VSMCs) leads to the progression of cardiovascular diseases, including pulmonary arterial hypertension [126]. An elegant study from Sawada et al. suggested that the loss of ROCK2 in bone marrow-derived cells decreased lipid accumulation and atherosclerotic lesions in LDL receptor-null mice [127]. Although it had been unclear whether ROCK2 is engaged in modulating monocytic migration and adhesion toward endothelial cells, we recently demonstrated for the first time that ROCK2, but not ROCK1, is involved in the regulation of these processes [116]. These findings underscore the importance of ROCK2's involvement in endothelial dysfunction. Therefore, ROCK2 represents an attractive target for studying critical regulators of endothelial dysfunction.
With regard to macrophage polarity, the importance of Rho/ROCK signaling has gradually been clarified. Recent studies have shown that ROCK1 and ROCK2 have different roles in the regulation of macrophage polarization into the classical pro-inflammatory macrophage type 1 (M1), producing IL-12, and the alternative anti-inflammatory macrophage type 2 (M2), producing TGF-β and IL-10. Although ROCK2 inhibition has been suggested to result in a decreased population of M2 macrophages with upregulation of M1 markers in age-related macular degeneration (AMD) [128], the contribution of ROCK1 and ROCK2 to the conversion of macrophage subtypes in other organs and diseases remains to be elucidated. Further mechanistic analyses will be indispensable for clarifying the role of ROCK in regulating macrophage polarity.
Conclusions and Future Perspectives
Insulin signaling pathways and endothelial cells engage in crosstalk; thus, understanding the correlation between insulin resistance and endothelial dysfunction is essential for treating diabetes-related vascular complications. Insulin resistance and endothelial dysfunction lead to the failure of NO-dependent vasodilatation and of cellular glucose uptake, and to the induction of inflammation in tissues, eventually leading to atherosclerosis. In addition, endothelial NO activates VASP signaling in macrophages, increasing M2 macrophage polarization and exerting an anti-inflammatory function under conditions of insulin resistance. A novel class of anti-hyperglycemic agents is suggested to exert their beneficial effects through this mechanism. SGLT2 inhibitors and GLP-1 agonists may promote browning of WAT by polarizing M2 macrophages and protecting against endothelial dysfunction. A firm understanding of the mechanism underlying each drug's pleiotropic effect will be needed to establish new treatment approaches for endothelial dysfunction. | 2020-07-02T10:19:05.588Z | 2020-06-29T00:00:00.000 | {
"year": 2020,
"sha1": "309c4c0addbf4377d048cc1e49f49fead2d2294f",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2227-9059/8/7/182/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a247e81b2cf7fafca87cebd60cfd4e747afc390f",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
59252590 | pes2o/s2orc | v3-fos-license | Proteomics Analysis Reveals the Implications of Cytoskeleton and Mitochondria in the Response of the Rat Brain to Starvation
Long-term starvation provokes a metabolic response in the brain to adapt to the lack of nutrient intake and to maintain the physiology of this organ. Here, we study the changes in the global proteomic profile of the rat brain after a seven-day period of food deprivation, to further our understanding of the biochemical and cellular mechanisms underlying situations of food deprivation. We have used two-dimensional electrophoresis followed by mass spectrometry (2D-MS) in order to identify proteins differentially expressed during prolonged food deprivation. After the comparison of the protein profiles, 22 brain proteins were found with altered expression. Analysis by peptide mass fingerprinting and MS/MS (matrix-assisted laser desorption-ionization-time of flight mass spectrometer, MALDI-TOF/TOF) enabled the identification of 14 differentially expressed proteins that were divided into 3 categories: (1) energy catabolism and mitochondrial proteins; (2) chaperone proteins; and (3) cytoskeleton, exocytosis, and calcium. Changes in the expression of six proteins, identified by the 2D-MS proteomics procedure, were corroborated by a nanoliquid chromatography-mass spectrometry proteomics procedure (nLC-MS). Our results show that long-term starvation compromises essential functions of the brain related to energy metabolism, synapses, and the transmission of nerve impulses.
Introduction
Long-term starvation, implying the lack of nutrient intake, triggers a complex physiological and biochemical reaction that involves an adaptive response of all organs and tissues, including the integrative systems. This adaptation involves responses of the central and peripheral nervous system together with the response of the endocrine system [1]. Different studies have shown that the aim of this global adaptive response is the conservation of energy or fuels to preserve the availability of cellular ATP levels for the functions of the different tissues. The body is forced to minimize oxidative damage and maintain a metabolic balance to survive the starvation period [1][2][3]. The information reported to date suggests the adaptive response attempts to protect the brain from the effect of nutrient deficiency and, thereby, prevent irreversible brain damage.
During starvation, the expression of various neuronal genes in the hypothalamus is regulated to change their hormonal and metabolic behavior, to assume a state of positive energy equilibrium [4]. Several studies have demonstrated that the main energy fuel consumed by the brain is glucose. During starvation, this substrate must be derived from carbohydrate reserves or from de novo gluconeogenesis from amino acids, glycerol, or lactate [5].

The experiment, conducted in the Animal Production and Experimentation Center of the University of Jaén (Spain), was reviewed and approved by the Ethics Committee of the University of Jaén as well as the Ethics Committee of the Junta de Andalucía (Spain). All procedures were performed in accordance with national and international guidelines for animal experimentation.
For this experiment, 18 male Wistar rats, with an average weight of 390.81 ± 6.23 g, were divided into 2 groups of 9 rats, each with 3 rats per cage (3 cages per group). The rats were maintained under controlled lighting conditions (12 h light/12 h darkness cycle) and temperature (22 °C), and had free access to water and a standard diet (Harlan, Ref. T.2014.12) for 7 days of adaptation. The composition of the diet was: crude protein 14.3%, fat 4%, digestible carbohydrates 48%, fiber 22.1%, and energy 12.1 kJ/g. Then, for one of the groups, called "starvation", food was withdrawn for 7 days. The other group, called "control", was maintained with free access to the standard diet for 7 days. After the rats were killed by cervical dislocation, the brain was immediately removed, put on ice, and then washed with saline solution (NaCl 0.9%, w/v), weighed, and finally frozen in liquid nitrogen until the preparation of the samples.
Protein Extraction for 2-D
For protein extraction, the brains of the 3 rats from the same cage were pooled. Next, 0.2 g of brain tissue from the control and starved rats were homogenized with 2 mL of buffer containing 8 M urea, 2 M thiourea, 4% 3-[(3-cholamidopropyl)dimethylammonio]-1-propanesulfonate hydrate (CHAPS), 2% immobilized pH gradient (IPG) buffer, 20 mM dithiothreitol (DTT), 100 mM HCl-Tris, and 0.75 mM phenylmethylsulfonyl fluoride (PMSF) (pH 8). The homogenate was shaken gently at 4 °C for 1 h. During this time the samples were moderately shaken in a vortex every 15 min. The homogenates were centrifuged at 10,000× g for 15 min at 4 °C. The supernatants were cleaned using the Ready 2-D CleanUp kit (BioRad Laboratories, Hercules, CA, USA) and the resulting samples were used for 2-D and nLC-MS. The protein concentration was measured with the CB-X™ Protein Assay (G-Biosciences, St Louis, MO, USA). Three replicates of the homogenates were prepared per experimental group, each being made up of 3 different rats.
Quantitative Analysis of Gel Images and Statistical Analysis
Gel images of the three replicates per sample were acquired with a BioRad FX Pro Plus densitometer (BioRad Laboratories, Hercules, CA, USA). Spot volumes, normalized by the total volume of all the validly matched spots in a set of gels, were quantified using the PDQuest Advanced software (BioRad Laboratories, Hercules, CA, USA). Only spots showing at least a two-fold over-/under-expression ratio between the two experimental groups were taken into account. One-way analysis of variance (ANOVA) followed by Student's t-test was then used to choose the spots that showed altered expression patterns between the two experimental groups.
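As an illustration of this selection procedure, the short sketch below normalizes spot volumes per gel, computes the between-group fold change, and applies the two-fold threshold together with a t-test. The spot-volume arrays are hypothetical and the code is not tied to the PDQuest output format.

```python
import numpy as np
from scipy import stats

def normalize(gel):
    """Normalize each spot volume by the total volume of all matched spots in that gel."""
    gel = np.asarray(gel, dtype=float)
    return gel / gel.sum()

# Hypothetical spot volumes (rows = replicate gels, columns = spots)
control = np.array([normalize(g) for g in [[1200, 300, 80], [1100, 310, 75], [1300, 290, 90]]])
starved = np.array([normalize(g) for g in [[600, 320, 10], [580, 300, 12], [640, 330, 9]]])

for spot in range(control.shape[1]):
    c, s = control[:, spot], starved[:, spot]
    fold = s.mean() / c.mean()
    t, p = stats.ttest_ind(c, s)
    # Keep spots with at least a two-fold over-/under-expression ratio and p < 0.05
    if (fold >= 2 or fold <= 0.5) and p < 0.05:
        print(f"spot {spot}: fold change {fold:.2f}, p = {p:.3g} -> differentially expressed")
```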
Protein Digestion and MS Analysis
Differentially expressed spots were automatically excised with an Investigator ProPic (Genomic Solutions, Cambridgeshire, UK). The gel pieces were digested with trypsin using a ProPrep II Automated Protein Digestion instrument (Genomic Solutions, Investigator Digilab, Cambridgeshire, UK) according to the following process: two 30-min destaining steps with 50% acetonitrile (ACN)/100 mM ammonium bicarbonate, two 15-min washes with 25 mM ammonium bicarbonate/50% ACN, dehydration for 5 min with 100% ACN, followed by drying. The sample was subsequently hydrated with 10 µL of 12.5 ng µL−1 trypsin in ammonium bicarbonate for 45 min at 4 °C before being finally digested in a microwave oven in two 5-min steps. Digestion was stopped by adding 1 µL of 10% trifluoroacetic acid (TFA). The resulting peptides were purified in a Pro MS device (Genomic Solutions, Cambridgeshire, UK) with a C18 microcolumn (ZipTip, Millipore, Billerica, MA, USA), and eluted with a matrix solution of 5 mg mL−1 α-cyano-4-hydroxycinnamic acid dissolved in 70% ACN/0.1% TFA. One-µL aliquots of the eluted samples were directly spotted onto MALDI plates.
Mass analysis (MS) of the peptides in each sample was undertaken with a MALDI-TOF/TOF mass spectrometer (4800 Proteomics Analyzer, Applied Biosystems, Carlsbad, CA, USA), in the m/z range of 800 to 4000 and with an accelerating voltage of 20 kV. Spectra were internally calibrated with peptides from trypsin autolysis ([M+H]+ = 842.509 and [M+H]+ = 2211.104). The most abundant peptide ions were then subjected to fragmentation analysis (MS/MS) to provide information for use in determining the peptide sequence.
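The internal calibration against the trypsin autolysis peaks can be pictured as a simple two-point linear recalibration of the measured m/z axis. The sketch below uses the two autolysis masses quoted above; the measured values and analyte peaks are hypothetical.

```python
import numpy as np

# Known [M+H]+ values of trypsin autolysis peptides used as internal calibrants
true_mz = np.array([842.509, 2211.104])
# Hypothetical measured m/z of the same peaks in an uncalibrated spectrum
measured_mz = np.array([842.545, 2211.190])

# Two-point linear calibration: true = a * measured + b
a, b = np.polyfit(measured_mz, true_mz, 1)

def recalibrate(mz):
    """Apply the linear internal calibration to any measured m/z value."""
    return a * np.asarray(mz) + b

peaks = np.array([1045.62, 1479.83, 2383.95])  # hypothetical analyte peaks
print(recalibrate(peaks))
```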
Database Searching
Proteins were identified by peptide mass fingerprinting, which was confirmed by MS/MS analysis. The Mascot 2.0 search engine (Matrix Science Ltd., London, UK) was used for protein identification, running on the GPS Explorer TM software v3.5 (Applied Biosystems, Carlsbad, CA, USA) to search in the National Center for Biotechnology Information (NCBI) protein database (updated monthly).
The search settings allowed one missed cleavage with trypsin as the selected enzyme, an MS/MS fragment tolerance of 0.2 Da, a precursor mass tolerance of 100 ppm, and cysteine carbamidomethylation and methionine oxidation as possible modifications. Proteins showing statistically significant (p < 0.05) changes in their expression were assigned positive identification after taking molecular-mass (Mr) and isoelectric-point (pI) values into consideration.
To classify the differentially expressed proteins and to determine their protein-protein interactions, the STRING v10.0 network tool (Protein-Protein Interaction Networks: http://string-db.org/cgi/input.pl) was used. The default settings of the software were used to obtain the protein associations.
Sample Preparation
Protein extracts were cleaned up by 1D SDS-PAGE on a 10% polyacrylamide gel. Samples were loaded into the stacking gel and 100 V was applied until the electrophoresis front reached the resolving gel. After the protein extracts had migrated 1 cm into the resolving gel, the electrophoresis was stopped and the gel was stained with Coomassie Blue. Protein bands were excised, diced, and kept in water until digestion.
Protein Digestion
Briefly, protein bands were first destained in 200 mM ammonium bicarbonate (AB)/50% acetonitrile for 15 min and in 100% acetonitrile for 5 min. Protein was reduced by the addition of 20 mM dithiothreitol in 25 mM AB and incubated for 20 min at 55 °C. The mixture was cooled to room temperature, followed by alkylation of free thiols by the addition of 40 mM iodoacetamide in 25 mM AB in the dark for 20 min. Afterwards, protein bands were washed twice in 25 mM AB. Proteolytic digestion was performed by adding trypsin (Promega, Madison, WI, USA) at 12.5 ng/µL in 25 mM AB and incubating at 37 °C overnight. Protein digestion was stopped by adding trifluoroacetic acid at a 1% final concentration, and the samples were dried in a SpeedVac.
nLC-MS2 Analysis
Briefly, four technical replicates per sample were performed to validate the changes detected in the 2D differential expression analysis. Nano-LC was performed on a Dionex Ultimate 3000 nano UPLC (Thermo Scientific) with a C18, 75 µm × 50 cm, Acclaim PepMap column (Thermo Scientific). Previously, the peptide mix was loaded onto a 300 µm × 5 mm Acclaim PepMap precolumn (Thermo Scientific) in 2% acetonitrile/0.05% TFA for 5 min at 5 µL/min. Peptide separation was performed at 40 °C for all runs. Mobile phase buffer A was composed of water plus 0.1% formic acid. Mobile phase B was composed of 80% acetonitrile plus 0.1% formic acid. Samples were separated at 300 nL/min. Mobile phase B was increased from 4% to 45% over 60 min and from 45% to 90% over 1 min, followed by a 5-min wash at 90% B and a 15-min re-equilibration at 4% B. The total time of the chromatography was 85 min.
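The gradient program can be read as a piecewise-linear %B profile over time. The sketch below encodes the segments stated above (any initial hold before the gradient is not specified here and is omitted); it is only a convenience for visualizing or checking the method.

```python
# Gradient segments as (duration in min, %B at start, %B at end)
segments = [
    (60, 4, 45),   # 4% -> 45% B over 60 min
    (1, 45, 90),   # 45% -> 90% B over 1 min
    (5, 90, 90),   # wash at 90% B
    (15, 4, 4),    # re-equilibration at 4% B (step down to 4% at the segment start)
]

def percent_b(t):
    """Return %B at time t (minutes) for the piecewise-linear gradient."""
    start = 0.0
    for duration, b0, b1 in segments:
        if t <= start + duration:
            return b0 + (b1 - b0) * (t - start) / duration
        start += duration
    return segments[-1][2]

print(percent_b(30))                                  # %B in the middle of the main gradient
print(sum(d for d, _, _ in segments), "min of programmed gradient segments")
```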
After elution, the peptide cations were converted to gas-phase ions by nano-electrospray ionization and analyzed on a Thermo Orbitrap Fusion (Q-OT-qIT, Thermo Scientific). The mass spectrometer was operated in the positive mode. Survey scans of peptide precursors from 400 to 1500 m/z were performed at 120K resolution (at 200 m/z) with a 5 × 10^5 ion count target. Tandem MS was performed by isolation at 1 Th with the quadrupole, collision-induced dissociation (CID) fragmentation with a normalized collision energy of 35, and rapid scan MS analysis in the ion trap. The automatic gain control (AGC) ion count target was set to 10^2 and the max injection time was 75 ms. Only precursors with charge state 2-5 were sampled for a second, tandem mass analysis (MS2). The dynamic exclusion duration was set to 15 s with a 10-ppm tolerance around the selected precursor and its isotopes. Monoisotopic precursor selection was turned on. The instrument was run in Top Speed mode with 3-s cycles, meaning that the instrument would continuously perform MS2 events until the list of non-excluded precursors diminished to zero or 3 s elapsed, whichever was shorter.
Data Analysis
The raw data were processed using Proteome Discoverer (version 2.1.0.81, Thermo Scientific). MS2 spectra were searched with SEQUEST HT engine against a database of Uniprot_Musmusculus_Dic2016 (www.uniprot.org). Peptides were generated from a tryptic digestion with up to one missed cleavage, carbamidomethylation of cysteines as fixed modifications, and oxidation of methionines as variable modifications. Precursor mass tolerance was 10 ppm and product ions were searched for at 0 Da tolerances. Peptide spectral matches (PSM) were validated using percolator based on q-values at a 1% false discovery rate (FDR). With Proteome Discoverer, peptide identifications were grouped into proteins according to the law of parsimony and filtered to 1% FDR.
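The q-value-based filtering at 1% FDR can be expressed compactly. The sketch below assumes a list of PSM records with precomputed q-values (as produced by Percolator) and a crude protein grouping step; it is not the Proteome Discoverer implementation.

```python
# Hypothetical PSM records: (peptide sequence, protein accession, q-value)
psms = [
    ("LVNELTEFAK",   "P02769", 0.002),
    ("YLYEIARR",     "P02769", 0.015),
    ("GDFEEIPEEYLQ", "P00734", 0.008),
]

FDR_THRESHOLD = 0.01  # 1% FDR

# Keep only PSMs whose q-value is at or below the threshold
accepted = [p for p in psms if p[2] <= FDR_THRESHOLD]

# Group accepted peptides by protein (a crude stand-in for parsimony-based grouping)
proteins = {}
for peptide, accession, q in accepted:
    proteins.setdefault(accession, []).append(peptide)

print(proteins)
```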
Quantitative First-Mass MS1 Data Analysis in Skyline Software
First-mass analysis (MS1) chromatogram-based quantitation was conducted in Skyline 3.6.0.10162 (24), an open-source software project (http://proteome.gs.washington.edu/software/skyline), as recently described in detail for MS1 filtering [12]. First, comprehensive spectral libraries were generated in Skyline from the Proteome Discoverer 2.1 database searches of the raw data files, prior to MS1 filtering. Second, all raw files acquired in data-dependent acquisition (DDA) mode were directly imported into Skyline, and MS1 precursor ions were extracted for all peptides present in the MS/MS spectral libraries. Quantitative MS1 analysis was based on extracted ion chromatograms (XICs) and was made for the top three resulting precursor ion peak areas (e.g., M, M+1, and M+2). Final quantitative comparisons were typically based on only the highest ranked precursor ion. After data import, graphical displays of chromatographic traces (extracted ion chromatograms) were manually inspected for proper peak picking of MS1 filtered peptides. In some cases, the peak integration was adjusted manually in the chromatographic window. The peptide areas of each protein were added together in order to calculate protein abundances in all samples. Fold changes and statistical analysis were calculated with MSstats, an R package integrated with the Skyline software [13]. The statistical significance of each protein ratio was indicated by its adjusted p-value (<0.1).
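The protein-level quantification step (summing peptide precursor areas per protein and comparing conditions) can be sketched as follows. The peak areas and peptide sequences are hypothetical, and MSstats fits a more complete statistical model than this simple ratio.

```python
import math

# Hypothetical MS1 peak areas per peptide (sums of the top-three isotope peaks: M, M+1, M+2),
# keyed by (protein, peptide) and condition; each list holds one value per replicate
areas = {
    ("GFAP", "PEPTIDEONE"): {"control": [5.2e7, 4.9e7], "starved": [2.1e7, 2.4e7]},
    ("GFAP", "PEPTIDETWO"): {"control": [3.1e7, 3.3e7], "starved": [1.4e7, 1.2e7]},
}

def protein_abundance(protein, condition):
    """Sum the peptide areas of a protein in each replicate, then average across replicates."""
    replicate_sums = None
    for (prot, _pep), cond_areas in areas.items():
        if prot != protein:
            continue
        vals = cond_areas[condition]
        replicate_sums = vals if replicate_sums is None else [a + b for a, b in zip(replicate_sums, vals)]
    return sum(replicate_sums) / len(replicate_sums)

ctrl = protein_abundance("GFAP", "control")
stv = protein_abundance("GFAP", "starved")
print(f"GFAP log2 fold change (starved/control): {math.log2(stv / ctrl):.2f}")
```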
Results
Brain protein extracts from control and starved rats were analyzed by 2-D. For each of the situations, three gels were made, for a total of six gels. Figure 1 shows the master gel in which the spots detected in the six gels were considered. Approximately 346 spots were detected in the master gel and the number of spots detected on the other gels ranged from 300 to 308. The proteins detected had a molecular weight between 6.5 and 200 kDa, while the isoelectric point (pI) of the proteins ranged from 3.2 to 9.4. Of all the detected proteins, those that were differentially expressed in the two conditions were selected. For the choice of proteins, several criteria were considered: there was a two-fold difference in protein abundance in one condition vs. the other, and this change was repeated in the three replicates analyzed from each group of rats. The quantitative analysis of the gels resulted in 22 spots with significant differential expression, as shown in Figure 1.
The results for the MS and MS/MS analyses, after a combined search in the databases, are presented in Table 1. Protein identification was based on the homology of three peptides by searching for these in the databases, limiting the search to mammals. Most of the homologies found had a good molecular weight search (MOWSE) score and high sequence (homology) coverage. The scoring criterion is given by the statistical significance that the MASCOT search engine calculates for each identification. A total of 14 spots were identified from protein or cDNA sequences described in mammalian species, such as Rattus norvegicus, Mus musculus, Oryctolagus cuniculus, Monodelphis domestica, and Heterocephalus glaber. Good homology results were found for 64% of the 22 spots analyzed, while the peptide footprints of the remaining eight spots did not meet the criteria established for positive identification. The spots identified corresponded to 13 different proteins. In the case of glyceraldehyde-3-phosphate dehydrogenase (GAPDH), two spots were identified as two different variants of the same protein. The identified variants were located in the horizontal direction of the gel and, therefore, their presence could be due to a charge change caused by post-translational modifications.
The functions of these proteins, taken from the UniProt database (http://www.uniprot.org/), are shown in Table 2.
Figure 2. Enlarged images of 2-D gels that show the differential expression of the 14 proteins identified in the rat brain. The arrows point to the spots and the squares indicate the absence of spots. Numbers in bold indicate that the spot is present in greater quantity than in the other condition.
The proteins identified were grouped into three categories depending on their function or participation in metabolic pathways (Figure 2): (1) Energy catabolism and mitochondrial proteins, including the α subunit of ATP synthase (ATP5A1), the β subunit of ATP synthase (ATP5B), subunit 1 of the cytochrome b-c1 complex (UQCRC1), the 75 kDa subunit of NADH-ubiquinone oxidoreductase (NDUFS1), the MIC60 subunit of the mitochondrial contact site and cristae organizing system (MICOS) (internal mitochondrial membrane protein) (IMMT), and glyceraldehyde-3-phosphate dehydrogenase (GAPDH); (2) Chaperone proteins, including the 78 kDa glucose-regulated protein (HSPA5) and calreticulin (CALR); (3) Cytoskeleton, exocytosis, and calcium, such as the α-chain of non-erythrocytic spectrin 1 (SPTAN1), glial fibrillary acidic protein (GFAP), microtubule-associated protein 1S (MAP1S), secernin-1 (SCRN1), and melanoma-associated antigen 11 (MAGEA11). Figure 2 shows the proteins identified and grouped into the different categories with differential expression. Except for some cases, there was a general fall in the expression levels of brain proteins after the starvation period. In Group 1, the long-term food deprivation caused a decrease of more than two-fold in the expression of ATP5B, UQCRC1, and NDUFS1. On the other hand, ATP5A1 changed from not being expressed in the control situation to an expression of 694.63 units of optical density (O.D.) after the starvation period. In the case of IMMT, food deprivation caused the complete disappearance of protein expression, going from 274.85 to 0. Two isoforms of GAPDH were also shown; isoform 8319 went from having no expression in the control situation to an increase to 5290.57 in the starvation situation, while isoform 8317 went from an expression of 3007.11 in the control situation to having no expression after the starvation period. Here we see a clear substitution of isoform 8317 by 8319 of GAPDH, caused by the experimental situation. Within Group 2, HSPA5 decreased its expression during starvation, and CALR also reduced its expression by more than two-fold after starvation. The most striking changes occurred in the proteins of Group 3, where none of the proteins were expressed in the starvation situation. Figure 3 shows a classification according to the protein-protein interactions of the brain proteins differentially expressed after the starvation period. This is the classification provided by the STRING software, considering only the first 5 categories. Within the clusters (biological processes, molecular functions, cellular components, and KEGG (Kyoto Encyclopedia of Genes and Genomes) routes), we found proteins that were included in several groups simultaneously. MAP1S is also included in this classification, although this protein does not interact with any of the other proteins. Of special interest was the classification of "KEGG routes", where the proteins ATP5A1, ATP5B, NDUFS1, and UQCRC1 participate in both oxidative phosphorylation and diseases such as Parkinson's, Alzheimer's, and Huntington's.
Figure 3. Rat-brain protein-protein interaction network of proteins differentially expressed after seven days of starvation. These data were obtained using the STRING program, adjusting the classification to the first five categories.
The 2D-MS data were validated by quantifying the fold change detected in selected proteins using an nLC-MS proteomics procedure. As shown in Figure 4, the protein-expression levels of NDUFS1, IMMT, GRP78, CALR, SPTAN1, and GFAP fell after long-term starvation. The resulting protein-expression patterns were consistent with those found by 2D-MS analysis, confirming the results reached with this procedure. In all cases, the differences between control and starvation were significant.
Discussion
In this work, we studied the differential expression of rat brain proteins after long-term starvation. These proteins were separated by 2-D electrophoresis and identified by MALDI-TOF/TOF. With the help of specialized software, the differentially expressed proteins were classified according to their cellular function and grouped according to the networks of interactions between them. With these results, we gain overall knowledge of the main changes in brain protein expression that occur in response to starvation.
The metabolic adaptation that the brain undergoes during starvation leads to changes in protein expression. The results of this study show the existence of 14 proteins differentially expressed and classified into three groups: (1) energy catabolism and mitochondrial proteins, (2) chaperone proteins, and (3) cytoskeleton, exocytosis, and calcium.
The lack of nutrient intake provokes mitochondrial dysfunction, involving several proteins that show diminished expression: the ATP synthase α subunit (ATP5A1), the ATP synthase β subunit (ATP5B), the cytochrome b-c1 subunit 1 (UQCRC1), the 75-kDa subunit of NADH-ubiquinone oxidoreductase (NDUFS1), and the internal mitochondrial membrane protein (IMMT). ATP5A1, ATP5B, UQCRC1, and NDUFS1 form part of the mitochondrial respiratory chain. ATP5A1 and ATP5B are part of the ATP synthase, which is responsible for ATP production [14,15]; UQCRC1 is a component of the ubiquinol-cytochrome c reductase complex (Complex III) [16], while NDUFS1 is the core subunit of NADH dehydrogenase (Complex I) [17]. The protein IMMT maintains the internal architecture of the membrane and the mitochondrial cristae, in addition to forming sites of contact with the outer mitochondrial membrane, so that the function of this protein is to maintain the integrity of the mitochondria [18,19]. Energy catabolism in the brain is also affected, as demonstrated by the decreasing expression of glyceraldehyde-3-phosphate dehydrogenase (GAPDH) under starvation conditions. GAPDH is a key enzyme in glycolysis that catalyzes the conversion of glyceraldehyde-3-phosphate to 1,3-bisphosphoglycerate [20].
Some chaperones, such as the 78 kDa glucose-regulated protein (HSPA5) and calreticulin (CALR), are also affected by the starvation period, showing decreased expression. HSPA5 is a chaperone that facilitates the assembly of multimeric protein complexes in the endoplasmic reticulum [21], and CALR is a calcium-binding chaperone that promotes protein folding, oligomeric assembly, and the quality control of proteins in the endoplasmic reticulum through the calreticulin/calnexin cycle [22,23]. The CALR and HSPA5 chaperones are related to the activation of the stress response of the endoplasmic reticulum. At the beginning of the response, chaperones seek to restore homeostasis in the endoplasmic reticulum, but if they fail to achieve this balance, cell death is triggered [21,23].
The expression of some cytoskeleton-associated proteins also declines or disappears, such as the non-erythrocytic spectrin 1 α chain (SPTAN1), the glial fibrillary acidic protein (GFAP), and the microtubule-associated 1S protein (MAP1S). SPTAN1 interacts with calmodulin in a calcium-dependent way and may participate in the calcium-dependent movement of the cytoskeleton to the membrane [24], GFAP is a cytoskeleton-specific protein of astrocytes [25] and MAP1S modulates the function of microtubules and promotes their stability in mammalian cells [26]. Secernin-1 (SCRN1) and melanoma-associated antigen 11 (MAGEA11), as well as the above-mentioned proteins, show no levels of expression during the starvation period. SCRN1 regulates exocytosis in mast cells [27] and MAGEA11 acts as a co-regulator of the androgen receptor, stimulating its activity [28].
These expression changes, in the larger context of cellular functions, provide key information concerning events in the brain during starvation. The gradual decline observed in the expression of proteins involved in mitochondrial energy catabolism may be related to reduced use due to the lack of metabolic fuels, implying a lower respiratory rate and a lower demand for these proteins. This even affects proteins involved in the maintenance of membrane structure. Some of these proteins are also involved in neurodegenerative diseases as widespread as Parkinson's, Alzheimer's, or Huntington's disease, implying the alteration of oxidative phosphorylation and mitochondrial structure in these diseases.
In specific cases, this change of expression can be explained individually. The two isoforms of GAPDH are interchanged in response to starvation. The 8317 isoform, expressed in the control, is replaced by 8319, expressed during starvation. This change may occur because GAPDH, in addition to its role in glycolysis, is a multifunctional protein involved in oxidative and nitrosative stress in the brain and has been related to neurological disorders [29].
Moreover, all proteins related to the cytoskeleton diminish their expression until disappearing after starvation. SPTAN1 and MAP1S regulate assembly in the cytoskeleton and play an important role in the neuronal synapse [23,[30][31][32]]. The lack of expression of these proteins suggests that synapses malfunction in the brain. The decrease in the expression of proteins involved in the cytoskeleton, exocytosis, and calcium binding implies the existence of an alteration in vesicle transport, which is essential for the transport of neurotransmitters and the transmission of nerve impulses, both within the neuron and across the synapse. GFAP is an astrocyte-specific cytoskeletal protein whose expression increases during reactive astrogliosis, which occurs in response to damage to the central nervous system [25,33]. According to our results, GFAP is not expressed during starvation, indicating a fall in astrocyte production in this situation.
Long-term food deprivation causes changes at the cellular and intracellular levels. In this work, we found that proteins associated with key cellular functions are affected. Mitochondria and cytoskeleton are the main cellular constituents that showed the greatest functional changes in response to starvation. Mitochondria prove critical in several metabolic functions such as lipid metabolism and energy production. This organelle is essential for the aerobic production of ATP, β-oxidation of fatty acids, ketogenesis, and gluconeogenesis from pyruvate and Krebs cycle intermediates. Severe malnutrition can cause dysfunction in mitochondria, which in turn provokes liver disorders due to oxidative stress and hepatic steatosis [34]. Changes in mitochondrial dynamics have also been linked to neurodegenerative diseases. When mitochondria malfunction, processes such as oxidative phosphorylation and calcium regulation falter, causing problems in the neuronal synapses [35]. Moreover, different works have related starvation, caloric restriction, and perturbations in mitochondrial function with lifespan and ageing [26,36].
In addition, the cytoskeleton and its components are vital for transport in different biological processes, and these pathways hydrolyze ATP to provide mechanical energy. Cells use the elements of the cytoskeleton to transport small molecules, macromolecules, and organelles to the site where they will serve their biological function. This transport is fundamental in the polarization, extension, shape, and neurotransmission of the neurons. The elements of the cytoskeleton, such as myosin, kinesin, and dynein, work together to maintain the actin and microtubule filament system that ensures the appropriate transport and proper structural basis in the cells. Dysfunction of cytoskeletal proteins upsets the transport and harms cellular morphology, which is related to different maladies such as neurodegenerative diseases [35,37].
In our experiment, we did not perform specific analyses to correlate the changes in protein abundance with synaptic or motor malfunction, which could be reflected in changes in movement capacity or in abnormal behavior. Nevertheless, it is well known that food restriction induces hyperactivity in rats and other rodents [38]. In a rat model of anorexia nervosa, a reduced availability of food stimulates running in a wheel, and excessive running induces self-starvation. In this vicious circle, rats actually starve and run themselves to death [38]. These changes in behavior can be the result of a brain malfunction also induced by starvation in the rat.
In some tissues or cell types, an important adaptation to starvation is autophagy, which can be initiated by nutrient and amino acid deprivation, among other signals [39,40]. It can be a global or selective process that can affect specific proteins [41]. The autophagosomes generated in this process can engulf cytoplasmic material and/or organelles, such as mitochondria, and break them down so that amino acids and other molecules can be re-used as sources of energy and nutrients [42]. It is not clear what occurs in the brain in response to starvation [43], although it has been described that autophagy can protect neurons from beta amyloid-induced cytotoxicity [44]. In our results we have not found changes in proteins involved in the regulation of autophagy in response to seven days of starvation in the rat brain, implying that this food deprivation situation does not produce detectable autophagy in the brain. Nevertheless, the decrease in the abundance of mitochondrial and cytoskeleton proteins can be related to a stimulation of autophagy or another protein breakdown mechanism during starvation. In conclusion, the main changes induced by long-term starvation in the brain proteome of the rat affect energy metabolism, chaperone proteins, and the cytoskeleton. These changes can result in the alteration of ATP production and of the vesicle dynamics fundamental for the maintenance of the function of this organ. | 2019-01-26T14:02:48.729Z | 2019-01-22T00:00:00.000 | {
"year": 2019,
"sha1": "d4745688ffbd4e64af5e07871e364ab85b11ba56",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6643/11/2/219/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d4745688ffbd4e64af5e07871e364ab85b11ba56",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
3730418 | pes2o/s2orc | v3-fos-license | The Use of Size Exclusion Chromatography to Monitor Protein Self-Assembly
High resolution size exclusion chromatography (SEC) coupled with static light scattering (SLS) analyses were conducted to study the effect of the mobile phase ionic strength and protein concentration on the output of SEC experiments. The results highlight the effect of small changes in the mobile phase composition on molar masses estimated from a retention time-based calibration curve compared with those obtained from SLS analysis. By comparing the SLS data with the SEC chromatograms, we show that SEC can provide helpful information on the protein aggregation state as macromolecules approach known precipitation points in their phase diagrams. This suggests the potential use of SEC as an easily accessible lab-based scanning methodology to monitor protein self-assembly prior to nucleation and crystallization. Implications for the use of SEC to study protein phase diagrams are discussed.
Introduction
There is no structural biology laboratory that can be functional without a chromatography system which is used for the purposes of purifying macromolecules [1][2][3][4]. One of the major purification methods used is size exclusion chromatography (SEC), which is based on the shape and size (hydrodynamic radius; Rh) of the eluted macromolecules, such that during an SEC experiment larger macromolecules (larger hydrodynamic radius) are eluted faster than the small ones, which are retarded within the stationary hydrophilic resins [5]. Common wisdom is to calibrate gel filtration columns using a standard mobile phase (such as phosphate buffered saline) and several standard proteins of known molar masses in order to create a calibration diagram (log molar mass vs. elution time/volume) [6,7]. These diagrams are then used to retrieve information about the macromolecules under investigation, including estimates of their molar masses, degree of oligomerization, and stability [8]. However, many factors other than the Rh and molar mass of macromolecules can affect the elution time. Solution pH and the ionic strength of the mobile phase can greatly affect the time at which macromolecules are eluted [9][10][11][12][13]. While it has long been known that solution properties can influence the elution time of a macromolecule under SEC, it unfortunately remains commonplace not to account for these solution properties before consulting a calibration curve, with many laboratories producing a single calibration curve that is applied to all subsequent buffers used. This can result in errors in estimated molar masses that may lead to the incorrect assignment of oligomeric state or other macromolecular solution properties [14,15]. This can be overcome through the use of static light scattering (SLS) measurements in-line with SEC experiments, which can provide a direct measurement of solution molecular mass without recourse to a calibration curve [16]. In addition to the molar mass, SLS analysis also provides important information on the behavior of a macromolecule in solution. As macromolecules typically have well predictable molecular weights, deviations from quanta of these molecular weights can be interpreted as information on the aggregation state of the molecule [17]. Previous researchers have used static light scattering (SLS) measurements to assess deviations from a known molecular mass and have interpreted the results as an estimation of the second virial coefficient (B2 or A22), which is a concentration-independent term in the Taylor expansion approximation of the universal gas equation as applied to molecules in solution (Equation (1)) [18]: PV = nRT + c[A22] (1) where the pressure (P) and volume (V) of a solution are related to the number density of molecules (n), the universal gas constant (R), the thermodynamic temperature (T), the macromolecule concentration (c), and the second virial coefficient (A22). An analogous virial expansion can be written for the osmotic pressure of a protein solution (Equation (2)): Π = RT (cp/Mw + B22 cp² + . . .) (2) where Π is the osmotic pressure, cp the protein concentration (in mass units), R the gas constant, T the absolute temperature, and Mw the protein molecular weight.
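A calibration diagram of this kind is typically a straight-line fit of log molar mass against elution volume, which is then inverted to estimate the mass of an unknown. The sketch below illustrates the procedure with hypothetical standards; it is not data from this study.

```python
import numpy as np

# Hypothetical calibration standards: (molar mass in kDa, elution volume in mL)
standards = [(670, 1.9), (158, 2.5), (44, 3.0), (17, 3.4), (1.35, 4.0)]
masses = np.array([m for m, _ in standards])
volumes = np.array([v for _, v in standards])

# Linear fit of log10(molar mass) vs. elution volume
slope, intercept = np.polyfit(volumes, np.log10(masses), 1)

def estimate_mass(elution_volume):
    """Estimate molar mass (kDa) from elution volume using the calibration curve."""
    return 10 ** (slope * elution_volume + intercept)

print(f"Estimated molar mass at 3.2 mL: {estimate_mass(3.2):.1f} kDa")
```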
A key implication of this is that conditions that promote macromolecular self-assembly may be the conditions that also support nucleation, the initial self-assembly of macromolecules that is essential for the growth of protein crystals. This has been previously demonstrated using dynamic and static light scattering (DLS and SLS) data that show crystallization of macromolecules occurs within a narrow (slightly negative) range of A22 values corresponding to a weakly attractive solution regime [20], although it was also shown later that mildly positive values of A22 can also be conducive to crystallization [21].
Here, we report our study of the effect of the mobile phase ionic strength and sample concentration on the output of SEC analyses. We observed that the use of SEC alone can provide information on the protein aggregation state as molecules approached known precipitation points in their phase diagrams, which is confirmed by the associated in-line SLS measurements. Although chromatographic analyses are nowadays implemented into different characterization methodologies [22][23][24][25][26] and at different beamline facilities [27][28][29][30][31], facilitating the determination of precise information for protein samples at the preparatory stage would certainly improve the efficiency of the results obtained through subsequent characterization methodologies.
Our experiments serve as a proof-of-principle and theoretical background for the potential use of SEC as an easily accessible lab-based scanning methodology to monitor protein precipitation or nucleation prior to crystallization. These experiments also serve as a reminder that solution conditions can play a major role in molecular mass estimates from SEC experiments using calibration curves.
Results and Discussion
Size exclusion chromatograms obtained were initially characterized by the peak elution point of sample and the presence of any detectable aggregation in the sample. As shown in Figure 1, laser light scattering signal from the SLS measurements is more sensitive to the presence of these aggregates than the absorbance signal at 280 nm of the NGC. Since the two proteins used in this study have well-defined molecular weights and the measured molecular masses are volume averaged, we have interpreted deviations in the measured molecular masses to represent the degree of assembly in the mobile phase under examination (i.e., the A 22 value of Equation (1) or B 22 value of Equation (2)). Such an interpretation is in line with previous DLS and SLS based studies on A 22 values [19].
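One common way to turn such concentration-dependent mass deviations into a second virial coefficient is the standard Debye relation for static light scattering, Kc/Rθ = 1/Mw + 2A2c, fitted over a series of protein concentrations. The sketch below shows this generic fit with hypothetical data; it is not the analysis pipeline used in this study.

```python
import numpy as np

# Hypothetical SLS data: protein concentration c (g/mL) and the optical quantity K*c/R_theta (mol/g)
c = np.array([1.0e-3, 2.0e-3, 4.0e-3, 8.0e-3])
kc_over_r = np.array([7.1e-5, 7.3e-5, 7.7e-5, 8.5e-5])

# Debye relation: K*c/R_theta = 1/Mw + 2*A2*c  ->  linear in c
slope, intercept = np.polyfit(c, kc_over_r, 1)

mw = 1.0 / intercept   # weight-averaged molar mass (g/mol)
a2 = slope / 2.0       # second virial coefficient (mol*mL/g^2 when c is in g/mL)
print(f"Mw ~ {mw:.0f} g/mol, A2 ~ {a2:.2e}")
```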
As shown below, the molar masses as measured by SLS correlate well with the elution position of the sample. For example, Figure 1 demonstrates that under high salt (2 M NaCl in 0.1 M NaOAc pH 4.5) the protein hen egg-white lysozyme (HEWL) forms large aggregates (eluted at ~1.8 mL) and a major monomeric fraction (eluted at ~3.4 mL). Under low salt conditions (0 M NaCl in 0.1 M NaOAc pH 4.5), no aggregation of HEWL can be seen, but the sample elutes at ~3 mL.
To further examine this behavior, we screened the concentration of NaCl in the mobile phase during SEC to determine the effects on the elution time/volume as shifts in the position of the non-aggregated HEWL absorbance peaks on the chromatograms (i.e., neglecting the peak position of the highly-aggregated sample). As displayed in Figure 2, these shifts were associated with changes in the SLS measured molar masses and sample polydispersity. Below, we detail our analysis, demonstrating features of protein behavior that have been well documented (salting-in/out), and demonstrating that SEC can be used to identify changes in protein aggregation state that (to our understanding) have not been previously demonstrated using SEC. We could find comparable studies using small angle X-ray and neutron scattering [32] and DLS [17] that support our results.
Figure 1. Light scattering curves with the corresponding calculated molar masses obtained for HEWL eluted in 0.1 M NaOAc buffer containing 0 M (red) and 2 M (blue) NaCl. At 2 M NaCl, HEWL displayed a significant delay in its elution from the column and aggregates were first eluted and were shown by the light scattering signal (solid line) as a small peak (eluted around 1.8 mL) preceding the main peak (eluted around 3.4 mL). The absorbance curves (dashed) demonstrate the same elution volume behavior and are very much aligned with the LS curve of HEWL eluted with 0 M NaCl. This is not the case for HEWL eluted with 2 M NaCl, which does not show any information about higher molecular weight protein aggregation.
Effects of Ionic Strength on Elution (Salting in)
As shown in Figure 2, the addition of low amounts of NaCl to the mobile phase triggered HEWL to elute faster than it did when the mobile phase was NaCl-free. The measured molar mass was also lower, as displayed in Figure 3 at NaCl ≤ 0.2 M. This clearly demonstrates the known salting-in effect by which proteins become more soluble (less prone to aggregation/self-assembly) and thus are eluted faster from the size exclusion column [33,34]. The volume-averaged masses measured by SLS confirm that this is accompanied by a lowering of the tendency of the molecules to self-assemble (i.e., large negative values of A22). A NaCl concentration of 0.4 M provides a mobile phase in which 1 mg/mL HEWL shows the lowest degree of self-association (i.e., high negative values of A22), whereas 0 M NaCl shows a significantly higher degree of self-association (i.e., large positive values of A22). As can be observed in Figure 2, at increasing NaCl concentrations (>0.4 M for 1 mg/mL HEWL and 0.2 M for 2 mg/mL HEWL), the monomer peaks again begin to shift further to the right of the chromatograms, displaying increasing retention times. This observation has been reported earlier to be due to the relatively high content of charged ions in the aqueous medium, which are expected to compete with the bound proteins for the charged resin [35]. This should decrease the proteins' electrostatic interactions with the resin and promote their intrinsic hydrophobic interactions. Therefore, the protein becomes structurally disconnected from the aqueous mobile phase and is retained longer in the column. Here, we have neglected the change in the viscosity of the mobile phase in this analysis as, within the range of concentrations used, it should be very minor compared to the solubility effect [36].
Despite the increasingly delayed elution at higher NaCl concentrations, the results correlate well with the 'salting out' effect that is shown in Figure 3 as a general tendency for an increase in measured molecular mass accompanied by an increase in the sample polydispersity, indicative of a broadening of the population assembly distribution. We also observed that at certain NaCl concentration ranges of the mobile phase (0.6-1.1 M NaCl for 1 mg/mL HEWL and 0.4-0.7 M NaCl for 2 mg/mL HEWL, shown in Figures 2 and 3, respectively), there were peak shifts to the left (faster elution) that also correlated with an increase in the molar masses as determined using SLS. These reversed shifts (i.e., against the ionic strength effect of the mobile phase) imply chances of nucleation, because the mobile phase used in this case is known to easily crystallize the protein under test. However, the amount of injected protein per trial (100-200 µg HEWL), which should also have undergone some dilution in the column, is not expected to lead to the growth of even a critical nucleus. While only subtle increases in the molar masses were recorded, the change in elution position correlates well with the SLS-measured masses, and the changes in the measured masses are significantly above the errors of the measurement, as displayed in Figure 3. Beyond 0.7 and 1.2 M NaCl in the mobile phase of 1 and 2 mg/mL HEWL, respectively, the samples were prone to the formation of aggregates, shown as a light scattering peak preceding the main analyzed peak in Figures 1 and 3.

Figure 3a,b show the data for HEWL eluted using a mobile phase containing a wide range of NaCl concentrations at a protein concentration of 1 and 2 mg/mL, respectively. The retention time increases with increasing NaCl concentration in the mobile phase. At two supersaturation levels, the retention time showed a slight drop before it continued its increase. One drop was detected at the interface between the salting in (when the protein became more soluble upon the first addition of sodium chloride) and the salting out (when the protein became less soluble at increasing NaCl concentrations). On the graphs, the dashed blue line marks the limit between the salting in and the salting out as per our observations.
The second drop (indicated by the black arrow) occurred at the supersaturation level that most probably coincides with nucleation (the area between the two dashed red lines), as shown by the drop in mass recovery and the subtle increase in the molar mass. At higher supersaturation levels, additional bands for aggregates preceding the bands under study appeared on the LS graphs (shown on the graph as blue circles), which coincided with a drop in the molar masses of the main protein peaks and an increase in their polydispersity.
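The 'drop' behaviour described above lends itself to a simple automated check. The sketch below assumes the retention times have already been read off the chromatograms as a list ordered by NaCl concentration and flags every point at which the retention time decreases relative to the previous run; the numerical values in the example are placeholders, not the measured data of this study.

```python
# Minimal sketch: flag local decreases ("drops") in a retention-time series.
def find_retention_drops(nacl_molar, retention_min):
    """Return (NaCl concentration, size of drop) for every point at which the
    retention time decreases relative to the previous, lower-salt run."""
    drops = []
    for prev, curr, conc in zip(retention_min, retention_min[1:], nacl_molar[1:]):
        if curr < prev:                      # reversed shift: faster elution
            drops.append((conc, prev - curr))
    return drops

if __name__ == "__main__":
    nacl = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0, 1.2]    # mol/L (illustrative values)
    rt   = [6.9, 6.5, 6.6, 6.8, 6.7, 6.9, 7.1]    # minutes (illustrative values)
    for conc, delta in find_retention_drops(nacl, rt):
        print(f"retention-time drop of {delta:.2f} min at {conc:.1f} M NaCl")
```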
The complementary experiment, monitoring the protein assembly by injecting increasing concentrations of the protein under test while the mobile phase NaCl concentration was kept constant, was more straightforward. In this case, the mobile phase ionic strength does not interfere with the peak position-molecular size/mass relationship, as the ionic strength is kept constant. Increasing the concentration of HEWL injected into the mobile phase that contains a crystallizing agent (NaCl) resulted in peak shifts to the left, as shown in Figure 4. These shifts again coincided with an increase in the molar mass as measured by SLS. This again clearly indicates the possibility of monitoring protein inter-molecular interactions that might be indicative of assembly by observing peak shifts on SEC chromatograms. At low salt concentration in the mobile phase (0.5 and 1.0 M NaCl), one population was detected, as displayed in Figure 4a,b. However, at high salt concentration (1.5 M NaCl), protein assembly was also associated with protein aggregation, as can be seen in Figure 4c.

In summary, the SEC elution point of HEWL correlated with the SLS-determined molecular masses of the samples, demonstrating that SEC alone can provide data on the self-assembly of macromolecules. It should be stressed that at no point were masses indicative of dimers or higher order oligomers. SLS measures a volume-averaged molar mass, meaning that changes in the recorded mass indicate a small subset of the sample being present as higher order assemblies, allowing an estimate of A22 to be made. Our data suggest that the same A22 estimate may also be made directly from SEC chromatograms in the absence of SLS data.
Since the effect of NaCl on most proteins is an increased solubility, we selected another precipitant, (NH4)2SO4, that is known to have the same salting out effect on a different model protein: bovine trypsin [37]. We followed the same two systematic studies we conducted for HEWL: with increasing (NH4)2SO4 concentration at constant trypsin concentration and vice versa. It was difficult to follow the variations with increasing trypsin concentrations, due to the early appearance of aggregates. However, increasing the (NH4)2SO4 concentrations led to similar variations, as shown in Table 1. Trypsin eluted earlier when (NH4)2SO4 was added to the mobile phase and this coincided with an increase in molar mass and polydispersity. The protein showed higher solubility (lower molar mass and polydispersity) at (NH4)2SO4 concentrations higher than 1 M and eluted later from the column (salting in). However, trypsin showed an increasing degree of assembly at higher (NH4)2SO4 concentrations (salting out). Using a mobile phase of 1.5 M (NH4)2SO4, trypsin displayed two peaks on the chromatogram: one corresponding to a trimeric assembly and a second monomeric peak. It is notable that two distinctly different degrees of assembly obtained using 0 and 1.5 M (NH4)2SO4 in the mobile phase eluted at similar volumes (Table 1). In addition, the larger trimeric assembly of trypsin in 1.5 M (NH4)2SO4 eluted later than the monomeric species in 0.5 M (NH4)2SO4. These observations are consistent with our observations above for HEWL in increasing NaCl concentrations.
Discussion
The conventional application of SEC in structural biology laboratories is to fractionate a given biological sample, for instance by separating monomers from aggregates, to perform a molecular weight distribution analysis, and to facilitate protein storage by changing the mobile phase composition. It is generally assumed that, while the sample concentration can interfere with the resolution of sample fractionation, elution points on chromatograms are largely independent of the components of the mobile phase. In contrast, we show here that variations in the composition of the mobile phase can have a large impact on the retention time/volume. Not only can these variations affect the subsequent evaluation of the eluted macromolecules, but they can also result in detectable changes in their state (e.g., self-assembly, aggregation, or precipitation). This dependence on the mobile phase components could be highly problematic for experiments performed with the intention of fractionating or purifying an injected macromolecular sample. Therefore, precise calibration curves specific for every mobile phase should be plotted. In addition, careful choice of the mobile phase may provide a mechanism to further enhance separation of proteins of interest from contaminants using SEC approaches.
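As a rough illustration of the calibration point made above, the following sketch fits a conventional SEC calibration (log10 of molar mass against elution volume) for one particular mobile phase and uses it to estimate an apparent mass for an unknown peak. The standards, their elution volumes, and the choice of a straight-line fit are illustrative assumptions, not values or procedures from this study.

```python
import numpy as np

# Per-mobile-phase SEC calibration sketch: fit log10(molar mass) vs. elution
# volume for standards run in that mobile phase, then estimate an apparent
# mass for an unknown peak.  All numbers below are placeholders.

def fit_calibration(elution_ml, molar_mass_da):
    slope, intercept = np.polyfit(elution_ml, np.log10(molar_mass_da), 1)
    return slope, intercept

def apparent_mass(elution_ml, slope, intercept):
    return 10 ** (slope * elution_ml + intercept)

standards_vol  = np.array([1.2, 1.5, 1.8, 2.1])          # mL (illustrative)
standards_mass = np.array([6.7e4, 4.4e4, 1.4e4, 6.5e3])  # Da (illustrative)

slope, intercept = fit_calibration(standards_vol, standards_mass)
print(f"unknown peak at 1.9 mL ~ {apparent_mass(1.9, slope, intercept):.0f} Da")
```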
Furthermore, we believe this to be the first demonstration that SEC can be used to study changes in molecular mass that we propose are due to variations of A22 values across a phase diagram. Our observations are confirmed by measurable changes in the molecular masses retrieved from our inline SLS measurements. Similar variations in measured solution molecular mass have been reported by both DLS and X-ray/neutron scattering and have been accepted to represent the effects of changes in A22 with solution composition. SEC experiments using recently available microcolumns hold the potential for a similar scanning of solution behavior and may yet provide an alternative route to the prediction of crystallization conditions. We are currently developing an experimental setting that will make it possible to test mobile phase conditions for proteins that have not been previously crystallized.
Solution Conditions
Hen egg-white lysozyme (HEWL) and bovine trypsin were purchased from Sigma-Aldrich (Darmstadt, Germany; Cat. No. L4919) and Amresco LLC (Ohio, USA; Lot 2555C052), respectively, and used without further purification. Per the product information, the material was ~98% pure monomer. All other chemicals were reagent grade. HEWL was dissolved in 0.1 M sodium acetate (NaOAc) that was adjusted to pH 4.5 using acetic acid. The mobile phases were prepared from a range of sodium chloride (NaCl) concentrations buffered with 0.1 M NaOAc, pH 4.5, because this combination is known to facilitate the crystallization of HEWL in two polymorphic forms: tetragonal and orthorhombic crystals [38][39][40]. Similarly, trypsin was dissolved in water with an adequate amount of calcium chloride and benzamidine HCl to prevent self-cleavage. The mobile phases for trypsin were prepared from a range of ammonium sulfate ((NH4)2SO4) concentrations adjusted to pH 8.5 using 0.1 M Tris buffer. This combination is known to lead to the growth of orthorhombic trypsin crystals [41]. Before every SEC experiment a freshly prepared mobile phase was degassed and filtered through a 0.22 µm membrane. Relying on the known phase diagram and solubility values of HEWL, we performed our scans in two ways to study both the effect of mobile phase ionic strength and sample concentration [42,43]. First, we scanned a defined HEWL concentration at increasing precipitant concentrations in the mobile phase. For this, we selected two HEWL concentrations (1 and 2 mg/mL) against a wide range of NaCl concentrations (0.1-2 M) in the mobile phase. The second approach was to scan increasing HEWL concentrations (0.5-2 mg/mL) at discrete constant mobile phase concentrations (0.5, 1.0 and 1.5 M NaCl). Scanning increasing trypsin concentrations against relatively low (NH4)2SO4 concentrations (0.5 M) led to aggregation on the column. Therefore, we focused on performing SEC at low trypsin concentration (2.0 mg/mL) with increasing (NH4)2SO4 concentrations (0-2 M).
Experimental Setups
A brand new microcolumn (Superdex 75 5/150 GL) mounted on an NGC (Next Generation Chromatography) chromatography system, (BioRad, Berkeley, CA, USA) operated at a flow rate of 0.35 mL/min was used for the whole set of SEC experiments. The system was equilibrated using the selected NaCl concentration using the buffer blending module, ensuring a minimization of random errors in the preparation of the mobile phase. For every chromatographic run, equal volumes of the required HEWL or trypsin concentration were dissolved in their respective buffer (water in case of trypsin) and manually injected into the NGC system. The selected injected volume to total column volume ratio facilitated a high-resolution fractionation regime. The sample molar mass, mass recovery, and polydispersity were monitored by using static light scattering (SLS) measurements provided by an in-line miniDAWN TREOS, (Wyatt, Santa Barbara, CA, USA) and its associated Astra 6.1 software (Wyatt, Santa Barbara, CA, USA). | 2019-04-09T13:06:31.714Z | 2017-10-30T00:00:00.000 | {
"year": 2017,
"sha1": "a20a1ff5bf64fdc2b2f55f04fb5253ebdb61ff7a",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4352/7/11/331/pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "e3d3263f9233b1229659931e03f38d805f242024",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
119711024 | pes2o/s2orc | v3-fos-license | Wronskian method for bound states
We propose a simple and straightforward method based on Wronskians for the calculation of bound-state energies and wavefunctions of one-dimensional quantum-mechanical problems. We explicitly discuss the asymptotic behavior of the wavefunction and show that the allowed energies make the divergent part vanish. As illustrative examples we consider an exactly solvable model and the Gaussian potential well.
Introduction
Wronskians (or Wronskian determinants) are most useful for the analysis of ordinary differential equations in general [1] and the Schrödinger equation in particular [2]. Whitton and Connor [3] applied a remarkable Wronskian formalism to resonance tunnelling reactions and we have recently proposed that a closely related method may be suitable for teaching quantum scattering in advanced undergraduate and graduate courses in quantum mechanics [4].
The purpose of this paper is to show that the Wronskian method is also useful for the study of bound states of one-dimensional quantum-mechanical models. We believe that this variant of the approach is also suitable for pedagogical purposes and enables a unified treatment of the discrete and continuous spectra of simple quantum-mechanical models.
In section 2 we convert the Schrödinger equation into a dimensionless eigenvalue equation and show how to apply the Wronskian method to bound states. In section 3 we illustrate the application of the approach by means of two suitable examples. In section 4 we outline the main results of the paper and draw conclusions. Finally, in an Appendix we outline the main properties of Wronskians that are relevant to present discussion.
The Schrödinger equation
Before solving the Schrödinger equation it is a good practice to convert it into a dimensionless eigenvalue equation. In this way one removes all the physical constants and reduces the number of model parameters to a minimum. The time-independent Schrödinger equation for a particle of mass m that moves in one dimension (−∞ < X < ∞) in a potential V(X) reads −(ℏ²/2m)ψ''(X) + V(X)ψ(X) = Eψ(X), where a prime indicates derivative with respect to the coordinate. If we define the dimensionless coordinate x = X/L, where L is an appropriate length scale, then we obtain the dimensionless eigenvalue equation −(1/2)ψ''(x) + v(x)ψ(x) = ǫψ(x), where ǫ = mL²E/ℏ² and v(x) = mL²V(Lx)/ℏ². The length unit L that renders both ǫ and v(x) dimensionless is arbitrary and we can choose it in such a way that makes the Schrödinger equation simpler. We will see some examples in section 3.
It is well known that a general solution to the second-order differential equation (2) can be written as a linear combination of two linearly independent solutions. Here we write ψ(x) = A2 C(x) + B2 S(x), where the solutions C(x) and S(x) satisfy C(x0) = 1, C'(x0) = 0, S(x0) = 0, S'(x0) = 1 at a given point x0 in (−∞, ∞). These conditions are sufficient to ensure that C(x) and S(x) are linearly independent [2].
For every value of the dimensionless energy ǫ we know that ψ(x) → A1 Lc(x) + B1 Ld(x) as x → −∞ and ψ(x) → A3 Rc(x) + B3 Rd(x) as x → ∞, where L and R stand for left and right and c and d for convergent and divergent, respectively. It means that, for arbitrary ǫ, the wavefunction is a linear combination of a convergent and a divergent function when |x| → ∞. If, for a particular value of ǫ, B1 = B3 = 0 then the resulting wavefunction is square integrable. In other words, this condition determines the energies of the discrete spectrum.
It follows from the Wronskian properties outlined in the Appendix that B1 = W(Lc, ψ)/W(Lc, Ld) and B3 = W(Rc, ψ)/W(Rc, Rd). Therefore, when B1 = B3 = 0 we have a linear homogeneous system of two equations with two unknowns: A2 and B2. There will be nontrivial solutions provided that its determinant vanishes, W(Lc, C)W(Rc, S) − W(Lc, S)W(Rc, C) = 0. The roots of this equation ǫn, n = 0, 1, . . . are the energies of the bound states (discrete spectrum).
When the potential is parity invariant and x0 = 0 then C(x) and S(x) are even and odd functions, respectively. In this case we have W(Lc, C) = −W(Rc, C) and W(Lc, S) = W(Rc, S), and the determinant (7) takes a simpler form: W(Rc, C)W(Rc, S) = 0. We appreciate that the even and odd solutions are clearly separate and their eigenvalues are given by W(Rc, C) = 0 and W(Rc, S) = 0, respectively. Besides, we need to consider only the interval 0 ≤ x < ∞.
Commonly, it is not difficult to derive approximate expressions for the convergent and divergent asymptotic forms of the wavefunction because they are straightforwardly determined by the asymptotic behavior of the potential v(x). Therefore, it only remains to have sufficiently accurate expressions for C(x) and S(x) and their derivatives in order to obtain the eigenvalues by means of equation (7). This problem is easily solved by means of, for example, a suitable numerical integration method [5]. If y(x) stands for either C(x) or S(x) then such an approach gives us its values at a set of points, as well as the values of its derivative at the same set of points, which facilitates the calculation of the Wronskians.
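As a rough illustration of the procedure just outlined, the sketch below integrates C(x) and S(x) outward from x0 = 0 with a fourth-order Runge-Kutta scheme and evaluates W(Rc, C) and W(Rc, S) at a matching point xR. It assumes the dimensionless form −(1/2)ψ'' + v(x)ψ = ǫψ discussed in this section and a potential that vanishes as |x| → ∞, so that the convergent right solution can be approximated by exp(−√(−2ǫ) x); the function names and default parameters are ours, not the authors'.

```python
import numpy as np

# Assumed dimensionless equation: psi'' = 2*(v(x) - eps)*psi (eps < 0 for bound states).
# For a potential vanishing at infinity, Rc(x) ~ exp(-k x) with k = sqrt(-2*eps).

def rk4_solution(eps, v, y0, dy0, x_r=6.0, h=0.01):
    """Integrate psi'' = 2*(v(x)-eps)*psi from x0 = 0 to x_r; return psi, psi' at x_r."""
    x, y, dy = 0.0, y0, dy0
    f = lambda xx, yy: 2.0 * (v(xx) - eps) * yy
    for _ in range(int(round(x_r / h))):
        k1y, k1d = dy, f(x, y)
        k2y, k2d = dy + 0.5 * h * k1d, f(x + 0.5 * h, y + 0.5 * h * k1y)
        k3y, k3d = dy + 0.5 * h * k2d, f(x + 0.5 * h, y + 0.5 * h * k2y)
        k4y, k4d = dy + h * k3d, f(x + h, y + h * k3y)
        y += h * (k1y + 2 * k2y + 2 * k3y + k4y) / 6.0
        dy += h * (k1d + 2 * k2d + 2 * k3d + k4d) / 6.0
        x += h
    return y, dy

def wronskians(eps, v, x_r=6.0, h=0.01):
    """Return W(Rc, C) and W(Rc, S) at x_r; their zeros in eps give the even
    and odd bound states, respectively."""
    k = np.sqrt(-2.0 * eps)
    rc, drc = np.exp(-k * x_r), -k * np.exp(-k * x_r)   # convergent asymptotic form
    c, dc = rk4_solution(eps, v, 1.0, 0.0, x_r, h)       # C(0) = 1, C'(0) = 0 (even)
    s, ds = rk4_solution(eps, v, 0.0, 1.0, x_r, h)       # S(0) = 0, S'(0) = 1 (odd)
    w = lambda f1, df1, f2, df2: f1 * df2 - df1 * f2     # W(f, g) = f g' - f' g
    return w(rc, drc, c, dc), w(rc, drc, s, ds)
```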
Examples
In order to test the accuracy of the Wronskian method we first choose the exactly solvable problem given by the potential V(X) = −V0/cosh²(αX), where V0 > 0 and α > 0. If we set L = 1/α we are led to the dimensionless Schrödinger equation (2) with v(x) = −v0/cosh²(x). Note that the dimensionless energy ǫ depends on only one independent potential parameter v0. The units of length and energy are 1/α and ℏ²α²/m, respectively, and we do not have to bother about the mass of the particle and the Planck constant when solving the differential equation. The allowed dimensionless energies are known in closed form for the even and odd states [6].
Since the potential (11) is parity invariant we integrate the Schrödinger equation outward from x0 = 0. Fig. 1 shows the Wronskians W(Rc, C) and W(Rc, S) for ǫ = −1 and v0 = 2.5. We appreciate that x = xR = 5 is large enough to have constant asymptotic Wronskians and we choose this coordinate value from now on.
This numerical test also shows that it is sufficient for present purposes to set h = 0.01 and NR = 500 in the fourth-order Runge-Kutta method [5] built into the computer algebra system Derive (http://www.chartwellyorke.com/derive.html). Squares mark the exact critical parameters ǫ. We see that the Wronskians vanish at the exact eigenvalues given by equation (12).
This is a confirmation of our earlier assumption that the number of steps and their size are suitable for obtaining reasonable results. We next consider the Gaussian potential well V(X) = −V0 exp(−αX²), where V0 > 0 and α > 0, which we easily convert into the dimensionless potential v(x) = −v0 exp(−x²) by means of the length unit L = 1/√α. This potential is also parity invariant and vanishes asymptotically as |x| → ∞ so that the calculation is similar to the preceding example. In order to show that the Wronskian method also applies successfully to this model we first obtain some critical values of the potential parameter v0. Fig. 4 shows W(Rc, C) and W(Rc, S) for this potential.
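A short usage sketch, reusing wronskians() from the listing in section 2: it scans a grid of dimensionless energies for the Gaussian well v(x) = −v0 exp(−x²) and reports the intervals in which W(Rc, C) or W(Rc, S) changes sign, which bracket the even and odd bound states. The value v0 = 5.0 is an arbitrary illustration, not one of the critical parameters computed in the paper.

```python
import numpy as np

# Assumes wronskians() from the earlier sketch is available in the same session.
v0 = 5.0
v = lambda x: -v0 * np.exp(-x ** 2)

eps_grid = np.linspace(-v0 + 1e-3, -1e-3, 400)
prev = wronskians(eps_grid[0], v)
for eps_a, eps_b in zip(eps_grid, eps_grid[1:]):
    curr = wronskians(eps_b, v)
    for parity, (wa, wb) in zip(("even", "odd"), zip(prev, curr)):
        if wa * wb < 0:   # sign change brackets an eigenvalue
            print(f"{parity} bound state bracketed in ({eps_a:.3f}, {eps_b:.3f})")
    prev = curr
```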
Conclusions
In our opinion the Wronskian method is sufficiently clear and straightforward for teaching an advanced undergraduate or graduate course in quantum mechanics. The mathematics requires no special background beyond an introductory calculus course.
Since many available software packages offer numerical integration methods, the programming effort is relatively light.
From a purely theoretical point of view the method is suitable for the discussion of the convergent and divergent asymptotic behaviors of the wavefunction and for illustrating how the allowed bound-state energies make the divergent part vanish leading to square-integrable wavefunctions. The students may try other quantum-mechanical models, derive the appropriate asymptotic behaviors analytically and then test their results by means of a suitable computer program. They can verify that the Wronskians already approach constants as the absolute value of the coordinate increases and that the wavefunction already looks like a square integrable function when the coefficient of the divergent contribution is almost zero. They can even estimate the remnants of the asymptotic divergent part because the Wronskian method provides the necessary coefficients.
In addition, the Wronskian method is also suitable for quantum scattering [4], allowing a unified treatment of both the discrete and continuous spectra of the model.
Finally, we point out that computer algebra systems are remarkable aids for the teaching and learning process because they facilitate the algebraic treatment of the problem and even offer the possibility of straightforward numerical calculations (although they are considerably slower than specialized numerical programs).
Appendix A. Wronskians
In order to make this paper sufficiently self-contained in this appendix we outline some well known results about the Wronskians that are useful for the study of ordinary differential equations in general [1] and also for the treatment of the Schrödinger equation in particular [2,3]. To this end, we consider the ordinary second-order differential | 2011-01-17T13:47:38.000Z | 2011-01-17T00:00:00.000 | {
"year": 2011,
"sha1": "274b0752436acf98fbd20f019998f4762842aec9",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1101.3209",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "274b0752436acf98fbd20f019998f4762842aec9",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Mathematics",
"Physics"
]
} |
49480370 | pes2o/s2orc | v3-fos-license | Evaluation of the use of electrocardiogram monitoring in patients on psychotropic medications that have a risk of QT prolongation
Introduction: Many psychotropic medications carry a risk of prolonging the QT interval and increasing the risk of developing Torsade de pointes (TdP). The goal of this study was to evaluate whether patients taking psychotropic agents with a known risk of TdP are being monitored at a community hospital through the use of electrocardiograms (EKGs). Methods: This was a retrospective chart review of 100 adult patients—50 from general medicine floors and 50 from psychiatric units—who were taking at least one psychotropic agent with a known risk of TdP during hospitalization. Results: The mean number of medications with QT-prolongation risk administered to the psychiatric and general medicine patients was 4.2 ± 1.7 and 3.9 ± 2.0, respectively (P = .7484). Thirty-two of the psychiatric patients (64%) and 48 of the general medicine patients (96%) received EKGs during their hospitalization (P < 0.0001). Of those newly starting the target medications, 58% (18 of 31) of the psychiatric patients and 71% (5 of 7) of the general medicine patients received a baseline EKG. The difference was not statistically significant (P = .6807). Overall, 8 patients (8%) had corrected QT (QTc) intervals >500 ms. Four had repeat EKGs performed, and none had medication changes made to decrease TdP risk. Discussion: Many inpatients on psychiatric medications received multiple medications with a risk of TdP, but not all received monitoring through baseline or repeat EKGs when warranted. Patients with QTc intervals >500 ms were not appropriately managed to lower their risk of TdP. Pharmacists thus can help improve the monitoring and management of QT prolongation.
Introduction
A prolonged QT interval is a risk factor for the development of Torsade de pointes (TdP), a potentially fatal arrhythmia. The risk for TdP increases as the corrected QT (QTc) interval increases; a QTc interval >500 ms is associated with a 2- to 3-fold higher risk of TdP compared with a QTc interval <500 ms. However, there is no threshold of QTc prolongation above which TdP is certain to occur (or below which it is certain not to occur). The risk of TdP can increase if more than one QT-prolonging drug is used concurrently. 1 Patients with risk factors such as age above 65 years, female sex, congenital long QT syndrome, bradycardia, and electrolyte disturbances are more likely to develop QTc prolongation. 2 Many medications often used in the psychiatric setting, such as tricyclic antidepressants, certain selective serotonin reuptake inhibitors, lithium, antipsychotics, and methadone, carry a risk of prolonging the QT interval and increase the risk of developing TdP. [3][4][5] Patients with psychiatric conditions are often on combinations of these medications, which may further increase their risk of developing TdP.
According to American Heart Association practice standards on electrocardiogram (EKG) monitoring in hospital settings, an EKG is indicated for patients who: begin to take a drug known to cause TdP, overdose from potentially proarrhythmic drugs, have new-onset bradyarrhythmias, or suffer from severe hypokalemia or severe hypomagnesemia. The QTc interval should be documented before and at least 8 to 12 hours after the initiation, increase in dose, or overdose of QT-prolonging drugs. 1 Patients with a QTc increase of at least 60 ms after medication initiation, or who have a QTc interval >500 ms, are at risk for TdP. A QTc of <450 ms in men or <470 ms in women is considered safe, with no need for additional intervention. 6 The goal of this study was to evaluate whether EKGs were being used to appropriately monitor patients at this inpatient facility who were taking psychotropic medications that have a known risk of TdP. The study also looked at the frequency of use of concomitant medications with a risk of QT prolongation. We compared the monitoring received on medical floors to that on the psychiatric units to see whether these patient populations are being managed differently.
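The monitoring thresholds quoted above can be expressed as a small screening rule. The sketch below simply encodes those cutoffs (QTc >500 ms, an increase of at least 60 ms from baseline, and the sex-specific "safe" limits); it is an illustration only, and the function name and return format are our own, not part of any clinical system.

```python
def qtc_flags(qtc_ms, baseline_qtc_ms=None, sex="F"):
    """Return the TdP-risk flags described above for a single QTc reading (ms).

    Cutoffs taken from the text: QTc > 500 ms, or an increase of at least 60 ms
    from baseline, indicates elevated TdP risk; QTc < 450 ms (men) or < 470 ms
    (women) is considered safe.  Sketch only, not clinical software.
    """
    flags = []
    if qtc_ms > 500:
        flags.append("QTc > 500 ms")
    if baseline_qtc_ms is not None and qtc_ms - baseline_qtc_ms >= 60:
        flags.append("QTc increase >= 60 ms from baseline")
    safe_cutoff = 450 if sex.upper().startswith("M") else 470
    if not flags and qtc_ms < safe_cutoff:
        flags.append("within safe range")
    return flags

print(qtc_flags(512, baseline_qtc_ms=440, sex="F"))
# -> ['QTc > 500 ms', 'QTc increase >= 60 ms from baseline']
```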
Methods
This was a retrospective chart review of patients admitted to Monmouth Medical Center, a community teaching hospital in Long Branch, New Jersey, between December 1, 2014, and January 31, 2015. Patients admitted to two general medicine and two psychiatric units were included if they were 18 years or older and had at least one standing order for a psychotropic medication classified by CredibleMedsW (AZCERT Inc, Oro Valley, AZ) as having a known risk for TdP (haloperidol, methadone, citalopram, escitalopram, droperidol, chlorpromazine, pimozide, or thioridazine). Although many psychiatric medications have a risk of causing QT prolongation, and subsequently TdP, these medications were chosen because they have the strongest data to support this risk. Medications in other risk categories (eg, possible or conditional risk) were included in the analysis of all medications the patient received. Patients were excluded if they did not receive at least one dose of the studied medication. The first 50 chronologic patients from each service (medicine or psychiatry) who met these criteria were included. If patients were moved from a general medicine to a psychiatric unit, or vice versa, each transfer was considered a separate admission. Data collection included patients' age, sex, admitting diagnosis, medical service, full medication list, medication administration times, EKG results, and serum electrolyte concentrations. For patients with multiple electrolyte lab values reported, only the first set of values obtained was used. The QTc intervals were automatically calculated by the EKG machine using Bazett formula. 7 Heart rates were obtained from EKG reports to determine whether bradycardia was present. Other risk factors for TdP and QT prolongation were also monitored for, including serum potassium and magnesium concentrations, when available. The first EKG obtained was used in the analysis for baseline EKGs as long as it was performed prior to or within 24 hours after starting the studied medication. Additional medications administered that have a possible or conditional risk of TdP were also recorded, including both standing orders and medications ordered on an asneeded basis. Each medication's risk of TdP was classified as known, possible (if it prolongs the QT interval but there is no substantial evidence that it causes TdP), or conditional (if it carries a risk of TdP and/or QT prolongation only under certain conditions, such as congenital long QT syndrome, drug overdose, or when coadministered with other QT-prolonging drugs) based on the classification by crediblemeds.org as of December 2, 2014. 3 Demographic characteristics and clinical parameters were compared by treatment group using the Fisher exact test for categoric variables and Student t test for continuous variables. Data were analyzed using GraphPad software (La Jolla, CA). This study was approved by the appropriate institutional review boards at Monmouth Medical Center and Rutgers University.
Results
A total of 764 patients were screened for inclusion: 212 from psychiatric units and 552 from general medicine units. Of the former, 54 met inclusion criteria, and the first 50 were included, along with the first 50 qualified patients reviewed from the general medicine units. stay. One patient had standing orders for both haloperidol and methadone, and one received both haloperidol and escitalopram. Of the patients in the general medicine units, 16 were on citalopram, 28 on escitalopram, 4 on haloperidol, and 3 on methadone. One patient received escitalopram and citalopram at different times during his stay. The mean daily doses for the standing orders of these medications were calculated using the highest daily dose achieved during inpatient stay and are reflected in Table 1.
Baseline characteristics of all patients are listed in Table 2.
Most of the patients from the psychiatric units (62%; n = 31) were not on the studied medication prior to admission, whereas most patients from the general medicine units (86%; n = 43) were continuing the medication (P < .001). The most common admitting diagnosis for the general medicine patients was shortness of breath (26%; n = 13), followed by medical or psychiatric screening exams (10%; n = 5). Other diagnoses included altered mental status, weakness, lethargy, chest pain, rectal bleeding, and a drug overdose. The drugs involved in overdose were not recorded as part of data collection.
A total of 49 of the psychiatric patients (98%) and all of the general medicine patients had potassium levels checked during hospitalization (P = 1.000). A total of 1 of the 49 psychiatric patients and 4 of the 50 general medicine patients had low potassium levels (<3.5 mEq/L) initially (P = .3622), 4 of whom (the general medicine patients) then received potassium supplementation. The psychiatric patient did not receive potassium for her potassium level of 3.4 mEq/L. Two general medicine patients with very low potassium levels of 2.9 and 2.8 mEq/L had prolonged QTc intervals: 465 and 550 ms, respectively. Both were treated with potassium supplementation and one was also treated with magnesium supplementation for a magnesium level of 1.5 mg/dL. Overall, magnesium monitoring was less frequent, particularly in psychiatric patients, with concentrations checked in 2 of the psychiatric patients (4%) and 19 of the general medicine patients (38%; P < .0001). Two patients, both from general medicine, had low magnesium concentrations (<1.5 mg/dL) and were treated with magnesium supplementation.
Results from baseline EKGs are shown in Table 4. Significantly more general medicine patients (96%; n = 48) than psychiatric patients (64%; n = 32) were monitored with EKGs during their stay (P < .0001). Of the psychiatric patients, 26 (52%) had an EKG done within 24 hours of admission or starting the medication. A total of 31 of the psychiatric patients (62%) and 7 general medicine patients (14%) started the medication during hospitalization (P < .0001). Of these, 58% (18 of 31) of the psychiatric patients and 71% (5 of 7) of the general medicine patients received an initial EKG prior to or within 24 hours of starting the medication (P = .6807). A total of 23% (7 of 31) of the psychiatric patients compared with 71% (5 of 7) of the general medicine patients who were started on the medication during hospitalization had a repeat EKG to monitor for changes in the QTc caused by the medication (P = .0224).
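The 2 × 2 comparison reported above (EKG obtained during the stay: 32 of 50 psychiatric vs 48 of 50 general medicine patients) can be reproduced with a Fisher exact test, as in this sketch; scipy is used here for illustration, whereas the study itself used GraphPad software.

```python
from scipy.stats import fisher_exact

# 2x2 table from the Results: rows are psychiatric / general medicine patients,
# columns are EKG obtained / not obtained during the hospital stay.
table = [[32, 50 - 32],   # psychiatric units
         [48, 50 - 48]]   # general medicine units
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.3f}, two-sided p = {p_value:.2e}")
```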
A total of 1 of 18 men (6%) and 7 of 18 men (39%) from the psychiatric units and general medicine units, respectively, had QTc intervals >450 ms (P = .0408). A total of 2 of 14 women (14%) and 14 of 30 women (46%) from the psychiatric units and general medicine units, respectively, had QTc intervals >470 ms at baseline (P = .0487). A total of 8% (n = 8) of patients had QTc intervals >500 ms (Table 5), all of them hospitalized in general medicine units. Only half of those with QTc intervals >500 ms had repeat EKGs performed, and none had medication changes made specifically to decrease TdP risk. One patient (a general medicine patient) with an initial QTc <500 ms had QTc prolongation to >500 ms on a repeat EKG. None of the patients had a change in their QTc of >60 ms from baseline. There were no documented episodes of TdP in any of the studied patients.
Discussion
Overall, patients received an average of 3 to 4 medications that pose a risk of QT prolongation during hospitalization. Some received as many as 8 or 9 such medications. When patients are acutely ill, they often receive more medications than in their typical regimen, which may also have a risk of causing QT prolongation. The prevalence of these interactions has been seen in other studies. One study of 592 hospitalized psychiatric patients found a total of 965 drug interactions in patient profiles, 11.7% (n = 113) of which carried a risk for QT prolongation. 8 Our study shows that most patients who were already on a psychiatric medication associated with a known risk of TdP received multiple medications that carry an additional risk of QT prolongation. Some recent data suggest that polypharmacy with QT-prolonging agents may not significantly increase the risk of QT prolongation over monotherapy. 9,10 Further studies on the impact of these interactions are warranted to fully understand their significance.
A total of 60% (23 of 38) of the patients who were newly started on one of the studied medications had an EKG. According to American Heart Association recommendations, they should receive both a baseline and a repeat EKG after starting the medication. 1 There was also a difference in the frequency of EKG monitoring between psychiatric and general medicine patients. This may be partly due to the fact that patients admitted to the general medicine floors likely received EKGs for medical reasons related to the cause of their admission and not purely for the monitoring of QTc intervals.
The low use of EKGs may not be unique to our institution and likely extends to outpatient populations. In a study of 3420 outpatients in the United Kingdom who were prescribed haloperidol during the study period, only 1.8% (n = 62) had an EKG at the start of treatment. 11 Given how often potentially QTc-prolonging medications are used, evaluating the need for EKG monitoring should be part of all psychiatric patients' care. An analysis conducted in Switzerland found that routine EKG monitoring on all admitted psychiatric patients was cost-effective at reducing sudden cardiac deaths, especially in cases of polypharmacy and illicit substance use. 12 Although normal QTc intervals for patients vary depending on sex, we chose to focus on those patients who had a QTc >500 ms because this is generally accepted as the cutoff above which the risk of TdP increases. 1 In our study, QTc prolongation above 500 ms was detected in 8% (8 of 100) of patients overall. There may have been more patients with QTc prolongation that were missed because of the low number of repeat EKGs performed on patients during their inpatient stay.
Except for correction of electrolyte abnormalities, no readily identifiable changes were made in the treatment of patients who had QTc intervals >500 ms. This presents an opportunity for improvement in patient care. All patients with prolonged QTc intervals should have, at a minimum, received repeat EKGs. In a population-based cohort study of 3484 patients 55 years and older who had records of multiple EKGs performed, those who had 2 consecutive EKGs with prolonged QTc intervals had an increased risk of sudden cardiac death compared with patients who had consistently normal QTc intervals (Bazett hazard ratio, 2.23; 95% confidence interval lower limit, 1.17). 13 Patients with a prolonged QTc interval in one EKG reading but not the other did not have an increased risk of sudden cardiac death compared with patients with consistently normal QTc intervals. 13 Although this study was not specifically in a psychiatric population and excluded patients on QTc-prolonging agents, it can be helpful in identifying patients who require an intervention based on repeat EKGs.
Patients with prolonged QTc intervals of >500 ms or with a change of >60 ms from baseline should have changes made to their medications in order to reduce the risk of TdP. Possible changes include switching to alternative agents with lower TdP risk, decreasing doses of the high-risk agents, identifying and rectifying drug interactions, or adjusting electrolyte imbalances. In our study, no changes were made to the studied medications in patients with prolonged QTc intervals. It was not possible to determine from the medical record how long patients had been on these medications prior to admission, although most of the patients were on these medications prior to admission. Also, no earlier EKG results from prior to this hospitalization were checked for comparison.
Patients were also poorly monitored for electrolyte abnormalities. Most patients were monitored with baseline chemistry panels, which included potassium concentrations, but very few were checked for magnesium levels, especially in the psychiatric population. High-risk patients, including those with cardiac conditions, female patients, and older patients, should be assessed for all risk factors to ensure their safety when starting a medication with a relatively higher risk of TdP.
The present study has limitations. Because we focused our screening criteria on psychotropic medications, the results from our general medicine population may not be generalizable to all such patients. Another limitation is that we included EKGs that were performed within 24 hours of hospital admission. Ideally, these would have been done prior to initiating the medication; however, whether this is reasonable depends on EKG availability. Thus, the actual rate of EKGs performed prior to starting one of the studied medications may be even lower than reported here. The QTc intervals were only calculated using the Bazett formula, which works best when the heart rate is between 60 and 100 beats per minute, possibly overcorrecting at slower rates and undercorrecting at faster ones. The Fridericia formula for correction would have been another option: it has the same limitations at slow rates but is more accurate at faster ones. 14 Moreover, risk factors for TdP were not comprehensively assessed. Only the initial electrolyte levels were recorded, even though many patients, especially those from the general medicine units, had repeat labs done.
Also, aside from admitting diagnoses, we did not assess medical histories, including TdP risk factors like cardiovascular conditions and renal or hepatic impairment.
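For reference, the two correction formulas mentioned in the limitations can be written out directly: Bazett divides the measured QT by the square root of the RR interval (in seconds), and Fridericia divides it by the cube root. The sketch below applies both to an illustrative reading that is not taken from the study.

```python
def qtc_bazett(qt_ms, rr_s):
    """Bazett correction: QTc = QT / RR**(1/2), QT in ms and RR in seconds."""
    return qt_ms / rr_s ** 0.5

def qtc_fridericia(qt_ms, rr_s):
    """Fridericia correction: QTc = QT / RR**(1/3)."""
    return qt_ms / rr_s ** (1.0 / 3.0)

# Illustrative values only: QT = 400 ms at 50 beats/min (RR = 1.2 s).
rr = 60.0 / 50.0
print(f"Bazett:     {qtc_bazett(400, rr):.0f} ms")
print(f"Fridericia: {qtc_fridericia(400, rr):.0f} ms")
```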
Conclusions
This study assessed the appropriateness of EKG monitoring in patients taking psychotropic medications with a known risk for causing TdP. Improvements can be made in both the psychiatric and general medicine areas of the hospital: more frequent monitoring of EKGs at baseline when starting a medication that can prolong the QT interval; monitoring of electrolytes, such as potassium and magnesium, in patients at a higher risk for TdP; and, perhaps most importantly, mitigating the risk of TdP through medication changes in patients who develop QTc prolongation. Pharmacists can play an important role in monitoring for QTc prolongation in patients and recommending medication changes in those patients at higher risk for TdP. | 2018-07-12T06:15:08.544Z | 2016-06-29T00:00:00.000 | {
"year": 2016,
"sha1": "cd9dcca7c2a7bd3a8bd6955684c193b4cea9b845",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.9740/mhc.2016.07.171",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "cd9dcca7c2a7bd3a8bd6955684c193b4cea9b845",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
3046153 | pes2o/s2orc | v3-fos-license | A Reinterpretation of the Cytoarchitectonics of the Telencephalon of the Comoran Coelacanth
The cytoarchitecture of the telencephalon of the Comoran coelacanth, Latimeria chalumnae, was analyzed in the context of recent advances in our understanding of telencephalic organization in lungfishes and amphibians, which constitute the sister group to coelacanths. In coelacanths, the telencephalon is divided into pedunculated olfactory bulbs, paired hemispheres, and an unevaginated telencephalon impar. The hemispheres consist of a ventrally located subpallium and, dorsally, a greatly expanded pallium. Traditionally, the subpallium in coelacanths has been divided into a medial septal area and a lateral striatum. Re-examination of the lateral subpallial wall, however, suggests that the striatum is more restricted than previously believed, and it is replaced dorsally by a more scattered plate of cells, which appears to represent the ventral pallium. The putative ventral pallium is continuous with a ventromedial pallial formation, which appears to receive input from the lateral olfactory tract and should be considered a possible homolog of the lateral pallium in tetrapods. The putative lateral pallium is replaced by a more dorsomedial pallial formation, which may represent the dorsal pallium. This formation is replaced, in turn by an extensive lateral pallial formation, which appears to be homologous to the medial pallium of tetrapods. An expanded medial pallium in coelacanths, lepidosirenid lungfishes, and amphibians may be related to well developed spatial learning. Traditionally, the telencephalon impar of coelacanths, has been interpreted as an enlarged preoptic area, but reanalysis indicates that the so-called superior preoptic nucleus actually consists of the medial amygdalar nucleus.
Materials and Methods

Animals and Tissue Processing
Two brains of L. chalumnae were dehydrated, paraffin embedded, and cut into 15 μm serial sections in the transverse plane. The first brain was donated by the Field Museum of Natural History, Chicago, IL, USA (CCC number 61 of Bruton and Coutouvidis, 1991). Unfortunately, the body of this coelacanth was frozen prior to fixation, and the brain histology is marginal. The second brain was dissected from specimen 80 (Bruton and Coutouvidis, 1991) and fixed in 4% paraformaldehyde shortly after death, and the brain histology is very good. The serial sections of this brain were divided into three series: One series was stained with the Bodian silver method to reveal myelinated and unmyelinated fiber tracts; A second series was stained with 1% cresyl violet to reveal neuronal cell bodies; The third series was stained with the Klüver-Barrera method to reveal myelinated tracts and neuronal cell bodies.
Introduction
During the Devonian, approximately 415 to 360 million years ago, the Sarcopterygii (lobe-finned fishes) constituted one of two major radiations of bony fishes. Today, only three groups of Sarcopterygii still exist: the coelacanths, the lungfishes, and the limbed vertebrates (Tetrapoda). All recent phylogenetic data indicate that lungfishes are the sister group of tetrapods, and that coelacanths are, in turn, the sister group of these taxa (Brinkmann et al., 2004;Takezaki et al., 2004). There are only two known living coelacanth species: the Comoran coelacanth, Latimeria chalumnae, and the Indonesian coelacanth, L. menadoensis. Both species are 1-2 m in length and weigh 65-100 kg. Both inhabit marine rocky slopes with caves at depths of 100-700 m. L. chalumnae was discovered in 1938, and over 100 specimens have been preserved in museums internationally. Their current population, however, is estimated to be less than 400 individuals (Hissmann et al., 1998). L. menadoensis was discovered only in 1997, and there are no estimates of their population size.
Although the anatomy of L. menadoensis has not been described, the general anatomy of L. chalumnae is known in detail (Millot and Anthony, 1958, 1965; Millot et al., 1978), and there are several descriptions of the brain, including detailed reports on cell groups and major pathways (Nieuwenhuys, 1965, 1998; Nieuwenhuys et al., 1977; Nieuwenhuys and Meek, 1990). However, recent molecular and connectional advances in the study of telencephalic organization in lungfishes and amphibians warrant a reanalysis of the coelacanth telencephalon.

Cytoarchitectonic Criteria

Six criteria are commonly used to recognize cell groups cytoarchitectonically: (1) differences in neuronal cell size; (2) differences in neuronal density; (3) relatively cell-free zones, which frequently indicate boundaries between cell groups; (4) differences in the distribution of genetic and immunohistochemical markers; (5) differences in the connectivity of the suspected cell groups; and (6) phylogenetic continuity of cellular groups and their molecular markers and connections within a clade. Frequently, when a brain region is first examined cytologically, the first three criteria are primarily used to generate hypotheses regarding the number of cell groups, their topology, and their possible homologs in other taxa. Ideally, these hypotheses are then tested using criteria 4 and 5. Criterion 6 is critical at both levels of analysis. Firstly, it has a predictive function at the first level of analysis, as brain organization within a clade is generally conserved, and cell groups that occur within members of a clade should be widely, if not universally, present (plesiomorphs) and can be predicted to occur in a member of the clade that has not yet been examined. Secondly, this criterion is critical in proposing homologies between cell groups in different members of a clade, as a cell group and its molecular and hodological characteristics should be widely, if not universally, present (plesiomorphs), or they should exhibit linear transformation (apomorphs) within the clade.
Unfortunately, in the case of the Comoran coelacanth, there are no molecular or hodological data to aid in proposing and evaluating hypotheses of the homology of neuronal cell groups. Thus, criteria 1-3, aided by the predictive power of criterion 6, are all that is presently available on which to base hypotheses regarding individual cell groups and their possible homology with cell groups in other taxa of lobe-finned fishes. For this reason, all statements regarding the recognition of cell groups in the telencephalon of the Comoran coelacanth, and their possible homologs in other taxa, can be only considered tentative. Hopefully they will be tested when molecular data becomes available, but given the rarity and endangered status of coelacanths, it may never be possible to undertake hodological experiments.
Results
The olfactory bulbs were not collected with either of the brains so the cytoarchitecture of only the telencephalic hemispheres and the telencephalon impar is described. The olfactory bulbs lie immediately adjacent to the olfactory organs, so that the olfactory nerves are very short. The axons of the olfactory bulb collect as slender olfactory peduncles, which may be as long as 20 cm and can be traced caudally into the most rostral division of the telencephalic hemispheres, which form distinct lobes termed the rostral bodies ( Figures 1A,B and 2A,B).
Rostral Bodies
Each rostral body consists of a medial ependymal surface with scattered neurons that surround a dense core of secondary olfactory fibers (Figures 2A,B). These fibers arise from the olfactory peduncles, which divide into dorsally and ventrally situated tracts -i.e., the lateral and medial olfactory tracts ( Figure 2B)as each peduncle enters the ipsilateral rostral body along its lateral margin. Although the fibers of the olfactory peduncle initially divide into dorsal and ventral tracts as they course along the lateral margin of the rostral bodies (Figures 2A,B), these tracts are interpreted as homologous to the lateral and medial olfactory tracts of tetrapods, based on their subsequent trajectories in the telencephalic hemispheres. The fibers of both secondary olfactory tracts densely ramify within the rostral body, and it is clear that this body primarily receives olfactory input. The secondary olfactory tracts can be clearly distinguished from one another by their relative positions in the rostral body, and also by differences in fiber size, as fibers in the lateral olfactory tract are far more robust.
Many of the fibers of the medial olfactory tract terminate within the ventral half of the rostral body, but a number of these fibers continue more caudally and appear to innervate the subpallium of the rostral pole of the more caudal telencephalic hemisphere. The lateral olfactory tract, in addition to innervating the dorsal half of the rostral body, continues caudally as a distinct tract and enters the rostral pallium of the telencephalic hemisphere, where it can be traced medially into the core of the pallium ( Figure 2C).
The rostral bodies appear to be pallial in origin, as caudally each rostral body is gradually replaced by the cell groups that form the rostral pallial telencephalon ( Figure 2C). The cells of the dorsolateral rostral body are the first to be replaced by the larger cells of the medial pallium. Slightly more caudally the dorsomedial cells of the rostral body are replaced by the cells of the dorsal pallium, which at this level ( Figure 2C) are embedded in a pale-staining neuropil. There is no distinct border between the ventral cells of the rostral body and those of the lateral pallium, only a gradual transition from one cell group to the other, marked by an expansion of the lateral olfactory tract.
Telencephalic Hemispheres
The telencephalic hemispheres of the Comoran coelacanth are clearly evaginated, with the subpallium consisting primarily of a distinct periventricular cellular plate, and the pallium comprising a massive lobe, which appears to have protruded into the lateral ventricle ( Figure 1D). There is thus little evidence that the pallium participated in the extensive evagination that characterizes the subpallium.
A number of divisions of the subpallium can be recognized by differences in the thickness of its periventricular cellular plate. Medially, the periventricular plate lies very close to the ependymal lining of the ventricle and is approximately four to five cells in thickness, with the exception of a distinct cluster of neurons which occurs adjacent to the lamina terminalis ( Figure 1D). The topology of the medial periventricular plate, and the migrated cell group at the border of the lamina terminalis, suggest that these cell groups are homologous to the lateral and medial septal nuclei, respectively, in other lobe-finned fishes.
As the subpallial periventricular plate is traced laterally, the plate becomes more distal to the ependymal layer of the lateral ventricle and thickens to approximately 10 cells (Figures 1D and 2D). It continues to thicken as it is traced laterally, as does the entire wall of the subpallium, so that the periventricular plate forms a distinct ventrolaterally directed cellular prominence with the deeper subpallial wall, bulging into the lateral ventricle (Figures 1D,E). This cellular prominence may mark the boundary between a dorsally situated striatum and a more ventrally situated pallidum, but this interpretation remains speculative without immunohistochemical data.

At this level (Figure 2C), the dorsal pallium appears to be a division of the pallium sandwiched between the more dorsolaterally situated medial pallium and the more ventrally situated lateral pallium. The cells of the dorsal pallium are smaller than those of either the medial or lateral pallial divisions and are surrounded by a paler staining neuropil (Figure 2C). At this level, the lateral pallium occupies the ventral half of the rostral pallial lobe, and its core consists of numerous fascicles of the lateral olfactory tract (Figure 2C), which appear to radiate into all three pallial divisions. More caudally, the lateral pallium appears to be displaced from the lateral pial surface by the medial pallium (Figures 1D,E), but it is laterally continuous with the putative ventral pallium and is still bordered dorsally by the dorsal pallium. The lateral pallium at mid-hemispheric levels consists of a core of fascicles of the lateral olfactory tract and a neuropil surrounded by a more periventricularly located band of cells that is many times thicker than the central neuropil when seen in the transverse plane (Figure 2F). It has not been possible to trace fascicles of the lateral olfactory tract into the periventricular cellular band. Even at mid-hemispheric levels, fascicles of the lateral olfactory tract can still be seen more laterally within the neuropil of the ventral pallium. The ventral pallium can be divided into a periventricularly located cell plate and a more superficial molecular layer (Figures 1D,E) which can be clearly distinguished from the adjacent neuropil of the medial pallium. Throughout its rostrocaudal extent, the dorsal pallium can be distinguished from the lateral and medial pallia, because its cells are more scattered than those of the other two divisions (Figure 2G). Furthermore, even though the dorsal pallial cells are more scattered, their lamination is more distinct than that of cells in either the lateral pallium or the medial pallium. The cellular density of the medial pallium is at least twice that of the dorsal pallium (Figure 2G). The cells of the medial pallium also are larger and are embedded in a denser neuropil consisting of larger fibers (Figure 2H).
As the subpallial periventricular cellular plate is traced further laterally and dorsally, its characteristics again change. Near the extensive recess of the lateral ventricle, which appears to separate the subpallium and pallium, its cells become more scattered (Figure 1D), and they take on a typical pallial appearance dorsal to the lateral ventricular recess ( Figure 2E). The topology and cellular organization of the subpallial periventricular plate suggest that this region is not part of the subpallium but is, rather, the ventral most division of the pallium, termed the ventral pallium in tetrapods.
The remaining three divisions of the pallium (the medial, dorsal, and lateral divisions) traditionally recognized in other lobe-finned fishes are clearly evident in the Comoran coelacanth. These divisions and their topology are most obvious in the rostral pole of the pallium (Figure 2C). The medial pallium is the most dorsally situated pallial division and, as in other lobe-finned fishes, it is continuous with the even more dorsally situated telencephalic tela (Figure 2C). At this level, the dorsal pallium appears to be a division of the pallium sandwiched between the more dorsolaterally situated medial pallium and the more ventrally situated lateral pallium. The cells of the dorsal pallium are smaller than those of either the medial or lateral pallial divisions and are surrounded by a paler staining neuropil (Figure 2C). At this level, the lateral pallium occupies the ventral half of the rostral pallial lobe, and its core consists of numerous fascicles of the lateral olfactory tract (Figure 2C), which appear to radiate into all three pallial divisions. More caudally, the lateral pallium appears to be displaced from the lateral pial surface by the medial pallium (Figures 1D,E), but it is laterally continuous with the putative ventral pallium and is still bordered dorsally by the dorsal pallium. The lateral pallium at mid-hemispheric levels consists of a core of fascicles of the lateral olfactory tract and a neuropil surrounded by a more periventricularly located band of cells that is many times thicker than the central neuropil when seen in the transverse plane (Figure 2F). It has not been possible to trace fascicles of the lateral olfactory tract into the periventricular cellular band.

[Figure 1 legend fragment: (C) line drawing after Nieuwenhuys (1965) and Nieuwenhuys and Meek (1990); (D) Nissl-stained section from the present study; (E) line drawing of the cell-group boundaries as interpreted in the present study, with dotted lines marking the lateral olfactory and olfactohabenular tracts and heavy dashed lines marking the subpallial-pallial boundary, plus an abbreviation key for the anatomical labels.]
At caudal hemispheric levels, the pallial lobe has a deep horizontal sulcus along its medial edge, which clearly separates the dorsal and medial pallia from the lateral pallium. At this level, the periventricular cellular band of the lateral pallium decreases in cross-sectional thickness and forms a distinct periventricular cellular plate, which almost completely encircles the core of fascicles of the lateral olfactory tract. Lamination of the dorsal pallium also becomes more distinct, with the laminae being broader dorsally than ventrally as the caudal border of the dorsal pallium is reached. The medial pallium becomes narrower in cross-sectional thickness but appears to end more caudally in the telencephalon impar (Figure 3B).
telencephalon impar
The telencephalon impar of the Comoran coelacanth is an unevaginated caudal part of the telencephalon, which forms a simple tube whose floor is formed rostrally by the optic chiasm and the walls of the preoptic recess of the third ventricle. Its lateral walls are slightly thickened, whereas its roof consists of a non-neural ependymal transverse velum (Figure 3). Unlike the situation in other vertebrates, the infundibulum of the hypothalamus and the hypophysis extend rostrally rather than caudally, apparently due to a general caudal shift of the rest of the brain during development. As a result, the infundibulum of the hypothalamus is located beneath the rostral most edge of the optic chiasm, and the ventral hypothalamus extends further rostral than in most vertebrates (Figures 1A,B and 3). Despite this distortion in the hypothalamus, the preoptic recess and adjacent preoptic nuclei appear unaffected.

Fascicles of the lateral olfactory tract can still be seen more laterally within the neuropil of the ventral pallium. The ventral pallium can be divided into a periventricularly located cell plate and a more superficial molecular layer (Figures 1D,E), which can be clearly distinguished from the adjacent neuropil of the medial pallium. Throughout its rostrocaudal extent, the dorsal pallium can be distinguished from the lateral and medial pallia, because its cells are more scattered than those of the other two divisions (Figure 2G). Furthermore, even though the dorsal pallial cells are more scattered, their lamination is more distinct than that of cells in either the lateral pallium or the medial pallium. The cellular density of the medial pallium is at least twice that of the dorsal pallium (Figure 2G). The cells of the medial pallium also are larger and are embedded in a denser neuropil consisting of larger fibers (Figure 2H).
[Figure 2 legend fragment: scale bars in μm for panels (B,F,G) and 500 μm for (A,C); abbreviations: dp, dorsal pallium; lot, lateral olfactory tract; lp, lateral pallium; mot, medial olfactory tract; mp, medial pallium; tt, telencephalic tela.]
A parvocellular preoptic nucleus, composed of small densely packed neurons, can be traced dorsal to the optic chiasm, where it is replaced more caudally by the much larger cells of the magnocellular preoptic nucleus. More rostrally, a more scattered group of neurons (Figures 2I and 3B) occurs dorsal to the parvocellular preoptic nucleus. This cell group appears to be homologous to the central amygdalar nucleus of lungfishes and tetrapods (Figures 3B,C), which is also located adjacent to the lateral forebrain bundle and consists of the largest cells in the vicinity. More dorsally, the putative central amygdalar nucleus is replaced, in turn, by a group of smaller cells, which appears to be homologous to the medial amygdalar nucleus of lungfishes and tetrapods (Figures 3B,C). Although the topology and cytology of these nuclei in the Comoran coelacanth are consistent with the interpretation that they are homologs of amygdalar nuclei in other lobe-finned fishes, these hypotheses would be greatly supported by immunohistochemical data.
discussion
Our reinterpretation of the telencephalic cytoarchitectonics of the Comoran coelacanth generally agrees with the earlier studies of Nieuwenhuys and colleagues (Nieuwenhuys, 1965, 1998; Nieuwenhuys et al., 1977; Nieuwenhuys and Meek, 1990), but differs regarding four points: (1) the pallial-subpallial border and the recognition of a putative ventral pallium; (2) the organization of the striatum; (3) the borders of the medial pallium; and (4) the organization of the amygdala. Each of these points is discussed below.
pallial-subpallial border
Nieuwenhuys and colleagues recognized the lateral pallial-subpallial border as occurring just dorsal to the lateral recess of the lateral ventricle (Figure 1C), and they interpreted the ventrolateral wall as homologous to the striatum of other lobe-finned fishes.
Re-examination of the ventrolateral wall of the Comoran coelacanth suggests that the periventricular cellular plate that characterizes the striatum changes as it is traced dorsally and transforms to a cell group typical of the pallium, and we have therefore recognized it as a putative ventral pallium (Figure 1E). This interpretation is supported by new immunohistochemical data (Brox et al., 2003, 2004; Moreno et al., 2008; González and Northcutt, 2009) indicating that the pallium in the lobe-finned fish radiation (i.e., lobe-finned fishes and tetrapods) consists of not three but four pallial divisions, with the ventral pallium interposed between the lateral pallium and the striatum. As in other lobe-finned fishes, there is cytological evidence that the ventral pallium in the Comoran coelacanth may consist of dorsal and ventral subdivisions, which may be homologous to the dorsal ventricular ridge and lateral amygdalar nucleus, respectively, in amniotic vertebrates. This speculation, however, needs to be confirmed by molecular markers which have clearly delineated these cell groups in other lobe-finned fishes.
striatal organization
The striatopallidal system of tetrapods is now believed to comprise a dorsal system, the dorsal striatum and dorsal pallidum, and a ventral system, the nucleus accumbens plus the olfactory tubercle and a ventral pallidum (Heimer et al., 1995; Marín et al., 1998). Similar systems have recently been recognized in lungfishes (González and Northcutt, 2009; Northcutt, 2009), but it is still unclear whether or not separate dorsal and ventral pallidal cell groups exist. In any case, the striatum recognized by Nieuwenhuys and colleagues, and in the present analysis of the Comoran coelacanth, appears to be homologous to the dorsal striatum of other lobe-finned fishes. However, the ventral striatum (olfactory tubercle) recognized by Nieuwenhuys and colleagues at mid-hemispheric levels (Figure 1C) appears to be far too caudal to represent the ventral striatum of other lobe-finned fishes. Based on the topology of the cell group, and the presence of a distinct ventrolateral cellular prominence dividing the ventrolateral hemispheric wall, this cell group is more parsimoniously interpreted as the pallidum (Figure 1E), although it is presently unclear whether it represents both pallidal cell groups or only the dorsal pallidum of other lobe-finned fishes (this situation is very similar to that observed in the subpallium of urodeles, Moreno and González, 2007a). Once again, there are molecular markers, such as DARPP-32, that could clarify these issues.
Medial pallial borders
Nieuwenhuys (1965) recognized three pallial divisions in the Comoran coelacanth: his dorsomedial, ventromedial, and lateral pallia (Figure 1C). He noted that suspected secondary olfactory fibers, which he termed the central olfactory tract, penetrate the core of his ventromedial pallial division. Further, he concluded that the ventromedial pallium was probably homologous to the lateral pallium in other lobe-finned fishes (Nieuwenhuys, 1998). He concluded, however, that his dorsomedial and lateral pallial divisions could not be homologous to the dorsal and medial pallia in other vertebrates on topological grounds. We disagree with this conclusion. If the ependymal surface is followed dorsally, a second pallial division, which we interpret as a putative dorsal pallium, replaces the lateral pallium (Figure 1E). In turn, a putative dorsal pallium is replaced by a third pallial division, which we interpret as a putative medial pallium. As in other vertebrates, the telencephalic tela attaches at the pial-ependymal border (Figures 1C-E). Thus, topological relationships of the pallial divisions in the Comoran coelacanth are preserved along the ependymal surface. This is not the case, however, along the pial surface. In other vertebrates, the pallial divisions show a dorsoventral sequence from medial pallium, to dorsal pallium, to lateral pallium, to ventral pallium. In the Comoran coelacanth, the putative medial pallium borders the putative ventral and lateral pallia, as well as the putative dorsal pallium (Figure 1E).
This topological distortion could have occurred by an outward bending (eversion) of the developing pallial wall, coupled with a secondary fusion of the closely opposed pial surfaces of the lateral and medial pallia, as a secondary fusion of the caudal telencephalic wall with the rostral wall of the diencephalon is known to occur in hagfishes (Wicht and Northcutt, 1992). There is no histological trace of such an eversion and secondary fusion of the pallial walls in the Comoran coelacanth, however, and the simplest explanation is that the developing medial pallium has expanded rostrally and medially, displacing the putative lateral pallium and secondarily contacting the putative ventral pallium, where there is a distinct difference in the texture of the neuropils of the putative medial and ventral pallia (Figure 1E). We also differ from Nieuwenhuys and colleagues concerning the boundary between the putative dorsal and medial pallia (Figures 1C,E). Nieuwenhuys (1965) indicated that this boundary was marked by a cell-free zone, whereas we believe it is marked by a sharp change in cellular density (Figure 2G). It appears that Nieuwenhuys focused on the position of the olfactohabenular tract to mark the border between his lateral and dorsomedial pallial divisions (Figure 1C), whereas we have focused on cell density as an indicator of the putative dorsal-medial pallial border (Figure 1E). In either case, however, the putative medial pallium of the Comoran coelacanth occupies an extensive segment of the pallium, equal to or much larger than the lateral pallium. Although few details are known about the general biology of the Comoran coelacanth, it is well documented that these fish return to marine caves during the day and that the same individuals have been identified in the same caves for several years (Fricke et al., 1991), a behavior that presumably requires well-developed spatial learning. Highly developed spatial learning is documented in amphibians (Wells, 2007) and also appears to be well developed in the lepidosirenid lungfish Protopterus (Greenwood, 1987). Thus it is possible that an expanded medial pallium and its associated circuitry mediates spatial learning in coelacanths as in other lobe-finned fishes.
amygdalar organization
Following Rudebeck's (1945) description of the telencephalon impar in a lungfish, Protopterus, Nieuwenhuys (1965, 1998) divided the telencephalon impar of the Comoran coelacanth into an inferior preoptic nucleus and a superior preoptic nucleus. He further divided the inferior preoptic nucleus into a parvocellular nucleus and a magnocellular nucleus (Figure 3A). Like Rudebeck, Nieuwenhuys also concluded that the superior preoptic nucleus was probably homologous to the caudal part of the strio-amygdaloid complex in tetrapods. Studies of lungfishes (González and Northcutt, 2009; Northcutt, 2009; González et al., 2010) and tetrapods (Moreno and González, 2004, 2005, 2006, 2007a) have clearly established that the amygdalar complex consists of at least central, lateral, and medial nuclei.
The Comoran coelacanth also appears to exhibit this same pattern of amygdalar organization. As already noted, a putative ventral pallium can be recognized, and the ventral subdivision of this pallial division is most likely homologous to the lateral amygdalar nucleus in lungfishes and tetrapods. Based on cytology and topography, however, we believe that the nucleus identified as the magnocellular preoptic nucleus by Nieuwenhuys (Figure 3A) is actually homologous to the central amygdalar nucleus of lungfishes and tetrapods (Figures 3B,C). Another nucleus in the preoptic region of the Comoran coelacanth that is slightly more caudal and ventral appears to be a more likely candidate for a homolog of the magnocellular preoptic nucleus. This nucleus comprises larger, more compact cells that are located periventricularly, as in other lobe-finned fishes, and is thus a more likely homolog of the magnocellular preoptic nucleus than that recognized by Nieuwenhuys (1965, 1998).
We do agree with Rudebeck (1945) and Nieuwenhuys (1965, 1998) that the cell group recognized as a superior preoptic nucleus in lungfishes and the Comoran coelacanth is homologous to part of the amygdalar complex in tetrapods. Based on the topology of this nucleus in the Comoran coelacanth, it appears homologous to the medial amygdalar nucleus of lungfishes and tetrapods. It therefore appears that all extant lobe-finned fishes are characterized by the same pattern of amygdalar organization, which may have arisen even earlier in gnathostome phylogeny.
future directions
In spite of the relatively large numbers of coelacanths that have been captured, the quality of the brain histology presently available is clearly marginal. If the specimen on which this analysis is primarily based were captured today, rather than some 38 years ago, it would be possible to use the numerous molecular markers now available, and our knowledge of coelacanth brain organization and brain phylogeny in lobe-finned fishes would be greatly increased. Every attempt should be made by comparative neurobiologists to proselytize coelacanth researchers so that they understand the importance of properly fixing brain tissue from an available specimen. At the same time, preservation of coelacanth habitats and the protection of these fascinating fishes is uppermost. It would be criminal if humanity allowed these ancient fishes to become extinct in our lifetime.
acknowledgments
This work was supported by the U.S. National Science Foundation (1BN-0919077), by The Spanish Ministry of Sciences and Education (BFU2009-12315), and by private funding. | 2016-06-17T08:42:49.027Z | 2010-12-06T00:00:00.000 | {
"year": 2010,
"sha1": "540e6a31634d353f967b6216422de4d59dd0fca6",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fnana.2011.00009/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "540e6a31634d353f967b6216422de4d59dd0fca6",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
257770640 | pes2o/s2orc | v3-fos-license | Caregiving burden, depression, and anxiety among family caregivers of patients with cancer: An investigation of patient and caregiver factors
Background Caring for patients with cancer can result in significant burden, anxiety, and depression among family caregivers, leading to alterations in their mental and physical wellbeing. Evidence on the level of cancer caregivers' burden, depression, anxiety, their role in assisting their patients, and other patient and caregiver factors that play in improving/worsening the outcomes, is limited. This study explored the prevalence of caregiving burden, depression, and anxiety with a focus on the patient and caregiver-related factors among cancer family caregivers. Methods A cross-sectional study was conducted on the population of caregivers of adult patients with cancer in Zanjan, Iran between 2019 and 2020. The Beck Depression Inventory (BDI), the Beck Anxiety Inventory (BAI), and the Zarit Burden Inventory (ZBI) were used to measure outcome variables. Clinical and basic characteristics of the caregivers and patients were also collected. An independent samples t-test, analysis of variance, Pearson's correlation coefficient, and stepwise linear regression were performed using SPSS software version 26. Results Mean ± standard deviation age of the caregivers (167 men and 133 women) was 40.77 ± 12.56. Of the caregivers, 46.3, 53, and 30.7% showed severe depression, anxiety, and burden, respectively. There was a significant positive correlation between ZBI with both BDI [r(298) = 0.19, p < 0.01] and BAI [r(298) = 0.20, p < 0.01]. Caregiving ≥24 months (B = 14.36, p < 0.001), outpatient care setting (B = −12.90, p < 0.001), being retired (B = −12.90, p < 0.001), depression (B = 0.28, p < 0.001), supplemental health insurance (B = −7.79, p < 0.001), being illiterate (B = 7.77, p < 0.01), surgery (B = 8.55, p < 0.01), ECOG1 (B = 4.88, p < 0.01), and patient's age (B = 0.11, p < 0.05) were found to be significant predictors of caregiving burden. Conclusion High levels of depression, anxiety, and burden were observed among the caregivers of patients with cancer. These findings underline the importance of paying close attention to the needs and psychological challenges of this population.
Background
Chronic and non-communicable diseases are the most important challenges of the current century, leading to increased costs and adverse social effects on patients and communities worldwide (Karbaschi et al., 2015). Cancer is one of the fastestgrowing health issues in Iran and all across the globe, and is the second most common disease and the third leading cause of death following cardiovascular disease and accidents (Karbaschi et al., 2015). According to recent statistics, each year, more than 50 million cancer deaths occur throughout the world, and more importantly, 80% of these deaths occur in low-and middle-income countries (Akpan-Idiok et al., 2020).
Due to the increasing incidence of new cancer cases in recent years, the number of people undertaking cancer caregiver roles has been on the rise dramatically (Onyeneho and Ilesanmi, 2021). A remarkable proportion of caregivers are the family members of patients with cancer, who play a noteworthy role in assisting their patients to confront the harsh reality of cancer diagnosis and equip them with both practical and emotional support (Onyeneho and Ilesanmi, 2021). Following the diagnosis of cancer and initiating the treatment process, the patient's family members feel responsible for taking care of their patient. Being in the role of a family caregiver is not usually predictable and optional for the family members and somehow seems inevitable (Abbasi et al., 2013). Indeed, being the main source of support, family caregivers are supposed to play a considerable role in caring for their patients with cancer. The fact that one of their loved ones has to struggle with a terminal disease often disrupts the family routine and makes the family keep a balance between the demands of cancer trajectory and their routines (Coppetti et al., 2019). Furthermore, in light of the fact that the need for caring often arises suddenly and caregivers do not have sufficient prior guidance and preparation, physical and psychological changes may occur (Coppetti et al., 2019).
How undertaking caregiving responsibility may impact the psychological health of caregivers, and what its psychological outcomes are, have always been key questions to be addressed concerning the mental health of caregivers. Stress, anxiety, worry, depression, isolation, and anger are among the psychological outcomes of caregiving that have been examined in the existing literature (Given et al., 2012). Depression and anxiety are among the most frequent psychological consequences reported in previous studies, ranging from 52 to 94% among family caregivers (Thrush and Hyder, 2014). The level of such outcomes might be even higher among caregivers than patients themselves. For instance, it has been confirmed in a population of head and neck cancer and hematological cancer caregivers that they would even develop greater psychological distress than the general population and their patients (Caruso et al., 2017; Kassir et al., 2021). Existing evidence has pointed to the increased levels of stress and psychosocial distress among family caregivers when they have to maintain a balance between their professional careers and domestic duties (Gupta et al., 2022).
Moreover, assuming the role of caregiver imposes a great burden on caregivers, affecting different aspects of their life, including mental health, physical health, and financial status (Given et al., 2012). The fact that cancer care and treatment are costly adds to the financial challenges of caregivers. For example, among patients who are insured against medical costs, the out-of-pocket expenses, including deductibles, co-payments, and coinsurance, may be enormous under some health insurance plans. In addition, there are copayments required for different services involved in cancer care and treatment, such as healthcare and medications, nutritional supplements, and meals at the hospital, which can compound the financial burden (Xiang et al., 2022). In some cases, the unemployment of the patient and the caregiver aggravates these problems (Given et al., 2012). More importantly, the lack of social support and the unnecessarily stringent measures of insurance companies in releasing the payments for medical expenses might make the burden even greater (Cejalvo et al., 2021).
It has already been indicated that in caregivers of patients with cancer, there has been a significant positive correlation between caregiver burden and family distress index. High scores of family distress index, along with other factors such as patient gender and time since cancer diagnosis significantly predicted the burden imposed on the family caregivers (Mirsoleymani et al., 2017). Recent research on the burden on family caregivers of patients with cancer has yielded some intriguing results of the various factors contributing to caregiver burden. Some of these factors might be related to caregiver characteristics (e.g., age, gender, and relationship to the patient), while others may be related to patient characteristics (e.g., patient health status) and care-related activities (Maguire et al., 2018). It has been suggested that the factors such as worsening functional status of patients with cancer, younger age of the caregivers, being female, and longer caregiving durations are among the significant predictors of caregiving burden (Unsar et al., 2021).
A wide variety of factors might contribute to cancer caregiver burden, which is a multidimensional issue. Identifying such factors seems to be a key step for healthcare providers in managing the caregiving burden. However, there seems to be a paucity of data on the burden, anxiety, and depression among family caregivers of patients with cancer, as well as on the factors contributing to such outcomes, especially in low- and middle-income countries (Thrush and Hyder, 2014; Maguire et al., 2018). Therefore, we conducted a study with three objectives: to estimate the prevalence of caregiving burden, depression, and anxiety; to examine the relationships between these outcomes; and to identify probable predicting factors for the caregiving burden among family caregivers of patients with cancer.
Methods and materials
Study design and participants
This was a cross-sectional study, conducted on a population of family caregivers of adult (≥18 years) patients with cancer at Vali-e-Asr Hospital in Zanjan, Iran from July 2019 to February 2020. We included participants aged ≥18 years who were the main family caregivers (unpaid and informal) of the patients with cancer. We defined main caregivers as those who had been providing care for the patient for at least 6 months and who had the most involvement in giving care for the patient and assisting them to adapt and manage the disease. In fact, they had been helping the patient in day-to-day activities such as feeding, relocation, psychological support, and emotional support, in addition to communicating with the healthcare team in relation to the patient's condition and medication. Participants with a confirmed history of psychological or debilitating physical condition, as well as those unable to respond to the questionnaires, were excluded.
The study protocol was approved by the Ethics Committee of Zanjan University of Medical Sciences [IR.ZUMS.REC.1398.105]. Written informed consent was acquired from all participants after clarifying the purposes of the study. We adhered to the requirements of the Declaration of Helsinki.
Measurements
Outcome variables, which included the three variables of depression, anxiety, and caregiving burden, were measured by trained researchers using the Beck Depression Inventory (BDI-II), the Beck Anxiety Inventory (BAI), and the Zarit Burden Interview (ZBI), respectively. Furthermore, the data on variables such as gender, age, education, marital status, relationship to the patient under care, and duration of patient care as effect modifiers were collected using a questionnaire. In addition, patient-related data such as gender, stage of cancer, time since cancer diagnosis, care setting (inpatient or outpatient), type of treatment (radiation therapy or combination of radiation and chemotherapy), and Eastern Cooperative Oncology Group (ECOG) performance status were collected as confounding variables.
Beck depression inventory (BDI-II)
BDI-II consists of 21 items, each scored on a 4-point Likert scale from 0 to 3. Interpretation of the total score has been defined as 10-13 for minimal depression, 14-19 for mild depression, 20-28 for moderate depression, and 29-63 for severe depression. Previous studies on the psychometric properties of this questionnaire in various countries have shown excellent validity. Wang et al. (2013) reported a high internal consistency reliability (Cronbach's α coefficient = 0.91) and a test-retest reliability of 0.93. According to a study conducted in Iran on non-clinical and clinical samples, internal consistency coefficients were reported to be 0.90 and 0.89, respectively. Additionally, the test-retest reliability coefficient has been shown to be 0.94 for the non-clinical sample (Ghassemzadeh et al., 2005). In this study, the Persian version of the 21-item BDI-II indicated excellent internal consistency with a Cronbach's α coefficient of 0.92.
Beck anxiety inventory (BAI)
This is a 21-item Likert-scale questionnaire designed by Beck and Steer (1990) to measure the anxiety of adults and adolescents. The total score of the questionnaire ranges between 0 and 63 in which minor, mild, moderate, and severe anxiety are represented by total scores of 0-7, 8-15, 16-25, and 26-63, respectively. de Beurs et al. (1997) obtained a Cronbach's α coefficient of 0.93 and a 5-week test-retest reliability coefficient of 0.83 for this questionnaire. A study on the Iranian population has shown Cronbach's α coefficient of 0.92 and a test-retest reliability of 0.83 (Rafiei and Seifi, 2013). In this study, the Persian version of BAI-21 demonstrated excellent internal consistency with Cronbach's α coefficient of 0.94.
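To make the score interpretation above concrete, the minimal sketch below maps BDI-II and BAI totals onto the severity bands just described. It is illustrative only: the cutoffs are taken from the text, Python is used purely for illustration, and the handling of BDI-II totals below 10 (not specified above) is an assumption that lumps them into the minimal band.

```python
# Illustrative only: severity bands taken from the text above.
# Assumption: BDI-II totals below 10 (not described in the text) are
# treated as "minimal" here.

def bdi_severity(total: int) -> str:
    """Map a BDI-II total (0-63) to the severity band used in this study."""
    if total <= 13:
        return "minimal"
    if total <= 19:
        return "mild"
    if total <= 28:
        return "moderate"
    return "severe"

def bai_severity(total: int) -> str:
    """Map a BAI total (0-63) to the severity band used in this study."""
    if total <= 7:
        return "minor"
    if total <= 15:
        return "mild"
    if total <= 25:
        return "moderate"
    return "severe"

print(bdi_severity(23), bai_severity(31))  # -> moderate severe
```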
Zarit burden interview
The ZBI initially consisted of 29 items; a shorter 22-item version (ZBI-22) was later adopted, and, in 2001, brief versions with 12 and 4 questions were also designed. The ZBI-22 is rated on a 5-point Likert scale from 0 (never) to 4 (nearly always) for the first 21 items and from 0 (not at all) to 4 (extremely) for the last item (total score, 0-88). Higher scores specify a greater burden on caregivers. The ZBI-22 encompasses five domains: burden in the relationship (6 items), emotional wellbeing (7 items), social and family life (4 items), finances (1 item), and loss of control over one's life (4 items), and it is designed to measure the perceived effect of caregiving on the caregiver's physical health, emotional health, social activities, and financial status (Boluarte-Carbajal et al., 2022). In fact, it evaluates the respondent's subjective burden by asking questions such as "Do you feel ..." or "Do you wish ..." (Yu et al., 2020). The Cronbach's α coefficient of the ZBI-22 in caregivers of patients with cancer and dementia has been indicated to be in a range between 0.85 and 0.93 (Al-Rawashdeh et al., 2016). In this study, the Persian version of the 22-item ZBI was applied. Navidian et al. (2010) have reported a Cronbach's α coefficient of 0.91 and a test-retest reliability of 0.94 for the ZBI-22 among Iranian subjects. In this study, the Persian version of the ZBI showed good internal consistency with a Cronbach's α coefficient of 0.88.
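As a companion to the description above, here is a minimal ZBI-22 scoring sketch. The 0-4 item scale, the 0-88 total, and the domain sizes come from the text; the assignment of specific item numbers to domains is a placeholder assumption, since the text does not list which items belong to which domain.

```python
# Minimal ZBI-22 scoring sketch. Domain sizes (6, 7, 4, 1, 4) and the 0-4
# item scale follow the text; the item-index ranges below are placeholders,
# NOT the instrument's actual item-to-domain mapping.

ZBI_DOMAINS = {
    "burden_in_relationship": range(0, 6),
    "emotional_wellbeing": range(6, 13),
    "social_and_family_life": range(13, 17),
    "finances": range(17, 18),
    "loss_of_control": range(18, 22),
}

def zbi_score(items):
    """Return the ZBI-22 total (0-88) and per-domain sums for 22 item scores (0-4)."""
    if len(items) != 22 or any(not 0 <= x <= 4 for x in items):
        raise ValueError("expected 22 item scores, each between 0 and 4")
    total = sum(items)
    domains = {name: sum(items[i] for i in idx) for name, idx in ZBI_DOMAINS.items()}
    return total, domains

total, domains = zbi_score([2] * 22)
print(total, domains["finances"])  # -> 44 2
```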
Sample size
Regarding a caregiver burden prevalence of 87% (Mishra et al., 2021), the sample size (n) was calculated to be at least 216 participants using the following formula: n = Z² × p(1 − p) / d², in which prevalence p = 0.87, Z = 1.96, and margin of error d = 0.05. However, we included more participants in the study. The participants were chosen by convenience sampling.
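For reference, the stated formula can be evaluated directly as in the sketch below. With the stated inputs it yields roughly 174, so the quoted minimum of 216 presumably includes an additional allowance (for example, for anticipated attrition) not detailed in the text; in any case, the 300 caregivers actually enrolled exceed both figures.

```python
# Evaluate the proportion-based sample-size formula from the text:
# n = Z^2 * p * (1 - p) / d^2, with p = 0.87, Z = 1.96, d = 0.05.
import math

p, z, d = 0.87, 1.96, 0.05
n = (z ** 2) * p * (1 - p) / (d ** 2)
print(math.ceil(n))  # about 174 before any further adjustment (e.g., attrition)
```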
Statistical analysis
Data were analyzed using SPSS software version 26. Descriptive statistics were reported using mean ± standard deviation (SD) and frequency (%), as applicable. To test the normality of data distribution, we used Shapiro-Wilk's test and box plots. For the purpose of comparing two groups in terms of outcome variables, we applied an independent samples t-test. An analysis of variance (ANOVA) was performed to compare ≥3 groups considering outcome variables, with Tukey's honestly significant difference (HSD) post-hoc test if equal variances were assumed. In the instances where the assumption of equal variances was violated, Welch's ANOVA was used as an alternative, together with the Games-Howell post-hoc test. Pearson's correlation coefficient was used to examine the relationships between caregiving burden, depression, and anxiety. A stepwise linear regression analysis was done using dummy-coded variables to investigate the role of demographic and basic characteristics of caregivers/patients as predictive variables on the outcomes of caregiving burden, anxiety, and depression among the family caregivers. The level of significance was considered 0.05 (two-sided) for all statistical analyses.
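For readers who want to reproduce the same steps outside SPSS, a rough Python equivalent of this pipeline is sketched below. The column names (`zbi`, `bdi`, `bai`, `care_setting`, `education`) and the CSV file are hypothetical placeholders; only the sequence of tests mirrors the description above, and the Welch/Games-Howell branch is omitted for brevity.

```python
# Rough Python equivalent of the SPSS analyses described above.
# Column names and the input file are hypothetical placeholders.
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("caregivers.csv")

# Normality of the burden scores (Shapiro-Wilk)
print(stats.shapiro(df["zbi"]))

# Two-group comparison: independent-samples t-test (e.g., care setting)
inpatient = df.loc[df["care_setting"] == "inpatient", "zbi"]
outpatient = df.loc[df["care_setting"] == "outpatient", "zbi"]
print(stats.ttest_ind(inpatient, outpatient))

# Three or more groups: one-way ANOVA followed by Tukey's HSD
groups = [g["zbi"].to_numpy() for _, g in df.groupby("education")]
print(stats.f_oneway(*groups))
print(pairwise_tukeyhsd(df["zbi"], df["education"]))

# Correlations between burden, depression, and anxiety
print(stats.pearsonr(df["zbi"], df["bdi"]))
print(stats.pearsonr(df["zbi"], df["bai"]))
```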
Results
Basic characteristics of the participants
A total of 300 family caregivers were included in the study. Of these, 167 (55.7%) were males and 133 (44.3%) were females. The mean ± SD age of the caregivers was 40.77 ± 12.56 years. The majority of caregivers were offspring of the patients (148, 49.3%), married (239, 79.7%), and self-employed (81, 27.0%) (Table 1).
With regard to the basic characteristics of the patients, as is shown in Table 2, they had an average age of 52.94 ± 14.33 and most of them were women (164, 54.7%). Stomach (61, 20.3%), lung (55, 18.3%), and colorectal (41, 13.7%) cancers were the most prevalent among patients. Most of them were under chemotherapy (151, 50.3%) and under chemotherapy + radiation therapy (74, 24.7%). Public health insurance (118, 39.3%) was the most common type of insurance applied by the patients.
The mean BAI total score was found to be 31.49 ± 13.87. More than half of the caregivers showed severe anxiety (159, 53%) (Figure 2).
Then, we conducted an ANOVA test to examine the effects of participants' basic characteristics on the level of depression, anxiety, and burden among family caregivers. BDI from those who were either offspring or others to the patient (both, p < 0.001). The difference between the five age categories of the patients was found to be statistically significant regarding the level of depression. There was a significant difference between various types of patient treatments [F (5, 294) = 2.161, p = 0.022]. The Games-Howell post-hoc test comparisons uncovered that the mean BDI score for surgery (M = 21.06, SD = 9.91) was significantly lower than for the other treatments (all, p < 0.05).
The mean BDI score for patients with social security insurance was significantly higher than that among those who had Armed Forces Medical Services Insurance (21.37 vs. 31.38, p = 0.013). None of the other comparisons were significant.
None of the other variables showed a significant difference (all, p ≥ 0.05) ( Table 3). Different conditions of the "relationship to the patient" variable differed significantly in terms of BAI total score [F (4, 295) = 3.062, p = 0.017] although there were no significant differences in multiple comparisons using Tukey's HSD (all, p ≥ 0.05).
The level of anxiety was significantly different between the three conditions of "duration of caregiving" [F (2, 297) = 7.206, p = 0.001]. Being a caregiver for ≥24 months resulted in a significantly greater level of anxiety compared to the duration of 6-11 months (mean, 37.22 vs. 29.54, p = 0.001).
Surprisingly, caring for female patients was also associated with higher levels of anxiety compared to male patients, t(298) = −2.162, p = 0.031.
There was a significant difference between age categories of patients regarding the caregivers' level of anxiety (p = 0.036) so the Games-Howell post-hoc test revealed that caring for patients ≤30 years was significantly linked with higher levels of anxiety in comparison with patients ≥61 years (mean, 37.89 vs. 29.33, p = 0.043).
The mean score of total BAI showed a significant difference between different types of cancer [F (7, 292)]. Time since diagnosis showed a significant effect on the level of anxiety for the three conditions [F (2, 297) = 10.383, p < 0.001]. Caring for patients with ≥24 months since their diagnosis was significantly related to higher levels of anxiety in contrast to those with either 6-11 months (M = 29.69, SD = 13.59) or 12-23 months (M = 29.16, SD = 13.55).
The mean total score of BAI was significantly different between different types of treatment (p = 0.003), so the Games-Howell post-hoc test demonstrated that being treated by surgery was associated with a lower level of anxiety among caregivers compared to chemotherapy + hormone (p = 0.043) and chemotherapy + radiation therapy (p = 0.001).
Caregivers also experienced significantly different levels of anxiety based on the types of health insurance, even though multiple comparisons were not significant (all, p ≥ 0.05).
None of the other variables showed a significant difference (all, p ≥ 0.05) ( Table 4).
The total score of ZBI was significantly different between age groups of caregivers [F (4, 295) = 7.280, p < 0.001]. Being a caregiver ≥61 years was significantly associated with higher burden compared to those aged ≤30 (p = 0.001), 31-40 (p = 0.001), and 41-50 (p < 0.001) years. The age group of 51-60 years also showed a significant difference. The total score of ZBI significantly differed among caregivers' levels of education [F (7, 292) = 2.955, p = 0.005], so that being illiterate was significantly linked with higher levels of burden compared with those with junior high school (p = 0.007), high school diploma (HSD) (p = 0.005), and master of science (MSc) (p = 0.018) degrees.
Regarding employment status, the total score of ZBI was significantly different among different conditions (p < 0.001). A Games-Howell post-hoc test revealed that being a retired caregiver was significantly related to a greater burden as opposed to being government-employed (p < 0.001), self-employed (p < 0.001), and unemployed (p < 0.001).
There was a significant association between the duration of caregiving and ZBI [F (2, 297) = 24.564, p < 0.001]. Being in the role of a caregiver for ≥24 months was significantly related to higher levels of burden compared to 12-23 (p < 0.001) and 6-11 months (p < 0.001).
Similarly, time since the diagnosis also showed a significant effect on the level of burden [F (2, 297) = 20.217, p < 0.001]. Caregiving for a patient diagnosed ≥24 months was associated with higher levels of burden compared to 12-23 (p = 0.001) and 6-11 months (p < 0.001).
Caregivers of inpatients experienced significantly more levels of burden in comparison with outpatients [t(298) = 5.924, p < 0.001].
We found a significant effect of health insurance type on the level of burden (p < 0.001). The Games-Howell posthoc test revealed that caregivers of patients with supplemental insurance experienced lower levels of burden compared to those with public health (p = 0.001) and social security (p < 0.001) insurance.
No significant associations were found between other basic characteristics of the participants and ZBI (all, p ≥ 0.05) ( Table 5).
Predicting factors for caregiver burden
A stepwise linear regression was performed to predict caregiver burden based on the variables of basic characteristics, depression, and anxiety of the participants. Regression analysis resulted in nine significant models (Table 7).
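SPSS's stepwise procedure is not reproduced verbatim here, but the sketch below shows one rough Python analogue: a simple forward-selection loop over dummy-coded predictors using ordinary least squares, with an entry criterion of p < 0.05 and no removal step. The DataFrame `X` (dummy-coded predictors) and Series `y` (ZBI totals) are assumed inputs and placeholders, not outputs of this study.

```python
# Rough forward-selection analogue of a stepwise linear regression.
# `X` (dummy-coded predictors) and `y` (ZBI totals) are assumed inputs.
import statsmodels.api as sm

def forward_select(X, y, alpha=0.05):
    """Greedily add the predictor with the smallest p-value until none pass alpha."""
    selected, remaining = [], list(X.columns)
    while remaining:
        pvalues = {}
        for candidate in remaining:
            fit = sm.OLS(y, sm.add_constant(X[selected + [candidate]])).fit()
            pvalues[candidate] = fit.pvalues[candidate]
        best = min(pvalues, key=pvalues.get)
        if pvalues[best] >= alpha:
            break
        selected.append(best)
        remaining.remove(best)
    return sm.OLS(y, sm.add_constant(X[selected])).fit()

# final_model = forward_select(X, y)
# print(final_model.summary())   # coefficients (B), 95% CIs, and R-squared
```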
The results of the first model were found to be statistically significant (p < 0.001), suggesting that duration of caregiving (≥24 months) is a significant predictor of caregiver burden. According to the R²-value (R² = 0.12) associated with this model, the duration of caregiving (≥24 months) accounts for 12% of the variation in caregiver burden, which means that 88% of the variation in the burden cannot be explained by the duration of caregiving (≥24 months) alone. The regression coefficient [B = 16.06, 95% confidence interval (CI): 11.32-20.81, p < 0.001] associated with duration of caregiving showed that caregiving for patients ≥24 months resulted in 16.06 units more burden than either 6-11 or 12-23 months.
The second model was also statistically significant (p < 0.001), for which care setting (outpatient) was added to the analysis.
The R²-value (R² = 0.22) associated with this model indicates that the addition of care setting to the first model accounts for 22% of the variation in caregiver burden, which means that 78% of the variation in the burden cannot be explained by the duration of caregiving and care setting alone. Controlling for care setting, the regression coefficient (B = 15.92, 95% CI: 11.46-20.38, p < 0.001) associated with duration of caregiving demonstrated that caregiving for patients ≥24 months resulted in 15.92 units more burden than either 6-11 or 12-23 months. Controlling for the duration of caregiving, the regression coefficient (B = −11.45, 95% CI: −15.02 to −7.88, p < 0.001) associated with care setting revealed that caregivers of outpatients experienced 11.45 units lower levels of burden than those of inpatients. Table 7 shows all nine regression models in detail. We tested the data to explore whether the assumption of collinearity was met. The results indicated that multicollinearity was not a concern (caregiving ≥24 months, tolerance = 0.98, variance inflation factor (VIF) = 1.01; outpatient care setting, tolerance = 0.87, VIF = 1.14; being retired, tolerance = 0.76, VIF = 1.30; depression, tolerance = 0.94, VIF = 1.05; supplemental health insurance, tolerance = 0.93, VIF = 1.06; being illiterate, tolerance = 0.75, VIF = 1.32; surgery, tolerance = 0.85, VIF = 1.16; ECOG1, tolerance = 0.93, VIF = 1.07; and patients' age, tolerance = 0.94, VIF = 1.05).
The data also met the assumption of independent errors (Durbin-Watson value = 1.92).
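The collinearity and independence checks reported above can be reproduced with standard statsmodels utilities, as sketched below; `final_model` and `X_selected` are assumed to be the fitted OLS result and the retained dummy-coded predictors from the selection step, and the variable names are placeholders.

```python
# Collinearity (VIF / tolerance) and independent-errors (Durbin-Watson) checks.
# `final_model` and `X_selected` are assumed from the preceding regression step.
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.stats.stattools import durbin_watson

design = sm.add_constant(X_selected)
for i, name in enumerate(design.columns):
    if name == "const":
        continue
    vif = variance_inflation_factor(design.values, i)
    print(f"{name}: VIF = {vif:.2f}, tolerance = {1 / vif:.2f}")

print(f"Durbin-Watson = {durbin_watson(final_model.resid):.2f}")
```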
Discussion
In this study, three key results were established. First, our findings demonstrated a noticeable prevalence of depression, anxiety, and high burden among family caregivers. Second, the caregiving burden was positively correlated with both depression and anxiety. Finally, nine significant variables were suggested for predicting the caregiving burden.
With regard to psychological consequences, a considerable number of cancer caregivers have been found to screen positive for anxiety and depression (Sklenarova et al., 2015). As studied by Götze et al. (2018), a significant proportion of cancer caregivers showed severe symptoms of anxiety (32%) and depression (29%). According to a systematic review, the prevalence of depression and anxiety among the population of cancer caregivers was found to be 42.30 and 46.56%, respectively (Geng et al., 2018). In the present study, about half of the caregivers showed severe anxiety (53%) and severe depression (46.3%).
This notable prevalence of psychological consequences can be caused by the challenges family caregivers have to face and the painful realities they should accept. One qualitative study has inferred that the major worry of the family caregivers was the gradual weariness of their patients and the fact that they are on the edge of their impending death (Taleghani et al., 2021). In fact, the family caregivers are likely to suffer immense psychological distress from the time of cancer diagnosis to the last moment of their patient's life. However, it has been stated that the anxiety and depression of cancer caregivers who were grieved at losing their loved one had lessened significantly although extreme depressive symptoms remained among 25.0% of the bereaved (Sklenarova et al., 2015). It is worth bearing in mind that emotional distress and unmet needs of the patients might vary significantly between different countries. It has been demonstrated that in comparison with their Western counterparts, Asian patients may develop more critical symptoms of emotional stress, which can negatively affect the caregivers' emotional health and quality of life (Lim et al., 2017).
Nipp et al. have reported that younger age, female gender, being married to the patient, and greater depression were significantly related to higher levels of anxiety among family caregivers of patients with cancer, which is in line with the findings of this study. They also have shown that nearly half of the family caregivers represented high rates of depression and anxiety (Nipp et al., 2016).
In this study, with regard to a cutoff score of 21, 83% of the caregivers were screened positive for burden, 30.7% of whom were in the severe category. Alsirafy et al. conducted a study on a population of family caregivers of incurable patients with cancer from Egypt and Saudi Arabia. In line with this study, they also assessed the caregiving burden using ZBI-22, and reported a 58.7% prevalence for significant caregiving burden (Alsirafy et al., 2021).
In a Nigerian study, only 4.4% of caregivers have found to be in a severe category, while most of the caregivers showed mild burden (44.5%). They have explained that a possible cause for such findings could be the embarrassment of caregivers to express their real burden due to their relationship with the patients (Onyeneho and Ilesanmi, 2021).
The positive correlation between depression and caregiving burden, as is proved in this study, has been demonstrated in a number of previous studies (Adelman et al., 2014;Seo and Park, 2019;Ahmad Zubaidi et al., 2020;Fang et al., 2022).
As discussed in a Malaysian study (Ahmad Zubaidi et al., 2020), among a population of informal caregivers in a palliative care unit, only half of the population were reported to experience caregiving burden, most of whom were in the mild-to-moderate burden category. Having symptoms of depression and anxiety along with being male, highly educated, caring for patients with cancer, and having long hours of caregiving were significant predictors of caregiving burden. In fact, caregivers with symptoms of depression and anxiety were 3 times more likely to bear the burden of caregiving. Conversely, caring for patients who do not have cancer has been associated with less likelihood of carrying the burden of caregiving. Such findings authenticate more challenges that caregivers may face in caring for patients with cancer compared to other chronic diseases.
In contrast to the aforementioned study, a review has indicated that female sex and low educational level are significant risk factors for caregiving burden, which is consistent with the finding of this study. Long durations of caregiving, depression, social isolation, financial stress, and lack of choice have also been proposed as other significant predicting factors of caregiving burden (Adelman et al., 2014). From the authors' point of view, the significant effect of educational level on the burden of caregiving could be explained in two ways. On the one hand, caregivers with higher levels of education may be properly informed of the prognosis of the disease and the challenges their patients would be facing through the cancer trajectory, which can result in an emotional burden on caregivers. On the other hand, caregivers with lower levels of education, to be specific, being illiterate or at the primary school level, may cause difficulties for the caregivers in terms of being actively involved in the process of cancer diagnosis and treatment and maintaining effective communication with healthcare centers and insurance companies.
In an Indian study, 70.22% of the cancer caregivers reported mild-to-moderate burden and 21.38% reported moderate-to-severe burden. They have indicated that the level of burden does not differ significantly according to marital status, education level, caregiver age group, and type of relationship to the patient even though they found gender (male) and employment status (unemployment) two significant factors associated with the high burden (Mishra et al., 2021).
In this study, the univariate analysis revealed some caregiver-related factors to be significantly associated with greater burden, including female sex, older age, lower educational level, being retired, and quitting work to provide care, as well as longer durations of caregiving. These findings have profound implications for healthcare providers and clinicians, who should take these influencing factors into consideration to manage the burden of cancer family caregivers.
In an Iranian population of cancer family caregivers, the prevalence of high caregiving burden was reported to be 48.1%. In the aforementioned study, four predicting variables were suggested: being a spouse, caring for a male patient, being dissatisfied with the family's monthly income, and early cancer diagnosis (<1 month) (Mirsoleymani et al., 2017). However, in this study, although the gender of the patients was not significantly associated with the caregiving burden, caring for female patients was related to significantly higher levels of anxiety.
A study has concluded that, in comparison with more objective disease-related factors, e.g., stage of cancer, patient health-related quality of life seems to be more significantly influential on the burden perceived by family caregivers (Maguire et al., 2018). For instance, in terms of depression and anxiety, we showed that caregiving for patients who underwent surgery was related to significantly lower levels of these outcomes. One possible explanation for this observation might be linked to the patients' quality of life and psychological distress, which can have a significant impact on their caregivers' quality of life and psychological wellbeing. The findings of a recent systematic review support this explanation; it reveals that in a population of patients with small renal masses, undergoing active surveillance (AS) vs. surgery may be related to a significant reduction in total scores of the Short Form-12 at enrollment and at the end of each follow-up period (2 and 3 years) (Vartolomei et al., 2022). However, the existing evidence is in line with the fact that both disease-related and patient health-related quality of life factors might be crucial in predicting a higher burden among family caregivers (Thrush and Hyder, 2014; Mirsoleymani et al., 2017).
Limitations
This study has some limitations. First, due to the cross-sectional nature of the study, further analysis to secure causal inferences was not possible. Second, we did not evaluate the validity of the instruments used to measure the outcome variables in this study. Third, this study was conducted using convenience sampling; therefore, its findings cannot be generalized. Finally, we did not include participants who had been caring for their patients for periods of <6 months.
We strongly recommend conducting longitudinal studies and clinical trials considering a wide variety of participants in terms of cultural, economic, and social differences between countries.
Conclusion
This study demonstrated a high prevalence of burden, anxiety, and depression among family caregivers of patients with cancer. Additionally, nine predicting factors for caregiving burden were found. Healthcare policymakers and clinicians should take these factors into consideration to take timely and effective measures aimed at managing the burden and relieving psychological distress among cancer caregivers. Caregivers' wellbeing and welfare should be given close and thoughtful attention by healthcare providers. Neglecting caregivers' needs and burdens may hinder them in looking after their patients, which, over a long period of time, could result in low quality of life and ill health among both patients and caregivers. From the authors' point of view, all data related to the main caregivers of patients with cancer should be documented electronically. By means of these data, the caregivers of the patients can then be periodically monitored by social workers and psychologists in terms of caregiving burden, psychological consequences, and unmet needs.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Ethics statement
The studies involving human participants were reviewed and approved by the Ethics Committee of Zanjan University of Medical Sciences [IR.ZUMS.REC.1398.105]. The patients/participants provided their written informed consent to participate in this study. | 2023-03-28T21:02:21.240Z | 2023-03-28T00:00:00.000 | {
"year": 2023,
"sha1": "fe85b5414351fbc44c8e0391e8ef96829fa5e282",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "fe85b5414351fbc44c8e0391e8ef96829fa5e282",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
230000311 | pes2o/s2orc | v3-fos-license | Comparative Evaluation Of The Effect Of Curing Lights On The Microleakage Of Posterior Resin Composite Restoration – An Invitro Study
Introduction: The primary goal of successful restorative treatment is the effective replacement of lost tooth structure and maintenance of the integrity of the restoration. The success of resin composite restorations depends on many factors, including the degree of moisture control, the effects of shrinkage during polymerization, and how well the resin is cured. The purpose of this study was to evaluate the effect of two LED curing units on microleakage of posterior composite resins. Methods: For determination of microleakage, standardized MO or DO box cavities were prepared on 50 extracted human premolar teeth, which were divided into 5 groups. The control group was only acid etched, but adhesive was not applied. All other groups were etched with 37% phosphoric acid for 15 seconds, rinsed for 30 seconds with water and blot dried; adhesive was applied and light cured. Results: Filtek™ Bulk Fill composite cured with the Valo curing light exhibited the least microleakage compared to all other groups. Conclusion: The study showed that the control group as well as the other groups exhibited microleakage, but Filtek™ Bulk Fill composite resin showed less microleakage than Tetric N-Ceram.
Introduction
Resin composite is the most commonly used direct tooth-coloured restorative material. Composite resins have gained popularity because of the increasing demand for esthetic restorations.1
For success and longevity of esthetic restorations, it is important that the restoration has a perfect seal.2,3 Despite improvements in materials and techniques for light-cured composites, polymerization shrinkage has remained a problem.4,5
There are many parameters that influence the degree of polymerization of composite resins, such as their composition, shade and translucency, the characteristics of the light-curing unit used, the rate of curing, and the duration of photopolymerization.6,7
One of the reasons for failure of a composite resin is insufficient polymerization. Compromised mechanical characteristics such as reduced hardness, microleakage, and secondary caries are the consequences of poor curing, which leads to failure of composite restorations.7
The standard equipment used for polymerizing composite resins is the conventional quartz tungsten halogen (QTH) light-curing unit (LCU). The limitations of these lights are degradation of the bulb and the filter and reduction of light intensity, all of which may lead to incomplete polymerization. Light-emitting diode (LED) LCUs that produce blue light have been advocated for curing dental materials. LEDs produce less heat, hence a cooling fan is not required. The other advantages of LEDs are that they are small in size and cordless, and they can operate for thousands of hours with a constant light output in power and spectrum.8
The study was done to evaluate the effect of two different LED curing units on the microleakage of posterior composite resins. On the selected teeth, standardized Class II (MO or DO) box cavities were prepared with the following dimensions: gingival seat width 1.5 mm (mesio-distal) and 2.5 mm (bucco-lingual), and depth of 1.5 mm. The preparations were made with a No. 245 carbide bur under copious water coolant with the help of a high-speed airotor handpiece. The control group, comprising 10 teeth, was only acid etched with 37% phosphoric acid for 15 seconds, rinsed with water for 15 seconds, and the excess water was removed with blotting paper, leaving a glistening hydrated surface. The other 40 teeth were acid etched, followed by application of Adper Single Bond adhesive to the etched enamel and dentin, which was then light cured. The restorations were then finished and polished, and the specimens were washed under running tap water for 2 minutes, stored in distilled water at 37 degrees Celsius for 2 weeks, and then thermocycled for 1500 cycles between 5 and 55 degrees Celsius with a dwell time of 30 seconds, prior to testing for microleakage.
GROUPS FOR MICROLEAKAGE STUDY
The apices of the samples were sealed with sticky wax, and the teeth were then painted with 2 coats of varnish, except for the restoration and 1 mm around the gingival margins, and air dried. They were then immersed in 0.5% methylene blue for 24 hours. After removal from the dye, the samples were cleaned under running tap water for 2 minutes and were sectioned mesio-distally through the centre of the restoration with a water-cooled diamond disk to obtain two sections from each tooth. Dye penetration was examined (in both halves) at the gingival margins using a stereomicroscope under 10X magnification.
Dye penetration was evaluated at the tooth-restoration interface based on the scoring criteria given below:
1. Dye penetration less than half the length of the gingival floor.
2. Dye penetration greater than half, up to the whole length of the gingival floor.
3. Dye penetration the whole length of the gingival floor plus up to half of the axial wall.
4. Dye penetration the whole length of the gingival floor plus greater than half the axial wall, and existence of lateral microleakage at dentinal tubules.
RESULTS
Microleakage studies are the most common method of detecting the causes that result in bond failure along the tooth-restoration interface. There are many methods for detecting marginal leakage, and the organic dye method was chosen for this study because of its simplicity and extensive use in the literature. 0.5% basic fuchsin, 2% methylene blue and 50% silver nitrate are the routinely used dyes. The advantages of the dye penetration assay are, first, that no reactive chemicals or radiation are used and, second, that different dye solutions are available; the technique is therefore highly feasible and easily reproducible [10].
In the present study, the groups were restored with two different bulk fill composite resins, Filtek™ Bulk Fill and Tetric N-Ceram Bulk Fill. Table 1 shows the comparison of stereomicroscopic results between the different groups: sub-group IA exhibited microleakage with a mean value of 6.4, followed by sub-group IB with a mean value of 11.10.
Sub-groups IIA and IIB exhibited values of 12.70 and 11.80, but on comparison with all other groups the difference was found to be non-significant, with a p value of 0.294.
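The paper does not state which statistical test produced this p value. The following is a minimal sketch, with invented scores, of the kind of nonparametric comparison commonly applied to ordinal dye-penetration scores across several groups; the group sizes and values are hypothetical and only illustrate the calculation.

```python
# Minimal sketch (not the authors' analysis): comparing ordinal microleakage
# scores (1-4) across sub-groups with a Kruskal-Wallis H-test. All scores are
# invented for illustration only.
from scipy import stats

groups = {
    "IA":  [1, 1, 2, 1, 1, 2, 1, 1, 2, 1],   # hypothetical scores, 10 teeth each
    "IB":  [2, 2, 1, 3, 2, 2, 3, 2, 2, 1],
    "IIA": [2, 3, 2, 3, 2, 2, 3, 3, 2, 2],
    "IIB": [2, 2, 3, 2, 3, 2, 2, 3, 2, 2],
}

h_stat, p_value = stats.kruskal(*groups.values())
print(f"H = {h_stat:.2f}, p = {p_value:.3f}")  # p > 0.05 would mean no significant difference
```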
DISCUSSION
Resin-based composites were first developed in the early 1960s and provided materials with higher mechanical properties than acrylics and silicates.
Polymerization shrinkage is the biggest disadvantage of composite materials; it is responsible for the formation of internal stresses in the material, leakage between the filling and the walls of the cavity, and post-treatment sensitivity [9]. 3M ESPE Filtek™ Bulk Fill posterior restorative material is a visible-light-activated restorative. The sub-groups were Filtek™ cured using Valo (sub-group IA), Tetric N-Ceram cured using Valo (sub-group IB), Filtek™ cured using the Bluephase LED curing unit (sub-group IIA), and Tetric N-Ceram cured using Bluephase (sub-group IIB). In the present study, Filtek™ showed superior properties to Tetric N-Ceram, and this could be attributed to the Valo LED light curing system, which has been shown to have better penetration depth.
Conclusion
Within the limitations of the methodology followed and the procedures performed, the following conclusions were drawn from this study: both bulk fill resin composite materials, Filtek™ and Tetric N-Ceram Bulk Fill, exhibited microleakage.
There was significantly less microleakage for Filtek™, followed by Tetric N-Ceram. | 2021-01-02T10:05:03.866Z | 2020-06-10T00:00:00.000 | {
"year": 2020,
"sha1": "c372a97ad4f458122e1880d2e2facc82aad49dd7",
"oa_license": "CCBY",
"oa_url": "https://jmdr-idea.com/download-article.php?Article_Unique_Id=SRSJ29&Full_Text_Pdf_Download=True",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "5a44894ce4a33c82d0e32fbd3dcff17f97704d24",
"s2fieldsofstudy": [
"Materials Science",
"Medicine"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
15316671 | pes2o/s2orc | v3-fos-license | Major Phytochemical as γ-Sitosterol Disclosing and Toxicity Testing in Lagerstroemia Species
Medicinal plants in the genus Lagerstroemia were investigated for phytochemical contents by GC-MS and HPLC with ethanol and hexane extracts, and for their toxicity by the MTT and comet assays on human peripheral blood mononuclear cells (PBMCs). γ-Sitosterol is the major component found in all species at 14.70–34.44%. All of the extracts, except for the L. speciosa ethanol extract, showed high percentages of cell viability. The IC50 value, 0.24 mg/mL, of the ethanol L. speciosa extract predicted an LD50 of 811.78 mg/kg, which belongs to WHO Class III of toxic chemicals. However, in-depth toxicity evaluation by the comet assay showed that the four tested species induced significant (p < 0.05) DNA damage in PBMCs. γ-Sitosterol was previously reported to possess antihyperglycemic activity by increasing insulin secretion in response to glucose. Nonetheless, consumers should consider its toxicity, and the amount of consumption should be of concern.
Introduction
Lagerstroemia species, including L. speciosa, L. loudonii, L. indica, L. villosa, and L. floribunda, are used worldwide as medicinal and ornamental plants. L. indica extract has been used for treating allergic diseases such as asthma due to its anti-inflammatory properties [1,2], for its analgesic, antihyperglycemic, and antioxidant hepatoprotective effects [1], and for its antidiabetic activity owing to the corosolic acid it contains [3]. L. speciosa and L. loudonii have also been reported for their chemical constituents [4,5]. L. speciosa leaf extract containing corosolic acid as an active compound has been reported for diabetes treatment [6,7]. The hypoglycemic effects of L. speciosa have been attributed to both corosolic acid and ellagitannins [8]. Current knowledge on the phytochemicals and pharmacology of L. speciosa has regarded it as a natural antidiabetes product, whose leaves contain triterpenes, tannins, ellagic acids, glycosides, and flavones [9].
Remarkably, out of all of the natural products for diabetes treatment, L. speciosa is registered as one of the 170 medicinal plants in Thailand listed by Ministry of Public Health announcements. However, given the diverse growth factors and environments in each area of the country, its chemical constituents should be clarified and its toxicity tested, at both the cytotoxicity and genotoxicity levels. Therefore, this research focuses on the information described above and includes the following four species: L. speciosa, L. indica, L. loudonii, and L. villosa.
Plant Materials.
Leaves of Lagerstroemia speciosa, L. indica, L. loudonii, and L. villosa were collected and used to make crude extracts with hexane and ethanol. Further studies were then performed on phytochemical analysis by gas chromatography-mass spectrometry (GC-MS) and high performance liquid chromatography (HPLC), cytotoxicity by the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay, and genotoxicity by the comet assay.
Phytochemical Extracts.
The samples were rinsed with water and air-dried until the water had evaporated from the leaves. A 20 g sample was then ground into a powder and mixed with 120 mL hexane or ethanol (analytical grade), separately, for 72 h. Samples were filtered through filter paper at room temperature, and the filtrates in this step were subjected to GC-MS analysis. For further experiments with the remaining filtrates, the solvents were evaporated with a rotary evaporator (Rotavapor R-210, Buchi, Switzerland) at 800-1,000 mbar, 15 °C, and 600 rpm for 2 h. Dark green, thick, viscous crude extracts were obtained. Dimethyl sulfoxide (DMSO) was added to the extracts until they were completely dissolved, and the stock extracts were maintained at −20 °C until the cytotoxicity and genotoxicity experiments were conducted.
Analysis of the Plant Extract Component by GC-MS.
The analysis was performed using an Agilent Technologies GC 6890N/5973 inert mass spectrometer fitted with a fused silica capillary column (30.0 m × 250 μm × 0.25 μm). Helium gas was used as the carrier at a constant flow rate of 1 mL/min. The injection and mass-transfer line temperature was set at 280 °C. The oven temperature was programmed from 70 °C to 120 °C at 3 °C/min, held isothermally for 2 min, and then raised to 270 °C at 5 °C/min. A 1 μL aliquot of the crude extract was injected in split mode. The relative percentage of the crude constituents was expressed as a percentage using peak area normalization. Component identification was determined by comparing the obtained mass spectra with the reference compounds in the Wiley 7N.1 library.
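As a small illustration of the peak-area normalization step described above, the sketch below computes relative percentages from a handful of invented peak areas; the compound names and area values are hypothetical and are not taken from the paper's chromatograms.

```python
# Minimal sketch (illustrative only): peak-area normalization of GC-MS results.
# Each component's relative percentage is its peak area divided by the total
# integrated area of all peaks. The areas below are made up.
peak_areas = {
    "gamma-sitosterol":    4.2e6,
    "squalene":            9.5e5,
    "phytol":              6.1e5,
    "n-hexadecanoic acid": 7.8e5,
}

total_area = sum(peak_areas.values())
relative_pct = {name: 100.0 * area / total_area for name, area in peak_areas.items()}

for name, pct in sorted(relative_pct.items(), key=lambda kv: -kv[1]):
    print(f"{name:>22s}: {pct:5.2f} %")
```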
Analysis of the Plant Extract Component by HPLC.
The amount of corosolic acid from L. speciosa (1 mg, Sigma Aldrich) was weighed and dissolved in 1 mL of ethanol for the standard solution. The contents of corosolic acid in the crude extracts were determined by HPLC, using an Agilent Technologies 1260 Infinity system, by comparison with the standard. A Hypersil ODS C18 column, 4.0 × 250 mm, 5 μm (Agilent), was used. The detection wavelength was 210 nm. The mobile phase consisted of two solvents: 0.1% phosphoric acid (A) and acetonitrile (B). The gradient elution was carried out with acetonitrile from 55% to 100% (0-35 min). The flow rate was 1 mL/min, and 10 μL of the sample was injected.
Isolation of Peripheral Blood Mononuclear Cells (PBMCs). PBMCs were isolated from sodium heparin anticoagulated venous blood from a blood bank using Ficoll-Paque Plus (GE Healthcare), as recommended. Freshly isolated PBMCs with a viability of at least 98% were used for the toxicity testing. The cells were suspended at a concentration of 10^6 cells/mL in modified RPMI-1640 medium, with 2 mM L-glutamine and 25 mM HEPES, supplemented with 10% FBS, 5 μg/mL phytohemagglutinin (PHA), 100 μg/mL streptomycin, and 100 U/mL penicillin.
Cell Preparations, Extract Treatments, and the MTT Assay for Cytotoxicity Testing.
Upon testing, the primary crude extract concentrations were serially 10-fold diluted with water to give five levels of working concentrations. The prepared cells were seeded in 96-well plates, 125 μL per well. A further 12.5 μL of the appropriate extract working concentration was added to the corresponding wells in triplicate. The cells were incubated for 4 h in a humidified CO2 incubator at 37 °C and 5% CO2. Corresponding DMSO concentrations were similarly prepared as vehicle controls. Untreated cells were used as a negative control, whereas the positive control cells were treated with UV light for 20 min.
At the end of the treatment, the plates were centrifuged at 1,500 rpm for 10 min and the medium was removed by pipetting. MTT (Sigma, USA) was added to a final concentration of 0.5 mg/mL in a volume of 10 μL per well. The plates were then wrapped with aluminum foil and incubated for 4 h at 37 °C. After the formazan crystals were solubilized by adding 100 μL DMSO to each well, the plates were left in the dark for 2-4 h. The absorbance was read at 570 nm with a microtiter plate spectrophotometer (fluorescence microplate reader; SpectraMax M5 series, Molecular Devices). Wells containing medium and MTT without cells were used as blanks. Each concentration treatment was performed in triplicate. All values were expressed as the mean ± S.D. Cellular reduction of the tetrazolium salt, 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT), formed violet formazan crystals through the mitochondrial succinate dehydrogenase activity of viable cells, and the violet formazan crystals were quantified following the methods of Freshney [10]. The percentage of cell viability was calculated using the equation cell viability (%) = average viability of treated cells/average viability of negative control cells × 100, to reveal the cytotoxicity of the plant extracts. The dose inducing 50% inhibition of cell viability (IC50 value) was determined by plotting a graph of the extract concentration against the cell viability. The IC50 value was used for the LD50 calculation [11] to derive hazard levels, according to the World Health Organization [12].
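A minimal sketch of the viability and IC50 calculation described above is given below; the absorbance values, concentrations, blank subtraction and log-scale interpolation are assumptions for illustration and are not taken from the paper.

```python
# Minimal sketch (illustrative only): percentage cell viability from MTT
# absorbances and a rough IC50 estimate by interpolation on a log dose scale.
# All absorbances and concentrations below are invented.
import numpy as np

a570_blank = 0.05                      # medium + MTT, no cells
a570_negative_control = 0.92           # untreated cells
concentrations = np.array([0.024, 0.24, 2.4, 24.0, 240.0])   # mg/mL, hypothetical
a570_treated = np.array([0.88, 0.47, 0.21, 0.12, 0.09])

viability = 100.0 * (a570_treated - a570_blank) / (a570_negative_control - a570_blank)

# Interpolate the concentration at 50% viability; viability decreases with dose,
# so negate it to obtain an increasing x-axis for np.interp.
log_conc = np.log10(concentrations)
ic50 = 10 ** np.interp(-50.0, -viability, log_conc)

print("Viability (%):", np.round(viability, 1))
print(f"Estimated IC50 ~ {ic50:.2f} mg/mL")
```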
Genotoxicity Assay by the Comet Assay.
The cells were treated as in the MTT assay, with concentrations at the IC50 value or at the maximum treated concentration in cases where no IC50 value was detected. The alkaline comet assay was used to assess the genotoxicity of the plant extracts, according to the method previously described by Singh et al. [13]. Briefly, the electrophoresis buffer consisted of 0.3 M NaOH and 1 mM EDTA (pH = 10). The power was supplied at a constant 3.4 V/cm with an adjustment to 300 mA, for 25 min. To quantify the level of DNA damage, the extent of DNA migration was defined using the "Olive Tail Moment" (OTM), which is the relative amount of DNA in the tail of the comet multiplied by the median migration distance. The comets were observed at 200× magnification, and images were obtained using an image analysis system (Isis) attached to a fluorescence microscope (Nikon, Japan) equipped with a 560 nm excitation filter, a 590 nm barrier filter, and a CCD video camera (PCO, Germany). At least 150 cells (50 cells for each of triplicate slides) were examined for each experiment. The CASP software (Wroclaw, Poland) was used to analyze the OTM. The negative control was untreated cells, and the positive control was UV-treated cells. All experiments were in triplicate, and the triplicate cultures were scored for each experiment. All values were expressed as the mean ± S.D. The nonparametric Mann-Whitney U test was used for statistical analysis of the comet assay results; statistical significance was set at p < 0.05.
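The sketch below illustrates the OTM definition used above (relative amount of DNA in the tail multiplied by the migration distance) and the Mann-Whitney U comparison against untreated cells; all per-cell values are invented, and the exact outputs of the CASP software are not reproduced here.

```python
# Minimal sketch (illustrative only): Olive Tail Moment per comet and a
# Mann-Whitney U test of treated vs. untreated cells. Values are invented.
from scipy import stats

def olive_tail_moment(pct_dna_in_tail: float, migration_distance: float) -> float:
    """OTM = relative amount of DNA in the tail multiplied by the migration distance."""
    return (pct_dna_in_tail / 100.0) * migration_distance

treated = [olive_tail_moment(p, d) for p, d in
           [(22, 18), (30, 25), (15, 12), (28, 22), (35, 30), (26, 21)]]
control = [olive_tail_moment(p, d) for p, d in
           [(4, 3), (6, 5), (3, 2), (5, 4), (7, 6), (4, 4)]]

u_stat, p_value = stats.mannwhitneyu(treated, control, alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.4f}")  # p < 0.05 -> significant DNA damage
```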
Results
Phytochemical analysis of the filtrates from the ethanol and hexane crude extracts (Figures 1 and 2) of the four studied samples, L. speciosa, L. indica, L. loudonii, and L. villosa, revealed several substances, with some major components present in higher amounts than others (Figure 3). There is an IC50 value, 0.24 mg/mL, for the ethanol L. speciosa extract, which corresponds to an LD50 of 811.78 mg/kg.
Because the ethanol L. speciosa extract and the ethanol and hexane L. indica, L. loudonii, and L. villosa extracts showed no IC50 values and high percentages of cell viability, the first highest 10-fold diluted concentration of each extract was selected for the comet assay (Figure 4).
Discussion
Since the announcement that L. speciosa and L. indica contain corosolic acid, which is used in the prevention and treatment of type 2 diabetes [3,6-9], the species studied here have been widely used in both prepared and traditional forms worldwide. Conversely, this research found a large amount of γ-sitosterol (14.7-34.4%) in all four of the studied species.
Through GC-MS, supported by HPLC, an absence or only a small amount (0.002-0.07 mg/mL) of corosolic acid was detected. The quantity found leads to the assumption that corosolic acid may not be a factor in the treatment of diabetes. Currently, γ-sitosterol, an epimer of β-sitosterol, has been reported to possess antihyperglycemic activity by increasing insulin secretion in response to glucose, as confirmed by immunohistochemical study of the pancreas [14,15]. Additionally, Sundarraj et al. [16] demonstrated in vitro results that support the ethnomedical use of γ-sitosterol against cancer through growth inhibition and cell cycle arrest leading to apoptosis of cancer cells, in accord with Endrini et al. [17], who showed that γ-sitosterol was cytotoxic against colon and liver cancer cell lines and that this effect was mediated by downregulation of c-myc expression and induction of the apoptotic pathways. Studies in the many plant species where γ-sitosterol is found, such as Girardinia heterophylla [18] and Lippia nodiflora [14], agree with the results for the four studied Lagerstroemia species, with the highest level found in L. speciosa, followed by the level in L. indica. The other substances present in small amounts were phytol, (Z)-9-octadecenamide (oleamide), squalene, n-hexadecanoic acid, linolenic acid, octacosane, tetratriacontane, and tocopherol, most of which are beneficial in humans. For example, oleamide is a protective agent against scopolamine-induced memory loss and has been suggested as useful as a chemopreventive agent against Alzheimer's disease [19]; it also induces deep sleep [20] and the upregulation of appetite [21,22]. Squalene is a triterpene necessary for life. In the human body, it is a natural and essential component used for the syntheses of cholesterol, steroid hormones, and vitamin D. It may also be an anticancer substance, as it possesses chemopreventive activity [23,24]. Phytol is a diterpene alcohol that can be used as a precursor for the manufacture of synthetic forms of vitamin E [25] and vitamin K1 [26] and is used in the fragrance industry and in cosmetics, shampoos, toilet soaps, household cleaners, and detergents. Its worldwide use has been estimated at approximately 0.1-1.0 metric tons per year [27]. Hexadecanoic acid, or palmitic acid, and linolenic acid are types of fatty acids. Octacosane is an alkane, which has been used as a lubricant, transformer oil, and anticorrosion agent, and is a component of paraffin. Each phytochemical has specific functions, although these may not yet be fully known. Therefore, to assess the total substance contents for safe human usage without toxicity, further experiments were carried out at the cytotoxicity and genotoxicity levels.
The extract mass showed a higher concentration with the ethanol solvent than with hexane in all four studied species (Table 3). This can be explained by the fact that polar phytochemicals dissolve more easily in ethanol, which is a more polar substance than the hydrocarbon hexane, which belongs to the nonpolar group. The vehicle control (DMSO) was run for every tested concentration, and it was demonstrated that DMSO does not induce cell death at the highest tested concentration (10%) in PBMCs, so the effects mentioned above can only be attributed to the plant extracts' bioactive compounds (data not shown). Therefore, it was not a surprise that an IC50 with cytotoxicity appeared in the ethanol L. speciosa extract, but not in the hexane extract, when the same species was studied.
The MTT assay led to an LD50 of 811.78 mg/kg. The extrapolated data on the predicted LD50 dose demonstrated that all tested compounds of L. speciosa belong to WHO Class III (over 500 mg/kg body weight, oral), the slightly hazardous category of toxic chemicals. For the evaluation of toxicity, a person of 50 kg body weight would have to consume a dose of approximately 25,000 mg to reach this level. However, consumers should note the further toxicity revealed by the in-depth comet assay. The first highest 10-fold diluted concentration of each extract was selected for the comet assay for the following reasons: firstly, to have the concentration nearest to that at which the plant parts are usually consumed by humans and, secondly, to avoid using more than a 10% DMSO stock concentration (giving a final concentration of 1%), so as to avoid affecting the cells.
The results showed that, compared to the negative control (untreated cells), the four tested species induced significant DNA damage in PBMCs (p < 0.05). Untreated cells for the negative control appeared as spherical nucleoids with no DNA migration. In the case of the positive control (UV-treated cells), a gradual increase in strand breaks was evident, represented as cells with a long tail of DNA streaming out from the nucleoid, forming a comet-like appearance (Figure 4). | 2018-04-03T04:29:14.282Z | 2017-01-16T00:00:00.000 | {
"year": 2017,
"sha1": "8759d34109bdbb9c71ad20944bd04927048f9000",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/ecam/2017/7209851.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "085c4b63b0b32157a403d3ed152cbff9b48bf2e9",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
257751595 | pes2o/s2orc | v3-fos-license | Translation Technology and Ethical Competence: An Analysis and Proposal for Translators’ Training
The practice of translation today is inextricably linked to the use of technology, and this is reflected in how translator training is conceptualized, with technologies present in every area of such training. More and more authors have begun to voice their concerns about the ethical issues posed by the use of technology and artificial intelligence systems, and our focus here is to ask whether such concerns are being reflected in pedagogical models and teaching programs in the field of translation. To this end, we analyze a variety of translation and translation technology (TT) competence models, together with a review of the literature on ethics, and a corpus analysis of TT syllabi to explore the different sub-competences addressed in these. The analysis reveals that ethical competence is not specifically addressed in TT classes, or at least it is not reflected in our corpus. The literature review also illustrates a dearth of specific competence models for TT classes, as well as a lack of pedagogical interventions to develop ethical sub-competence, something we aim to address by developing a series of new models and tools. We conclude that the inclusion of ethical issues in the TT classroom is still far from widespread, despite it being a necessary step towards enabling new generations to act critically and professionally.
Introduction
In February 2022, Meta Platforms, a multinational technology conglomerate formerly known as Facebook, which also owns Instagram and the messaging app WhatsApp, announced its intention to build "a universal speech translation" system which would allow people from all around the world to communicate in their preferred language. The accompanying narrative argued that such a technology was needed to limit the widespread use of majority languages such as English, Spanish and Mandarin in favor of other less-used languages, and two new projects were announced, "No Language Left Behind", which would be able to learn any language, and a "Universal Speech Translator". The creators of these envisaged a world of mutual multilingual understanding, "a superpower people have dreamed of forever", as Mark Zuckerberg (2022) himself described it.
This example well illustrates how prevalent translation technologies (TT) have become in our day-to-day activities, and how communication and languages are at the center of our current lives and the "metaverse". Professional translators, and scholars in the fields of Linguistics, Translation and Interpreting, Language Acquisition, etc., have been forced to reflect on the inevitable rise in language and TT. Whereas attitudes towards new technologies have ranged from acknowledging their usefulness to considering them "inconvenient" (Corpas Pastor et al. 2015), they are now an unavoidable part of how translation, and more broadly communication, works. It therefore comes as no surprise that the ethical implications posed by the use of technology here have been a concern since the early 2000s (Topping 2000; Pym 2001), and indeed even somewhat earlier (Melby and Warner 1995). For a number of years, these concerns were accepted as the price to be paid for the advantages of technology; yet attention has also been drawn to broader ethical issues that arise. Examples of such issues are the sharing and commoditization of translation resources and the privacy and confidentiality of data (Bowker 2020), copyright issues (Moorkens and Lewis 2019), data extractivism (Paullada 2020), the risk of using technologies in safety-critical domains (Canfora and Ottmann 2020; O'Mathúna et al. 2019; Hunt et al. 2019), environmental sustainability (Cronin 2017), the ethics of algorithms (Tsamados et al. 2021), and gender bias in translation data (Savoldi et al. 2021; Prates et al. 2020).
Whereas we recognize that such attention in the academic and professional worlds is definitely a step forward, we also believe that a need currently exists for the introduction of ethics in the translation curriculum in general, and, more specifically, in the TT classroom.
This paper takes an analytical approach to the ethical questions that arise when using TT and proposes a model to integrate ethics into the training of translators when they learn how to work with technology. The first section of the article offers a state of the art on competences in translator training, providing an overview of general competence models, i.e., a set of desired competencies for specific translation tasks, as well as a survey of competence models in TT. We then review the literature on ethics in translation generally, before moving onto ethical issues in TT in particular, performing an analysis of a corpus of syllabi to explore how this sub-competence is addressed in the TT classroom, thus offering a classification of topics and areas found in these syllabi. The final section then offers a proposal to integrate ethics into the TT classroom, as well as an updated model for technological competence that includes four sub-competences. The study ends with some conclusions regarding the validity and future implementations of our pedagogical and new technological competence models.
Competence Models in Translation
Globalization has led to an increase in the demand for translation services to ensure that communication flows between different economic agents.Thus, commercial and technical documents, websites, software, mobile applications, etc., all need to be translated into the languages of the market in which a product or service is to be sold or provided.The volume of texts, number of words and languages translated is continually increasing due to ever-increasing technological progress and the context of globalized markets.In light of the constant changes in these markets, the incorporation of technological innovations into the translator's workstation, and the implications here in terms of the teaching-learning process, it is worth considering whether current curricula adequately deal not only with the technological needs of the translation industry, but also with the repercussions that the use of these technologies may have in the professional sphere.Hence, two key concepts are of particular importance.The first of these is 'competence', that is, a "dynamic combination of cognitive and metacognitive skills, demonstration of knowledge and understanding, interpersonal, intellectual and practical skills, and ethical values" (Lokhoff et al. 2010); the second is 'professionalization' (Wikipedia Contributors 2021), involving the "social process by which any trade or occupation transforms itself into a true profession of the highest integrity and competence".
In Europe, a competence-based approach is now fully established in tertiary education. Within the European Higher Education Area, the term "competence" is associated with the process of the transparent harmonization of university degrees and directly links university education not only to the free movement of students, but also to the professional world. In translation, "competence" is the term used to denote the knowledge, skills, and attitudes necessary for translating. As Tao (2012, p. 291) notes, translation competence includes the epistemological context (what?), the practice (how?) and "the ability to reflect" (knowing why).
The competences and skills that translation students need to acquire in order to develop a successful career in the industry have been widely addressed in the Translation Studies literature (Neubert 2000; PACTE Group 2001, 2005; Kiraly 2000, 2013; Kelly 2002; Göpferich 2009; Tao 2012; Krajcso 2018; Rodríguez de Céspedes 2019; Szabó 2020, among many others). As a consequence of this, several multi-componential models of translation competence have been developed. One of the first and most frequently cited was developed by the PACTE Group (2001) and consists of five sub-competences: bilingual sub-competence, extra-linguistic sub-competence, translation knowledge sub-competence, instrumental sub-competence, and strategic sub-competence. A further set of psychophysiological components has also been added to the model.
Another model, known as the "competence wheel", was designed by the European Master's in Translation (EMT) expert group in 2009 and initially included six categories: language competence, intercultural competence, info-mining competence, technological competence, thematic competence, and translation service provision competence.In 2017, it was updated, reducing these to five: language and culture, service provision, technology, translation, and personal and interpersonal.The EMT model is no longer presented as a wheel, but as a mechanism composed of cogs that represent the movement, synchrony, and smooth functioning of all competences as a whole.Göpferich (2009) also developed an influential model which contains a more complex arrangement of competences using a wheel and a base composed of three elements (translation norms and translation assignment, translator's self-concept/professional ethos, and translator's psychophysical disposition) and which, in addition to those competences mentioned in the previous models, also includes motivation competence, psychomotor competence and translation routine activation competence.
The final approach we will review here is that of Kiraly (2000), whose constructivist model introduces an interesting point of difference between translation competence and translator competence, with the latter focusing on developing skills, acquiring knowledge, and adopting attitudes for the practical demands of real professional life, as opposed to those implied in translation competence, which are more linguistic in nature. The professional orientation of Kiraly's socio-constructivist model has been taken up widely in translation studies, and a variety of initiatives have emerged to bridge the gap between what is taught at university and what is required by the market. In this sense, Krajcso (2018, p. 692) cites not only publications by various well-known translation scholars, but also forums (such as the Translating Europe Forum), expert groups (such as the EMT) and European research projects such as Optimale, TransCert, and eTransFair, all of which address the issue of translation competence. Krajcso (2018) concludes that these are all quite similar and revolve around three core competences: (a) translation (language, culture, info-mining, specialization); (b) technological (including terminological competence); and (c) operational (project and quality management); as such, he notes that they are not, in fact, very different from previous models. Krajcso (2018) also presents a comparison of how these initiatives reflect market demands, using data provided by the results of various surveys (OPTIMALE, EUATC, CIUTI and EMT). He highlights two issues that seem particularly relevant for the present article. The first is that more emphasis still needs to be placed on the acquisition of technological skills in translation studies, considering the inseparable relationship between the activity of translation and the various ever-evolving technological tools used therein. Indeed, in an analysis of the models of translation competence, Pym (2013, p. 490) observes that technology cannot be considered simply another component in a model, since this would mean that we would always "lag behind both technology and the market". Krajcso's second conclusion (2018, p. 705) relates to the need for research on "operational competence", noting a dearth of research that addresses issues such as marketing, risk management, legal aspects, and ethical issues in the profession. These latter questions will be addressed below.
Competences in the Translation Technology Classroom
It is undeniable that the practice of translation today cannot be separated from technology, since professionals have moved from using programs as simple as word processors to software capable of translating computer programs and integrating neural machine translation engines (Google Translate, DeepL, etc.) in computer-assisted translation (CAT) systems. In this sense, translation has long been characterized as having an essentially human-machine form (O'Brien 2012) or, in terms of the professionals involved, a form of translator-machine interaction (Vargas-Sierra 2020). It is not only essential to teach translation practice in digital contexts in the classroom; there are also underlying theoretical issues here, arising specifically from the impact of technology on translation. O'Hagan (2013) has referred to the "technological turn" that the translation sector has adopted as a result of the advent of CAT tools and, more recently, neural machine translation engines; it has had specific consequences in translation studies, and these should certainly be taken into account in the development of theories and teaching-learning models. Over the last 20 years, the evolution of information and communication technologies (ICT) and their incorporation into the classroom, whether virtually or face-to-face, have brought about great changes in the teaching of translation at the tertiary level. Translator-training practices in CAT environments have led to a process of professionalization in the curriculum, one in which technology is key in the training of translators; as noted above, however, it is still necessary to develop specific technological competences in order to bridge the gap between the skills acquired by students and those demanded by the market (Calvo 2009; Vargas-Sierra and Ramírez-Polo 2011; Krajcso 2018; Rodríguez de Céspedes 2019).
The acquisition of technological competence, also known as instrumental or technical competence, is embraced in all models and approaches, such as those mentioned above.The PACTE Group (2009), for example, consider "instrumental competence" as a key element here, defining it as the "procedural knowledge related to the use of documentation resources and information and communication technologies applied to translation".Meanwhile, the "tool and research competence" described in Göpferich's (2009, p. 21) translation competence model involves "the ability to use conventional and electronic tools specific to translation"; in line with PACTE, Göpferich considers this to be one of the three sub-competences that are critical in a translator's competence.A more up-to-date version of what this competence currently entails can be found in the European Master's in Translation (2017, p. 9) framework, in the sense that this approach includes "all the knowledge and skills used to implement present and future translation technologies within the translation process.It also includes basic knowledge of machine translation technologies and the ability to implement machine translation according to potential needs".
For the acquisition of technological competence, we suggested (Vargas-Sierra and Ramírez-Polo 2011) a model called "Training Web Interaction and Translation Technologies" (TWITT), designed with the aim of contributing to the teaching of ICT in translation, and including the use of educational platforms, the Internet, and social networks. It aims to develop various skills, these arising from instilling a collaborative attitude in the classroom context, with students engaging in self-directed learning, and developing an acceptable level of the kinds of professional and technological skills useful for the various roles that translators can play within the translation process. In this model, students of translation use various ICTs to collaborate with each other by sharing materials, information, and knowledge, all of this yielding advantages in terms of both learning and social interaction during the course, which, in turn, positively impact motivation, in that students feel that they are taking an active role in their own learning. The model has a ladder form (Figure 1), and two types of competences are shown within each step: on the left, those related to technological knowledge, documentation and information management and access; and, on the right, those related to translation in electronic collaborative learning environments. A recent model that explores the development of technological competence is proposed by He and Tao (2022). Their translation technological thinking competence (TTTC) model comprises four levels: translation technological awareness, translation technological learning, translation technological application and sharing, and translation technological evaluation and creation (p. 352). This model shows the results of implementing technological competence through the knowing-acting translation curriculum (KATC), an approach based on the unity of knowing and acting philosophy of Wang Yangming. Such knowing, it is claimed, emerges through action (p. 553).
As we have seen in the above review, translation competence models attempt to encompass the overall translation process, and technologies are just one component of that process. However, it is our belief, and that of other authors (see, for example, Pym 2013; Krajcso 2018; Rodríguez de Céspedes 2019; He and Tao 2022), that this sub-competence should be more prominent and further developed in the context of current realities. Despite the increasing complexity of the translation workstation due to the incorporation of multiple technological innovations, and the central character that technological competence has assumed in the present digital climate, research on specific models or approaches devoted exclusively to developing technological competence is still scarce, and more research is needed to understand how a teaching-learning environment should be shaped to respond to real-life professional needs.
Ethics in Translation and the Ethical (Sub)competence
Though it is not the goal of this article to enter into a discussion on the issues of ethics in translation, we do feel that it is appropriate to explore some basic notions here in order to develop the notion of ethical competence in translation technology.Traditionally, ethical issues in translation typically revolve around the relationship between the translator, the text, and the other participants in the translation process: the author and the reader (Alwazna 2014).Two influential writers who address ethical questions relat-ing to translation are Chesterman and Pym, with many of their discussions focusing on issues of equivalence, faithfulness, the concept of value (Chesterman 1995, p. 147) and the practice of maintaining the meaning of the source text undistorted (Robinson 2003, p. 25).Chesterman (2001) summarizes the myriad perspectives on ethics and translation in four complementary yet contradictory models: ethics of representation, which highlights issues of faithfulness, fidelity, accuracy regarding the source author's intentions, alterity and true representation of the other; ethics of service, which aligns with theories of functionalism and compliance with the translation brief, with the concept of "loyalty" at its core; ethics of communication, with an emphasis on communicating with others and the translator as a "mediator working to achieve cross-cultural understanding"; and norm-based ethics, which entails behaving according to the norms of a certain time and culture about what an acceptable translation product should look like, the notion of trust being the cornerstone of this model.Chesterman proposes a new model for translation ethics with the notion of professional practice and "commitment" at its core.
Audi (1995, p. 244, in Drugan and Megone 2011, p. 188) points out that ethics "can be subdivided into the general study of goodness, the general study of right action, applied ethics, metaethics, moral psychology, and the metaphysics of moral responsibility"; in turn, Nagel (2006, p. 379, in Drugan and Megone 2011, p. 188) expresses the view that "Ethics is the branch of philosophy that tries to understand a familiar type of evaluation: the moral evaluation of people's character traits, their conduct, and their institutions. We speak of good and bad people, the morally right and wrong thing to do, just or unjust regimes, and how we should live". It follows from these two definitions that different strands of ethics can be identified, with the most relevant for the professional world being "the general study of right action" or "the morally right and wrong thing to do".
Furthermore, many associations of professional translators have codes of ethics that reflect this approach to ethical issues, focusing mainly on aspects such as the obligation of the translator to be accurate and faithful, that is, on "how to translate" (Pym 2001), or on issues of professional conduct and excellence.From the pedagogical point of view, this is reflected in most translation teaching programs, which typically offer either some type of content in general translation classes relating to issues of fidelity, loyalty or the ethical position of translators with regard to texts, or indeed have specific courses on Deontology, where professional and ethical aspects are discussed.
In terms of the competence models analyzed, as well as the ethical notions presented therein, it is clear that different models include ethical competence, ethical sub-competence, or even an ethical "element" or disposition in some way or another.In the case of the PACTE model, this is found under the concept of psychophysiological competence, where "critical spirit" could be seen as comparable to "ethical thinking", in that the "substantive conception of ethics as a critically reflective morality aimed at identifying, examining, and addressing practical problems" (Borstner and Gartner 2014, p. 15).For Göpferich (2009, p. 21), ethics is embedded in one of the elements that determines the development of the other competences, namely "the translator's self-concept/professional ethos", where aspects of social responsibility are addressed; Tao (2012, p. 295) specifically includes "professionalism: ethical issues" as an element of professional competence.O'Brien (2012, p. 202) understands "professional responsibility" as a personal competence, in contrast to translation and social competences, where "etiquette" covers issues about how to deal properly with clients and other stakeholders in a translation project.Finally, Massey and Kiraly (2021, p. 243) have recently presented a multi-vortex model of translation competence development based on Dreyfus' "Five-stage model of the Mental Activities Involved in Directed Skill Acquisition", in which personal and interpersonal dispositions are said to embrace ethical behavior.
Thus, an ethical (sub)competence can be seen as an essential component in all professional translation activity, one that distinguishes those who simply have a certain set of skills from those who have acquired a true sense of professionalism. As such, ethical competence can reasonably be expected of any professional translator.
Ethics in Translation Technology
As we have seen, ethical considerations in translation are by no means new.Nevertheless, technology has not been addressed widely in either theoretical or professional approaches to translation.For instance, Chesterman (2001, p. 147) mentions that a translator's skill set should include a series of competences embracing "adequate technical and research skills in order to discover and evaluate possible alternatives".Some professional codes of ethics mentioned above refer to TT in relation to the commitment translators make in striving for excellence.This is the case with covenant 2 of the ASETRAD code of ethics (Spanish Association of Translators, Editors and Interpreters), which states that professionals should "have access to information sources, reference materials, as well as know the tools of the profession".The ATA code of ethics does not explicitly mention TT, but its fourth covenant includes the principle "to enhance those capabilities at every opportunity by continuing education in language, subject field, and professional practice", with technologies being part of the means through which translator's capabilities can be enhanced.However, as Bowker notes, this "seeming absence of technology-related guidance in professional associations' codes of ethics" poses problems relating to professional identity (Bowker 2020, p. 269).
Finally, resources such as UNESCO's (2022, p. 95) Recommendations on the Ethics of Artificial Intelligence also mention TT in terms of cultural policies: Member States are encouraged to examine and address the cultural impact of AI systems, especially natural language processing (NLP) applications such as automated translation and voice assistants, on the nuances of human language and expression.Such assessments should provide input for the design and implementation of strategies that maximize the benefits from these systems by bridging cultural gaps and increasing human understanding, as well as addressing the negative implications such as the reduction of use, which could lead to the disappearance of endangered languages, local dialects, and tonal and cultural variations associated with human language and expression.
This recommendation thus attempts to balance the potential benefits (bridging cultural gaps and increasing human understanding) against the negative outcomes (reduction in the use of certain languages; the disappearance of endangered languages, local dialects, and tonal and cultural variations).
Although some early reflections on ethical issues and machine translation (MT) were voiced in the 1990s by authors such as Melby and Warner (1995), the late 2000s saw a growing number of authors raising concerns about ethical issues that the use of TT has not only created, but de facto exacerbated, in that "many of these questions about ethical aspects of new technologies are difficult to separate from broader sociocultural issues" (Drugan 2019, p. 250).Bowker (2020) offers a very comprehensive overview of the ethical issues raised here in the literature and elsewhere.She classifies the different concerns into six main "core issues and topics": the sharing and commoditization of translation resources; privacy and confidentiality of data; fidelity and collaboration; professional identity, autonomy, and job satisfaction; productivity, time, and money; and cultural hegemony versus the linguistic diversity paradox.She adds some further emerging areas of concern: social responsibility and teaching ethics on TT courses; MT in literary translation; the funding of MT research; and computer-aided interpreting.Similarly, Moorkens (2022) reviews various ethical issues raised by MT: data use, ownership, permissions, distribution, and privacy; the ethics of how MT is evaluated; the use of MT in professional workflows; sustainability issues relating both to working conditions and to environmental concerns; and diversity among developers and users, as well as how this is reflected in terms of MT output.Moorkens also includes a final remark on how computers are acquiring a certain kind of "agency" and how an ethical bias can indeed be implicit in their design.
Other recent contributions on the topic include a Special Issue of Translation Spaces in 2020.In one of the articles therein, by Canfora and Ottmann (2020), the authors classify the risks of neural machine translation (NMT) into three levels: first, possible damage in safety-critical domains if a NMT contains errors; second, issues of liability in cases of damage; and third, cyber risks such as data breaches or loss, especially when online and cloud-based solutions are used.Canfora and Ottmann advocate for good risk management and the implementation of sustainable workflows.They also recommend a high level of MT literacy (Bowker and Ciro 2019) among translation and interpreting graduates and, by extension, anyone who works in the industry.Elsewhere, Kenny and Winters (2020) argue that MT influences not only the authorial voice in literary texts, but also the textual voice of the translator.It has also been claimed by Do Carmo (2020) that, contrary to industry narratives that frame post-editing as a simple and time-saving task, it is in fact a complex undertaking, one which involves intensive decision-making processes.This in turn implies the need for a wider recognition of translators as "specialized knowledge workers".
Currently, translators and translation producers are paying the price for this depreciation of the value of their work, which, in turn, underlines the importance of reconceptualizing the value of this labor not only in terms of time, but also in relation to other dimensions involved in the complexity of the role of the translator.Moorkens (2020) explains that the tendency towards localization in the translation industry, breaking large projects down into small tasks as a means of maximizing efficiency, can be seen as a new form a Taylorism, which he calls "Digital Taylorism".He considers translation here in relation to job satisfaction and sustainable work systems, highlighting the need to train future translators in ethical issues related to technologies and the industry, as they will become the gatekeepers and decision makers "about work practices and data harvesting that will impact many other stakeholders within the translation industry" (Moorkens 2020, p. 27).Nurminen and Koponen (2020) discuss fairness and ethics in MT, focusing on the use of MT for humanitarian purposes as well as to increase accessibility to information for underserved groups.These include ethical issues such as "quality, acceptability, and the need to involve stakeholders in development."Finally, Kenny et al. (2020) deal with general ethical questions of MT and ethics.Particularly interesting here is the notion of how MT is approached, namely as a reductionist idea that seeks to "eliminate" alterity and to equate one language to another without all the nuances each language brings, with English always as the guiding reference.Hunt et al. (2019, p. 28) bring a new perspective to ethical considerations in the context of humanitarian crises, focusing on the health sector.They include topics related to accuracy, privacy and security, inequalities, respect for individuals and communities, relationship protection, and managing expectations.Their view is complemented by the Ethics Recommendations for Crisis Translation Settings of the INTERACT Research Project (O'Mathúna et al. 2019), where questions are raised such as difficulties in access to technologies due to either affordability or because insufficient language resources are available to build reliable systems; this, in turn, can intensify economic differences between communities.Some other considerations that these authors discuss are the cost of data, internet connectivity, bandwidth, phone types and access to SIM cards, operating systems, and access to technology through social media.
Analysis of the Syllabi of Translation Technologies Courses
In an attempt to update the TWITT model and to include the latest research, we carried out an analysis of the syllabi for those translation technology courses that form part of undergraduate degrees in Translation and Interpreting at various universities in a number of countries.Our aim was to reveal which specific competences these courses sought to develop, and to establish a catalog of these as a means of identifying which competences figured most frequently, and whether ethical issues were included.We analyzed a total of 30 TT courses from Argentina, Belgium, Canada, France, Hong Kong, Ireland, Jordan, Lithuania, Saudi Arabia, Spain, Switzerland, Turkey, the UK, and the USA.The selection was based on the availability on the web of the full syllabus in English, French or Spanish, using mainly Google as a search engine and performing searches that contained the words "translation technology" OR "computer-assisted translation" either in English, French, or Spanish.We are aware that the sample may be small; however, we believe that it will provide some useful initial insights into the issue, as well as being quite heterogeneous in terms of the countries involved.The following table (see Table 1) shows the university name where a syllabus was found, its country, and the subject name.Through a careful reading of each TT syllabus, we extracted each objective and assigned each one a tag so that statistics on frequencies could be generated and patterns identified.In Table 2, the columns show the tag used for each objective, its meaning or scope, the objective or goal that a course was intended to achieve, and the specific university where it was found.In order to observe whether the objectives of TT courses were common to all universities studied, as well as their level of cohesion and which objectives predominated, we used network analysis as a method of representation.The initial idea is a simple one: the objectives of the syllabi are the nodes of an interrelated network.The matrices with tags and universities were created using UCINet (Matrix Editor).Finally, using the graphic representation software (NetDraw), we loaded the various matrices built to obtain a graph of the relationships between the tags, taking into account their frequency (size of the nodes), their position in the network, and the relationships between them in terms of the different universities where they were taught.In Figure 2, below, the tagging of the various objectives of the syllabi is depicted in terms of their inter-relationships.
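As an illustration of the network representation described above, the sketch below rebuilds a tiny version of the objective-university graph in Python with networkx rather than UCINet/NetDraw; the (tag, university) pairs are invented, and only the idea that a tag's degree corresponds to its frequency across institutions is taken from the text.

```python
# Minimal sketch (not the authors' workflow): a bipartite graph of syllabus
# objective tags and universities, where a tag's degree (number of linked
# universities) plays the role of node size in the published figure.
# The pairs below are invented for illustration.
import networkx as nx

pairs = [
    ("CAT_USE", "University A"), ("CAT_USE", "University B"),
    ("TERM_DB", "University A"), ("PRJ_MNGMT", "University C"),
    ("TT_KNWL", "University B"), ("TT_KNWL", "University C"),
]
tags = {tag for tag, _ in pairs}

G = nx.Graph()
G.add_edges_from(pairs)

tag_frequency = {tag: G.degree(tag) for tag in tags}
print(sorted(tag_frequency.items(), key=lambda kv: -kv[1]))
# e.g. [('CAT_USE', 2), ('TT_KNWL', 2), ('PRJ_MNGMT', 1), ('TERM_DB', 1)]
```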
The relationships between the nodes in Figure 2 describe the relationships between learning goals and universities. When a goal is repeated in several institutions, the size or degree of the goal is greater (the more relationships, the larger the node). The size of each node thus reflects the frequency of use of that goal in the data, with the most frequent goals at the center and the least frequent on the periphery. Thus, the goals relating to "Project Management" (PRJ_MNGMT), "Terminology Database Knowledge" (TERM_DB), "Using Information Technologies" (IT_USE), "Using Computer-Assisted Technologies" (CAT_USE), and "Translation Technologies Knowledge" (TT_KNWL) are the most predominant or frequent ones. The only university that specifically mentioned an ethical aspect in its course description was Middlebury University, in which one of its objectives was expressed as follows: "Best practices for training and using a customized statistical machine translation engine". However, it is unclear if "best practices" includes ethical or professional conduct practices, or whether this merely refers to the technological skills involved in the training and use of these technologies. In what follows, we will address ethics in translation studies and, more specifically, in the development of professional technological competence.
A Classification of Translation Technology Ethics
The issues that technologies pose are extensive, often complementary, yet also sometimes partially contradictory. For instance, the extraction of data from low-resource and crisis-relevant languages without permission poses ethical issues (ethics of data), while, at the same time, the lack of data in these languages and the difficulty in extracting data from them make access to technology, training and infrastructure in those languages more challenging (ethics of justice). Following Bowker's work on categorizing these issues into areas or topics in the Special Issue of Translation Spaces, as well as Chesterman's classification of ethical models in translation, we present here a proposal that seeks to group current ethical concerns relating to the use of TT. Our proposal includes six areas or models of ethics that arise, or come into greater focus, as the result of the implementation of TT.
First, ethics of data cover issues relating to everything that occurs during the handling of data. This includes issues linked to intellectual property and copyright, as well as the protection of translation data from modification and other forms of violation which might have an impact on the quality of work performed and, consequently, on the translator's reputation (Mitchell-Schuitevoerder 2020, pp. 113 ff.). This, in turn, is related to the confidentiality of the information contained in translated contents, especially when these are shared in Translation Memory Exchange (TMX) format and then reused in MT, as well as issues of confidentiality, ownership, copyright, authoring rights, associated legal matters, costs, trust, and reliability (Drugan and Babych 2010). Liability is also an important aspect in terms of issues related to breaches of confidentiality. Paullada (2020) and Moorkens (2020, p. 28) advocate moving away from the kind of data extraction that is necessary to build most MT systems and considering long-term value, including all stakeholders. On the other hand, O'Mathúna et al. (2019) discuss translation in crisis situations and raise issues concerning the support of low-resource and crisis-relevant languages, even if they lack commercial viability, so as to ensure linguistic diversity. Finally, issues about data dispossession and misuse, as well as surveillance capitalism, are also possible problem areas.
The ethics of professional value are addressed in detail by Do Carmo (2020) and Moorkens (2020). The latter contends that the "Taylorization" of the translation industry, as well as other technology-mediated practices such as crowdsourcing and fansub, are affecting the inherent value of the translation profession, as well as the pricing of services and fair compensation of professionals. These practices can, in turn, result in the risk of exploitation of well-intentioned volunteers, especially if they are not familiar with what translation entails or the full scope of a project (O'Hagan 2022, p. 432).
The ethics of sustainability, following Kenny et al. (2020, p. 5), address concerns both about the sustainability of current and new work systems and their impact on translator training, as well as concerns relating to the environment and how an uncontrolled explosion in the volume of translation can lead to more energy consumption and therefore have an impact on the environment. In relation to work practices, Nurminen and Koponen (2020) claim a space for all stakeholders, and especially translators and linguists, in the development of technological progress. As for environmental concerns, one of the main voices here is Cronin (2017, 2020), who argues for more ecological approaches to translation.
The ethics of representation include questions directly associated with the target text in relation to the source text, with issues that cover quality, acceptability, as well as bias in the translation output due to the data technology used to generate translation models.
The ethics of justice deal with inequalities regarding access to technology, technological infrastructure, and training. This dimension is especially important for those communities and stakeholders in underprivileged positions, such as migrants or communities affected by crises, who might not have the means to purchase or develop their own technological resources for translation.
Finally, the ethics of the market cover more general economic issues relating to the motivations behind the development of TT, ranging from the financing of systems and the interests of stakeholders to problematic narratives of selling TT as the final solution to the problem of all communication barriers, with possible neoliberal intentions of the homogenization of languages and cultures and the opening up of new markets. When we analyze these narratives, we realize there is a reason for linguistic diversity, independence, and self-determination, and that the corollary of "No Language Left Behind" could indeed mean "No Market Left Behind".
Figure 3 below summarizes the six areas of ethics outlined in the preceding paragraphs.
An Integrated Proposal to Develop Ethical Competence in the Translation Technology Classroom
The teaching of TT gained momentum in the 1990s and 2000s. At that point, it was mainly framed in terms of instrumental skills and procedural knowledge, with the focus "largely placed on describing tool function and design and on creating 'how to' guides" (Bowker 2020, p. 262). Practitioners and scholars felt less threatened because, as Kenny (2011) notes, "I could live with a technology that I understood, whose developers I could talk to, and whose limitations were well known. I could live with a technology that co-existed with human translation without really rivalling human translation". Another example of this idealism is seen in the work of Hutchins (1986, p. 15), who observes that many researchers had been "motivated by idealism: the promotion of international cooperation and peace, the removal of language barriers, the transmission of technical, agricultural and medical information to the poor and developing countries of the world." Melby and Warner (1995, p. 13) referred to this as "the fascination of machine translation", and Kenny (2011) reflects on the enthusiasm that was felt at the time:
I had, after all, completed a postgraduate degree in machine translation (admittedly in the early 1990s) and had been happily teaching budding translators about the technology for many years. I enjoyed participating in a community in which machine translation researchers and translation scholars could understand each other and work together, and I appreciated that machine translation could act as a test bed for lexical and grammatical formalisms that attempted to capture the complexity of natural languages.
Many things have changed since then. After some ups and downs since early experiments in the field, we have observed the rampant development of new MT technologies, with the latest advances including the use of artificial intelligence and neural networks, leading to unprecedented quality in output texts, and with MT thus becoming, for the first time, a genuine rival to human translation. Hence, we are now at a new "Peak of Inflated Expectations" of the Gartner Hype Cycle (Gartner 2021; Bern 2022), with technologies such as advanced AI language models, deep learning and neural machine translation being discussed in the mainstream press, such as the recently released application ChatGPT (Castillo-Gonzalez 2022). Some stakeholders in the industry even argue that humans will eventually disappear from the loop (Van der Meer 2022). Further, technologies have also enabled non-professional practices such as crowdsourcing and fansub (Jiménez-Crespo 2017) and disrupted more traditional technology-resistant realms such as literary translation (Kenny and Winters 2020) and interpreting (Braun 2019), intensifying the feeling of an approaching threat.
In a race to reflect these trends in the TT course syllabi, most courses now include teaching MT and post-editing as part of the new workflow for most current and future translators (Rodríguez de Céspedes 2019). As Canfora and Ottmann (2020) argue, however, it might well be that such perspectives and attitudes reflect inflated expectations that in turn lead to a certain blindness regarding the possibilities of technology and, more importantly, its effect on human activity.
Therefore, while it becomes clear that it is essential to educate students so that they gain awareness and knowledge of current technological advances, it is also becoming more important than ever to reflect on and teach ethical and critical thinking in relation to technology among current and future professionals. Authors such as Moorkens (2020, p. 27) highlight this need and the responsibility of trainers to introduce business ethics in particular, as current students will become future professional translators, project managers, technology experts, and localizers, that is, the ones who will devise and implement translation workflows and data-harvesting practices that will shape the future of the sector.
Including ethics in the TT class in this sense is essential; yet it is often an after-the-fact measure. Therefore, it is important to stress that ethical considerations should ideally be focused on at the beginning of the development process, that is, when software is designed, since "The social, legal, and ethical considerations of technology should not be something we consider after the fact" (Derrow 2022). This is in line with the notion that all stakeholders should be included in the development process, as reflected in the views of Bender et al. (2021, p. 619) when they observe that "Work on synthetic human behavior is a bright line in ethical AI development, where downstream effects need to be understood and modeled in order to block foreseeable harm to society and different social groups". However, this is not always the case, due to the great speed at which technology evolves, and we are thus often left in the position of having to consider ethical questions once new technology has been developed and is already widely used.
With regard to the inclusion of ethics in the training of translators, perhaps one of the most pertinent contributions is that of Joanna Drugan, in the form of an intervention on ethics training for interpreters and social workers in the UK (Drugan 2017) and, most interestingly for us, a proposal for an "integrated, inter-disciplinary approach to bringing ethics into translator training" (Drugan and Megone 2011). This proposal aims to frame ethics training as an interdisciplinary endeavor, one in which an ethicist and a translation studies scholar work together, involving the integration of ethics into already-existing translation courses. The main pedagogical approach is the use of case studies, which allow learners to study real-life problems and take into account a variety of perspectives in offering responses and solutions. Concrete proposals for the inclusion of ethics in the TT classroom are also developed by Mitchell-Schuitevoerder (2020) and the research group MultiTrainMT. The former dedicates a whole chapter to digital ethics and risk management in her recent handbook A Project-Based Approach to Translation Technology. She focuses on intellectual property rights, confidentiality, collaborative translation, non-disclosure agreements and liability, and also highlights the disadvantageous position in which translators often find themselves in relation to both the client and the language service provider. The handbook also contains a number of activities. The MultiTrainMT project (https://www.multitrainmt.eu/, accessed on 22 April 2022) has created training materials for both multilingual citizens and translators, including a course and a book, specific activities, and an online platform to learn neural machine translation (MutNMT). The book includes the chapter cited above by Moorkens (2022), "Ethics and Machine Translation", and the activities section comprises 34 activities, including quizzes, topics for analysis and discussion, essay prompts, fill-in-the-blanks exercises, and games.
For our proposal, we have drawn particularly on Bloom's taxonomy, which dates back to 1956 and was updated in 2001. It is a multi-tiered model that classifies thinking into six levels of cognitive complexity. The classic taxonomy includes three lower levels: knowledge, comprehension, and application; and three higher levels: analysis, synthesis, and evaluation. The revised taxonomy includes terminological and structural changes to best reflect the challenges students and teachers face in the 21st century. For example, nouns in English can evolve into verbs, and the new taxonomy tries to embrace such changes through the inclusion of two dimensions. These are the knowledge dimension: factual, conceptual, procedural, and meta-cognitive knowledge; and the cognitive dimension: remember, understand, apply, analyze, evaluate, create (Anderson and Krathwohl 2001, pp. 28-29). We have also been inspired by Drugan and Megone (2011, p. 188), who claim that the goal of ethics in translator training should be "to develop good judgment", as well as by the work of Mitchell-Schuitevoerder and MultiTrainMT.
Our proposal adapts Bloom's taxonomy and contains four levels: (1) understand; (2) apply; (3) evaluate; and (4) create (Figure 4). The aim, then, is for students to be able to understand general ethical issues as they relate to TT, to identify and analyze these as they arise in specific contexts, and to justify and develop an effective response. Further, we add the level "create" so that students are encouraged to develop new materials and devise new approaches to address ethical issues, from rewriting professional codes of ethics to cater for the specifics of TT to engaging in activism and resistance towards creating a more "humane technology", as well as to explore new approaches as a means of solving current issues, such as the use of blockchain, a technology based on shared, immutable ledgers that can publicly track all transactions, towards addressing issues of privacy, ownership, and liability.
Based on this model, we propose the following taxonomy, adapted from the work of Anderson and Krathwohl (2001). Once the basic notions are established, our focus is largely on creating conceptual and procedural knowledge so that students develop a series of strategies on how to approach and respond to the ethical issues they will face in the future (see Table 3).
Therefore, our new competence model for the TT classroom, based on the results of our syllabi analysis, is as shown in Figure 5 below.
First, a general IT competence would be the basis on which to start building a specific sub-competence, one which would include factual, conceptual, and procedural knowledge about different tools, from translation memories to terminology, corpora, MT, project management, social media, etc. (cf. Vargas-Sierra 2020). This would be complemented by a higher level of procedural and meta-cognitive knowledge on project management, so as to be able to integrate these tools into real processes. Finally, this ethical sub-competence would complete and complement all other sub-competences by providing a critical perspective on every step of the learning process, as well as providing the knowledge and instruments needed to tackle possible ethical conflicts arising from the use of technologies, and to offer innovative solutions here.
Conclusions
The practice of translation today is inextricably linked to technology. Every step of the process, from reception of the text to final delivery, is shaped and facilitated by applications, telecommunications, and digital formats. This is reflected in how translator training is conceptualized, where technologies are introduced in specific courses on translation technology but are also often used as part of other translation classes. Whereas no one denies the need to train students to use and apply these technologies as a means of becoming competitive in the translation marketplace, more and more authors, both from the academic world and from the industry itself (Joscelyne 2022), are voicing their concerns about the ethical issues posed by the use of technology and, more recently, artificial intelligence systems. Having analyzed these issues and confirmed that they are not specifically addressed in TT classes, or at least are not reflected in the syllabi we assessed, we have argued that a space should be opened up in the TT classroom for consideration of ethical competence. Specifically, we have proposed a model to classify those ethical issues that are specifically related to the use of TT. Although it is becoming more and more difficult to differentiate technologically derived issues from a broader, general ethics of translation, especially considering that translation is now so highly immersed in technology, we argue that these issues, as identified in the literature, relate most closely to the deployment of certain technologies in the translation process. Further, we have introduced a model, based on Bloom's taxonomy and the approaches by Drugan and Megone (2011) and Mitchell-Schuitevoerder (2020), that seeks to develop ethical competence in the TT classroom.
Future research here will include a survey of universities and translator training centers to compare our results on the presence or absence of ethics in TT courses, as well as a survey of industry stakeholders and professionals on the ethical issues they face and how these might be included in training. The goal here would be to enrich and fine-tune the proposed model and then evaluate its viability through applications in real teaching contexts. Our overall aim is to demonstrate that ethics training in the TT classroom can help develop critical thinking skills and be used as a way of exploring topics relating to professional development, economic growth, technology, and social issues.
also analyzes a series of normative competences proposed by the EMT model, those set out in CIUTI (Conférence Internationale Permanente d'Instituts Universitaires de Traducteurs et Interprètes), those established by the TransCert project, and those included in the ISO 17100:2015 standard.
Figure 1. TWITT model. As shown in Figure 1, the model is structured into five phases or steps. These are: (1) introducing the working scenario and motivating the team; (2) socialisation; (3) information exchange; (4) knowledge construction and development; and (5) completion. In each phase, knowledge and technology management competences (left side) and translation skills within the collaborative learning environment (right side) are addressed. A recent model that explores the development of technological competence is proposed by He and Tao (2022). Their translation technological thinking competence (TTTC) model comprises four levels: translation technological awareness, translation technological learning, translation technological application and sharing, and translation technological evaluation and creation (p. 352). This model shows the results of implementing technological competence through the knowing-acting translation curriculum (KATC), an approach based on the unity of knowing and acting philosophy of Wang Yangming. Such knowing, it is claimed, emerges through action (p. 553). As we have seen in the above review, translation competence models attempt to encompass the overall translation process, and technologies are just one component of that process. However, it is our belief, and that of other authors (see, for example, Pym 2013; Krajcso 2018; Rodríguez de Céspedes 2019; He and Tao 2022), that this sub-competence should be more prominent and further developed in the context of current realities. Despite the increasing complexity of the translation workstation due to the incorporation of multiple technological innovations and the central character that technological competence has assumed in the present digital climate, research on specific models or approaches devoted exclusively to developing technological competence is still scarce, and more research is needed to understand how a teaching-learning environment should be shaped to respond to real-life professional needs.
Figure 3. Ethical issues with translation technologies.
Figure 4. Learning taxonomy for ethical issues in Translation Technology courses.
Figure 5. Sub-competences in the Translation Technology Classroom.
Table 2. Sample processing and tagging learning goals.
Table 3. Taxonomy with learning goals and knowledge and cognitive dimension. | 2023-03-26T15:04:28.711Z | 2023-03-24T00:00:00.000 | {
"year": 2023,
"sha1": "dae8d96a6a889916d4fc64125624e30c9be451af",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2226-471X/8/2/93/pdf?version=1680069801",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "f57a49c82c7259a78e4ffae762f27954c1e4a621",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": []
} |
265559969 | pes2o/s2orc | v3-fos-license | Effect of Family Support on Psychological Disorders in Pregnant Women in Pulo Lor Village, Pulo Lor District
Psychological health disorders during pregnancy can lead to poor pregnancy outcomes in the form of complications, including an increased risk of preterm labor, delays in the delivery process, low birth weight, hypertension, and impaired fetal neurodevelopment and growth. The risk of these complications can be reduced by improving the factors that affect the psychological health of pregnant women, including interpersonal relationships, family roles, and social support. The purpose of this study was to determine the effect of family support on the psychological health status of pregnant women, using an unpaired categorical analytic design with more than two groups and univariate, bivariate, and multivariate data analysis. The research, carried out on pregnant women in Jombang Regency, found that family support can reduce the risk of psychological disorders in the form of anxiety during pregnancy.
Introduction
Pregnancy is a natural condition that brings both physiological and psychological changes; if the mother cannot adapt to these changes properly, they can lead to various complaints or even pregnancy complications when not managed well (Baharvand P, Anbari K, 2022). When psychological adaptation during pregnancy does not go well, various psychological disturbances can arise; one that pregnant women tend to experience is anxiety, which can in turn lead to problems such as insomnia, stress, depression, and even post-traumatic stress disorder (Bedaso A, Adams J, Peng W, 2021). This study aims to determine the effect of family support on psychological disorders in pregnant women (Bhushan NL, Krupp K, Jaykrishna P, Ravi K, Khan A, Shidhaye R, Kiplagat S, Srinivas V, 2020).
Psychological health disorders during pregnancy can result in poor pregnancy outcomes, including an increased risk of preterm labor and even miscarriage, a slower delivery process, low birth weight, hypertension in pregnancy, obesity in childhood, and disorders of fetal growth and neurodevelopment (Dewi, 2021; Upadhyaya S, 2023). Many of these complications for the mother and fetus can be avoided by addressing the factors that contribute to psychological disorders in pregnant women, including interpersonal relationships, family roles, antepartum stress, social support, self-confidence, mastery of fear, doubt, and depression (Romauli, 2011).
Based on the considerations above, the researchers were interested in examining the effect of family support on the psychological disorders of pregnant women in Pulo Lor Village, Pulo Lor District, Jombang Regency. Maternal anxiety levels in the UK at the end of the pandemic showed a visible decline, with depression rates following a similar pattern, as health information and health coverage became more accessible, including through social media. A husband's support was not the main factor during the COVID-19 pandemic among the efforts that could be made to reduce anxiety and stress levels. Pregnant women show increased depression, anxiety, and other negative effects compared with women who are not pregnant, so health services are needed to optimize perinatal health care.
Research Method
This research is an unpaired categorical analytic study (Dahlan, 2010) which examines the effect of family support on the incidence of psychological disorders during pregnancy (Standeven LR et al, 2022). The research began by identifying the risk factors for psychological disorders during pregnancy, in the form of the family support the mother received during pregnancy, and then carried out tests to identify the types of psychological disorders currently experienced by the pregnant women (Oktalia Juli et al, 2016). The population of this study was all pregnant women in Pulo Lor Village, Jombang. The sampling technique used total random sampling, so the research sample comprised all pregnant women who were willing to take part in the study by filling out the questionnaires that had been distributed (Yoon SH, 2021). The instruments used in this study were a family support questionnaire to identify the forms of family support that mothers receive during pregnancy and the GAD-7 Scale questionnaire to identify the level of psychological disorder, in the form of anxiety, in pregnant women (Darlington CK, Compton PA, Teitelman AM, 2021). The implementation of this study took into account the 6 guidelines set by the American Nurses Association (ANA) (Huang J, Xu L, Xu Z, Luo Y, Liao B, Li Y, 2022).
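As a rough illustration of how GAD-7 responses translate into an anxiety level, the sketch below applies the conventional published cut-offs of 5, 10, and 15 for mild, moderate, and severe anxiety; both the code and these cut-offs are assumptions for illustration and are not taken from the study's own scoring procedure.

def gad7_level(item_scores):
    # GAD-7 has 7 items, each scored 0-3, giving a total between 0 and 21.
    if len(item_scores) != 7 or any(s not in (0, 1, 2, 3) for s in item_scores):
        raise ValueError("GAD-7 expects exactly 7 items scored 0-3")
    total = sum(item_scores)
    if total >= 15:
        return total, "severe anxiety"
    if total >= 10:
        return total, "moderate anxiety"
    if total >= 5:
        return total, "mild anxiety"
    return total, "minimal anxiety"

# Example: a respondent scoring 2,1,2,1,1,2,1 totals 10 -> moderate anxiety.
print(gad7_level([2, 1, 2, 1, 1, 2, 1]))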
Result/Findings
This research was conducted in Jombang Regency with pregnant women as respondents; the researchers distributed questionnaires online using Google Forms to village midwives in the Pulo Lor Health Center area from March to July 2022, to be passed on to pregnant women in their respective work areas. The research data collected from 47 pregnant women were then analyzed univariately and grouped by family support group and anxiety level (Hawke et al., 2020). The data from the univariate analysis were then analyzed bivariately using the Kruskal-Wallis test, because the data were not normally distributed and there were 4 unpaired groups; the result was an asymptotic significance value of 0.00 < 0.05, so it could be concluded that there was a significant difference between groups, that is, the hypothesis that family support affects the anxiety level of pregnant women was accepted (Tri, 2022). The results of the bivariate analysis were followed by multivariate analysis, and it was found that the group of pregnant women who received family support on a scale of 4 obtained significant results with a value of 0.00 (Noonan M, Jomeen J, 2021).
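The sketch below shows how a Kruskal-Wallis comparison of this kind can be run; it is illustrative only, using SciPy and made-up anxiety scores for the four family-support groups rather than the study's data from the 47 respondents.

from scipy import stats

# Illustrative GAD-7 anxiety scores grouped by family-support scale (1-4);
# these values are placeholders, not the study's data.
support_1 = [14, 12, 15, 11, 13]
support_2 = [10, 9, 12, 11, 8]
support_3 = [7, 6, 9, 5, 8]
support_4 = [3, 2, 4, 1, 3]

# Kruskal-Wallis H test for more than two unpaired groups with non-normal data.
h_stat, p_value = stats.kruskal(support_1, support_2, support_3, support_4)
print(f"H = {h_stat:.2f}, p = {p_value:.4f}")
# A p-value below 0.05 indicates that at least one support group differs in
# anxiety scores, which is the pattern the study reports (asymptotic sig. 0.00).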
The results of this study are in line with those of research conducted by Ike (2021), namely that there is a significant relationship between family support and the anxiety level of pregnant women. Listia Dwi Febriati (2021) also reported a significant relationship between family support and changes in the psychological adaptation of pregnant women. Agustin (2021) likewise found two factors significantly related to the anxiety of pregnant women, namely emotional support and instrumental support, and Tri (2022) confirmed the consistency of the present results with her finding that pregnant women who receive high family support are less likely to experience anxiety (Kondou A, Yasui T, 2021).
Discussion
Anxiety is a condition that arises from changes and new experiences; it is characterized by feelings of fear that have no clear cause and are not warranted by the actual situation, and it can be triggered by various factors. Emotional changes that occur during pregnancy because of hormonal activity can also disturb the emotional stability of pregnant women, and this can affect the psychological health of the mother and the welfare of the fetus. Efforts are needed to reduce psychological disorders in pregnant women, namely by providing support from the family, such as husbands, children, and parents, and by staying in contact with health workers, for example through telemedicine or drive-through services, so that the mother's anxiety is reduced because she receives counseling from health workers.
Conclusion
This study, which aimed to identify the effect of family support on the psychological health status of pregnant women, found that family support reduces the risk of psychological disorders in the form of anxiety during pregnancy (Prameswari Yudistia et al., 2019). We suggest that future researchers carry out further studies on the forms of family support that reduce the anxiety of pregnant women, with a larger number of respondents, so that the results can be generalized more widely (Lailatul L, 2016).
Review for Describing Anxiety or Psychological Problems of Pregnant Women | 2023-12-04T16:55:00.365Z | 2023-12-01T00:00:00.000 | {
"year": 2023,
"sha1": "62620e41dbbb25343c1e34cb63bebedee2857837",
"oa_license": "CCBYSA",
"oa_url": "https://risetpress.com/index.php/ijmars/article/download/413/317",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "2384fd3fe14a9e2fefb4d9db0fa8c9dbb1641fea",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": []
} |